In a recent issue of Philosophy Now, Christian philosopher Grant Bartley argues “Why Physicalism is Wrong.” In it he exemplifies why it is actually the critics of physicalism who are wrong: Bartley commits basic fallacies in understanding the issue, fallacies that are in fact common, especially among Christians. Here’s why Bartley is wrong. And why it matters…

What Is Mind-Brain Physicalism?

Mind-brain physicalism is the theory that “states and processes of the mind are identical to states and processes of the brain.” Without remainder. Meaning, once you have all the physical parts in place, and set them in motion, every phenomenon of mind is produced. No “extra stuff” has to be added to make it work. In Sense and Goodness without God I give several reasons why this theory, though not yet proven scientifically, is almost certainly correct (pp. 135-60). Since then, good defenses of it have been published by Melnyk and Churchland. And even some Christians now are starting to concede the point.

One of the most famous and popular ways to argue over this is a thought experiment about zombies. Not flesh-eating walking corpses. But the conceptual possibility of a person who has all the working parts of a brain identical to yours and who behaves in every way identically to you—yet experiences no phenomena of consciousness. They experience nothing. If such a person is logically possible, then what we call qualia (the peculiar quality of “what it is like” to experience things, what the color red “looks like” being the common example) cannot be explained by physics. Rather, some “extra thing” must exist or operate, to let us experience things, and thus “be conscious” in the sense we commonly mean (rather than merely act as if we were conscious).

Christians of course want this “extra thing” to be the soul—combined with the created laws of God (“thou shalt experience a color when a certain bundle of photons agitates your eyeball”). But those guesses are explanatorily useless (they predict nothing and are wholly untestable), probably incoherent (it’s not clear how either souls or gods actually solve the problem of explaining why qualia exist and manifest only in certain ways), and contrary to precedent (everything about the mind that we once thought couldn’t be physical, but have since been able to test, has always turned out to be physical). I dismantled the Argument from Qualia (“Qualia; therefore God”) in The End of Christianity. And I covered that already in my Reply to Plantinga. So I won’t bother with it now. Here I’m only concerned with the competing theories of mind: physicalism vs. ensoulment (or some other variety of explanatory “dualism”). Not with whether any of this argues for or against God (though really, the evidence argues against God…once we put back in all the evidence that Christians leave out).

But there’s a kink in thought experiments. Because they are conceptual in result, they must be conceptually consistent. You are failing to conduct a thought experiment correctly if you don’t do what the experiment actually tells you to do. Searle’s infamous Chinese Room is an example of a philosopher failing to conduct the actual experiment he himself described, and thereby getting a completely bogus result out of it. Pro-tip: the man in the room is only analogous to the circulatory system…and that circulatory systems aren’t conscious, is not a revelation—whereas how we must conceive of the book in the room to meet Searle’s own terms, ends up making the book conscious, proving nothing about consciousness…other than that books can be conscious! (See my discussion of Searle’s fatal mistakes here in Sense and Goodness without God, pp. 139-44.) Another is Mary’s Room, in which the usual mistake is to forget that if Mary has all propositional knowledge, then she already has a complete set of instructions for how to install and activate whatever neurons in her brain are required for her to experience any color she wants. The thought experiment, as usually carried out—incorrectly—confuses process with description, and cognitive with noncognitive knowledge (again see Sense and Goodness, pp. 33, 179, etc.). Not all knowledge is propositional. That does not mean non-propositional knowledge can’t be reductively physical.

Philosophers will make the same mistakes with the zombies thing.

As I wrote in my Reply to Plantinga:

This is similar to why philosophical zombies are logically impossible. To be one, a person must be neurophysically identical to a nonzombie, yet not experience anything when thinking and perceiving (they see no “color red” and hear no voice when asked a question and so on), and yet always behave in exactly the same way. Those three conditions cannot logically cohere. Ever. For example, if you ask the zombie to describe the qualia of its experience (“Do you see the color red? What does it look like? Do you hear my voice? What does my voice sound like?”), it either has to behave differently (by reporting that it doesn’t), or it has to lie (by claiming it does, when in fact it doesn’t), which is also behaving differently, but more importantly, entails a different neurophysical activity: because the deception-centers of the brain have to be activated (and that will be observable on a brain-scan of suitable resolution); but also, their brain has to be structured to be a liar in that circumstance, which will physically differ from a person whose brain is structured to tell the truth when asked the same questions (and those structural differences will be physically observable to anyone with instruments of sufficient precision). To which one might say, “Well, maybe the zombie will lie and not know it’s lying.” Right. And how do you know that is not exactly what you are doing? If you genuinely (yet falsely) believe you are seeing the color red, how is that any different from just actually seeing the color red? In the end, there is no difference between you and your philosophical zombie counterpart […].

This point was illustrated by one of the most important papers yet written on the subject, “Sniffing the Camembert: On the Conceivability of Zombies” by Allin Cottrell, published in the Journal of Consciousness Studies 6.1 (1999): 4-12. He forces the reader to actually conduct the experiment. And when you really do, taking into account everything you must in order to meet the actual terms of the experiment, the answer seems to be that zombies are impossible, and thus no evidence against physicalism. Qualia appear to be an unavoidable and inalienable product of a certain type of information processing. You can’t make a machine that behaves consciously (and thus is capable of all the remarkable things consciousness allows an animal to do), that doesn’t qualitatively experience what it is processing. The very notion is incoherent. “My hand is in pain and I feel nothing” is simply not an intelligible sentence.

The significance is clear. Apart from the whole gods and worldviews thing—can physics explain everything, or do we need the supernatural?—it matters simply in respect to the scientific understanding of ourselves, of other animals, and of the general AI we will inevitably create. Psychic powers? Telepathy? Reincarnation? Life after death? You’d better have a physical model that we can test. Otherwise, nope. And it matters in respect to the future virtual worlds we will inevitably be able to live in—what colors can we program ourselves to see then, and what emotions can we program ourselves to feel? And how will we program that? What qualia can we then enjoy, that were impossible in our present brains, and why? And it does matter for deciding what research we should be aiming at to solve the scientific question (one of the last great questions science has to answer) of why consciousness exists, and why it has the specific properties it does, instead of others.

Why, after all, does red look “red”? Why do we “see” red instead of taste it? Why do we smell cinnamon instead of hear it? Why does cinnamon smell like cinnamon and not like fish? We already know some things about this. For example, for some people, we know red doesn’t look red. It looks green. And they don’t know the difference. They are qualia inverted: people with genes for both versions of color blindness (a statistical inevitability) will have their red cones wired to their green circuits, and vice versa (see Martine Nida-Rümelin, “Pseudonormal Vision: An Actual Case of Qualia Inversion?” in Philosophical Studies 82.2 (May 1996): 145-57). But because they will only ever have heard us call green things red, they don’t know they are actually experiencing a different color than we are when we both say we are seeing “red.” We also know lots of people have differently wired qualia responses (seeing sounds, hearing colors, tasting shapes, and so on). It’s called synesthesia. And of course animals have sensory systems, and sensory ranges, that we don’t—they must experience qualia wholly alien to us. So could we. If we were physically wired differently.

Why Haven’t We Solved This Yet?

But if physicalism is true, shouldn’t science have proved it by now?

No. That we haven’t done that, is not because physicalism is false. It’s because we don’t have the means to get there yet. In short, the evidence that we haven’t gotten there yet, is 100% expected on both theories: that physicalism is true; and that physicalism is false. It’s therefore not evidence of either.

What we need to answer these questions is better instruments. Just as we couldn’t learn of the Big Bang without better instruments allowing us to see more detail in the cosmos farther out and in more ways (e.g. spectrum analysis; radio telescopes), we can’t really understand consciousness without instruments capable of resolving brain activity at the nearly atomic scale. Active brain scans (like functional MRI) have nowhere near the required resolution. They can’t even see neurons, much less observe the electrical activity across specific synapses, even less observe any chemical activity involved in the processing—for example, to effect memory, do neurons add methyl groups to their nuclear DNA, causing different computational physics in the neuron? Needless to say, we are nowhere near being able to see even the physical synaptic structure of whole brains, much less know what the input and output signals are in every neuron or neural circuit, even less what physical structures compute the output from that input. Our brains aren’t digital electric computers. They are chemo-electric. They operate on analog principles, and combine chemical computation along with electrical signaling. Brains are therefore not Turing machines; although a Turing machine should be able to replicate the same information process, if we ever figure out what it is (Searle’s attempt to disprove this with his Chinese Room was a fallacious flop).

More likely we’ll get there first through AI. Which will be built in a completely different way from human brains. But we will be able to analyze every component of its processing and thus explain what specific processes generate what specific qualia. Because we will be able to configure its circuits however we want, and then ask it what it experiences (BTW, I hope we do this ethically by actually asking its permission and ensuring the experiments aren’t a torment; because such computers will be people in every moral sense of the term, so we should treat these AI the same way we now do all human test subjects in the sciences). Because building AI is, frankly, easier to conceive than inventing a scanning instrument capable of harmlessly observing the movement of every molecule and electron in a live human brain.

But let’s pretend for a moment we just invented that very instrument. What would we be able to do with it to start making headway on the qualia problem? First, we would be able to catalog what the physical difference is between different neural circuits and circuit networks that correlates with every distinct quale. We’ll know why one circuit makes us experience the color red, why another green; and we’ll know why one circuit makes us experience a smell instead of a color. It’s fairly certain this will be a structural difference (everything else we’ve found out about how the mind works has been, and continually so for a century now). It’s even more certain it will be a difference in information processing. In other words, one circuit will process information differently than the other, and that difference will cause a smell instead of a color, or seeing red instead of green. And it’s quite likely all smell circuits will share some structure in common, that makes them different from color circuits. We will then be able to peg what information process generates smells in general vs. colors, and then within that general difference, what variations of that information process distinguish different smells and colors from each other.

We’ll then know what information processes (what circuit structures) we could theoretically build (that aren’t in human brains) and thus explore the entire domain of all possible qualia (we’ll know if there is a finite number of colors experienceable, for example, or if the domain of possible color experience is literally boundless)—though likely we could only know what those “other” qualia are actually like by literally installing the circuits in someone’s brain and asking that subject what they then experience. Yet we will know some things about them: you could show us an alien circuit, and we could tell you it would produce a quale of smell and not a color. Or vice versa. Because we’d know what structural features smell circuits share that color circuits don’t.
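
To make that concrete, here is a purely hypothetical sketch, in Python, of the kind of catalog lookup being imagined. Every feature name in it is invented for illustration; no such catalog of circuit-to-quale correlations exists yet:

```python
# Hypothetical illustration only: an imagined catalog mapping structural
# features of a neural circuit to the *category* of quale it would generate.
# All feature names are invented; nothing like this catalog exists yet.

from dataclasses import dataclass

@dataclass
class CircuitFeatures:
    input_modality: str       # assumed marker: "chemoreceptor" or "photoreceptor"
    integration_pattern: str  # assumed marker: "combinatorial" or "parallel-opponent"

def predict_quale_category(c: CircuitFeatures) -> str:
    """Guess only the category of quale from structure, per the imagined catalog."""
    if c.input_modality == "chemoreceptor" and c.integration_pattern == "combinatorial":
        return "a smell (which smell, only running the circuit could tell you)"
    if c.input_modality == "photoreceptor" and c.integration_pattern == "parallel-opponent":
        return "a color (which color, only running the circuit could tell you)"
    return "unknown: outside the cataloged domain"

# An "alien" circuit no human brain contains:
alien = CircuitFeatures(input_modality="chemoreceptor", integration_pattern="combinatorial")
print(predict_quale_category(alien))  # a smell, not a color
```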

So you might see how we’d then be able to start building a physical theory of qualia.

Dreams of a Complete Theory?

Could that process carry all the way to individual qualia? Could we get to a point where we understand the structural causes of qualitative experience for computational circuits well enough that if you show us an alien circuit, we can not only tell you it will produce a smell and not a color, but even what specific smell? Certainly for smells we know. But what about alien smells? Possibly, but it will take a good while to get there—because we cannot transmit qualia information propositionally, other than the same way we transmit things like how to ride a bicycle. Because qualia are a process. Like riding a bicycle is. I can give you a complete set of instructions for how to ride a bicycle. Every true proposition about it that could ever exist in the cosmos even. But you will not be able to ride a bike after hearing them. You would have to follow those instructions, and thus develop the skill. Then you’d know how to ride a bike.

The process of riding a bike is not cognitive knowledge. It’s noncognitive. We can encode it in a set of instructions and send it to your brain. But that won’t cause the wires in your brain to reorganize themselves into all the kinesiological circuits needed to ride. Even a complete set of instructions “to your brain” on how to do that, won’t do that. Because your brain doesn’t know how to follow such instructions. Our brains aren’t built to process sentences that way. Maybe someday we can. Like in The Matrix, Trinity’s team could rewire the neurons in her brain at a keystroke, so she instantly has all the neural structures needed to fly a helicopter. But right now, we aren’t built that way. Language is an add-on; not fundamental to how our brains work.

But notice even in that hypothetical Matrix example, they had to rewire Trinity’s neurons. Knowing how to fly a helicopter is not a set of sentences in a language. It’s a set of circuit structures that convert sensory inputs into muscular outputs. It looks like it may be logically impossible to convert noncognitive knowledge (flying a copter; riding a bike; seeing red; smelling cinnamon) into cognitive (propositional) knowledge. Yes, we can convert it in the sense of building a complete description, leaving no information out about how to physically realize the knowledge (so no ghosts or magic or gods or souls are needed to make it work). But a description of a heart, no matter how complete, will not pump blood. You have to actually build the thing. And run it. So, too, perhaps, qualitative knowledge. Knowing what a color looks like, requires building the circuit, plugging it into your cognition unit, and running it. A complete description of that circuit can no more tell you what the experience of it will be like, than a description of a heart will pump blood. But who knows? When we are able to tell at a glance the difference between a smell circuit and a color circuit, there is no telling what else we’ll be able to infer.
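
The difference between a description and a running process is easy to show in code. A crude sketch (the “heart” below is just a stand-in string and function, invented for illustration):

```python
# A complete *description* of a process is not the process. The string below
# contains every character of a working function, yet it pumps nothing
# until you actually build and run it. Illustration only.

description = '''
def beat(volume_ml: float) -> str:
    return f"pumped {volume_ml} ml"
'''

assert isinstance(description, str)  # the description alone does no pumping

namespace: dict = {}
exec(description, namespace)         # "build the thing" from its description

print(namespace["beat"](70.0))       # pumped 70.0 ml -- only the running
                                     # process does the work
```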

The other possibility, though, does mean there is some knowledge that can never be described in any language—that it is impossible to do so. Language is therefore limited. But that is not evidence against physicalism. That language can’t pump blood, is not proof hearts have magical powers. Hearts are still nothing other than physics, particles and fields, all the way down. The same follows for qualia.

It may be that all language can ever do is communicate a reference to something already available to the recipient: you and I can agree we will mean by some set of words x, some experience we share (as in, something we each experience separately, but agree is alike); and that’s simply all language ever does. Which is why you can never describe any experience to someone that they have never themselves had (hence the entire epistemology I lay out in Sense and Goodness). Unless that experience is composed of experiences they have had, that you can then refer to, by having them assemble it in their imagination out of their own component experiences. For example, everyone has felt pain, and what it’s like to increase pain, and that different kinds of pain feel differently, and so on; therefore any pain can be “described” to someone at least in some limited sense, even pains they have not themselves yet experienced, though always some of the information is necessarily going to be lost.

This is why language can never help someone with qualia-inverted vision discover that what they think is red, is actually what you think is green. All language can do is reference what we’ve agreed is a like experience; fire trucks and stop signs are “red” simply means “stop signs are the same color as fire trucks.” The quale we each use to determine that, is not communicable. It’s only configurable. We can build a heart that will pump your blood. And we can wire your brain so you can see what we see. But that’s the only way to transmit the information to you of what it is like to be a brain experiencing that. Language just doesn’t operate that way. And even if it did, it only would, by actually rewiring your brain in the requisite way. This in no way contradicts the conclusion that all that’s going on is physics. Any more for experience (the function of a mind), than for pumping blood (the function of a heart). But maybe we can do more, and someday articulate why red looks red.

Why Bartley’s Critique Flies off the Rails

With all that understood, you can understand what’s going wrong with Bartley’s article in Philosophy Now.

I won’t bother with his completely inaccurate description of “eliminative materialism” (whose conclusions he gets totally wrong). Instead I’ll cut right to Bartley’s key mistake: he declares that “experiences must be defined as not being brain activity” because “experience content is only specifiable through properties that are distinctly different from brains and brain activity.” “Indeed,” he says, “if the mind were not distinctly different from the brain, we could never have come up with the distinct concept of ‘mind’.” Here Bartley makes the common error of confusing an object with a process, form with function. It’s a category fallacy. A mind is not a brain; a mind is what a brain does. He is acting like someone who pulled open his computer and, not finding chess pieces inside it, declared on that basis that it makes no sense to say his computer can beat him at chess. Or like someone who says that because his drive to Ohio is obviously not identical with his car, therefore magic, and not his car, drove him to Ohio. That’s just silly.

“Can it mean anything meaningful to say that the contents of democracies are physical?” Yep. And yet it’s just atoms moving around. “Can it mean anything meaningful to say that the contents of conversations are physical?” Yep. And yet it’s just waves of sound or light transferring information from one computer to another. When you account for the structure of the process, yes. It’s just physics all the way down. And yet conversations and democracies exist and are fully explained. So, too, will thoughts and experiences be. “But what does a democracy weigh?” is simply a category error. Democracy is not an object. It’s a process. Likewise, a mind is not an object. It’s a process. Bartley almost seems to understand that when he lists “physical processes” as an example of what a “physical thing” is; but it seems like he doesn’t know the difference. He writes “physical thing” and thinks “object.” Oops. No, Mr. Bartley. Wrong category of “thing” there.
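
The object/process distinction is easy to exhibit in code. A minimal sketch (every name here is invented for illustration):

```python
# Toy illustration of the category distinction: the object (a Car) versus
# the process (the drive). Opening up the object will never reveal the
# process, just as opening a computer never reveals chess pieces.

class Car:
    def __init__(self) -> None:
        self.position = "home"

def drive_to(car: Car, destination: str) -> list[str]:
    """The 'drive' exists only while this runs: an event, a sequence of
    state changes, not a component part of the Car object."""
    log = []
    for waypoint in ("highway", "state line", destination):
        car.position = waypoint
        log.append(f"now at {waypoint}")
    return log

car = Car()
trip = drive_to(car, "Ohio")
assert "drive" not in vars(car)  # no 'drive' part inside the object
print(trip)                      # yet the drive really happened: pure physics
```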

Bartley probably should have read the first paragraph on this in the Stanford Encyclopedia of Philosophy:

Idiomatically we do use ‘She has a good mind’ and ‘She has a good brain’ interchangeably but we would hardly say ‘Her mind weighs fifty ounces’. Here I take identifying mind and brain as being a matter of identifying processes and perhaps states of the mind and brain. Consider an experience of pain, or of seeing something, or of having a mental image. The identity theory of mind is to the effect that these experiences just are brain processes

Brain processes. Not the brain. In discussing this, someone said to me, but surely, “‘mind’ is typically synonymous with ‘brain’ for the physicalist” and “‘mind’ … is not a verb.” Neither “my tour of Ohio” nor “the Presidential election” is a verb. But they are also not physical objects. They are processes. Actions brought about by, and properties of, complex systems of objects. But not identical to the objects themselves. My car can drive me to Ohio, with nothing required but physics; but my car is not therefore “my drive to Ohio.”

Bartley says “you do not conceive your experience of the sounds you hear as being the same sort of thing as…the activity of brain cells responsible for generating the sound experience.” But that’s exactly what we conceive it as. Imagine saying “your drive to Ohio can’t be physical, because you do not conceive of it as being the same sort of thing as rotating gears and pounding explosions inside a metal box.” That would be a dumb argument. And also obviously false. Of course my drive to Ohio is in fact identical with rotating gears and pounding explosions inside a metal box. But for that, there would be no drive. The other particulars (like the directions in which that metal box rolled me, hence “to Ohio”) complete the equation, but are just more physical facts.

What Bartley wants to say is that experiences and neurons are “distinctly different properties” of existence. Which is true. The warmth of a stop sign is a different property than its shape or color or what’s written on it. That it is a mostly red octagon from one point of view, and nothing but a thin white line from a completely different point of view (when seen on edge), does not argue that it can’t be the same thing. Information processing in your computer can be described as just electrons moving around some wires. Or it can be described as an elaborate video game in which you are driving to Ohio in a silver corvette. Same exact thing. It’s all just a matter of from which angle, which perspective, you are looking at it. Yet it’s all just physics, all the way down. There is no godly voodoo magic that materializes your silver corvette or that moves it around a map. It’s really just those electrons and wires. Experiences are what a brain process looks like from inside the process; just as a white line is what a stop sign looks like from the side; and a silver corvette is what that electron process looks like on the display screen. That in no way means stop signs aren’t octagonal, or that video games or experiences aren’t physical processes.

Likewise when Bartley says “experiences are not properties of brains in the same sort of way that the physical properties of brains are properties of brains” he’s just begging the question. Yes, experiences don’t weigh anything or have a length and width, just as democracies and video games don’t weigh anything or have a length and width (yet are clearly physical things). But by that same reasoning, weight does not have a length or width, either; so “weight is not a property of brains in the same sort of way that the physical properties of brains are properties of brains.” But, you’d say, weight is a physical property! Well, yeah. So is thought. Oh I see what you did there. That’s what a circular argument looks like! All properties are different from other properties. That’s why we call them different properties. That doesn’t tell us anything about whether they are physical or not.

Hence when Bartley says “Mind is not just another part of the brain,” he is slipping into that same mistake again: thinking a process is an object. Mind is not “part” of a brain. It’s the operation of a brain. It’s a different kind of property than weight or length because it’s a process. But we well know processes can be physical. So its being a process is no argument here. “The substance of experience is experience” is a nonsense statement. That’s like saying “the substance of the video game is the video game, therefore video games are magical nonphysical beings.” Democracy is not a “substance.” Neither are video games…or minds. Yet democracies and video games are clearly physical systems, realized in physical media. So why can’t minds be?

To ask what form of matter “qualia” are made out of is as nonsensical as asking what form of matter “the video game” or “American democracy” or “my drive to Ohio” are made of. These are not things. They are made of stuff…the drive to Ohio is made of tarmac and metal machinery and kinetic energy…the video game is made of electrons, wires, and transistors…democracy is made of buildings, and books, and people. But there isn’t any sense in which these things are those objects. They are what those objects are doing. That’s why democracy doesn’t have a “weight.” What would you weigh? The people? Their property? The buildings? The books it’s encoded in? Even the video game has no intelligible “weight” because which transistors and electrons it consists of changes from one moment to the next, and in any event the game is not simply the sum of those parts, but their arrangement. And arrangements don’t have a weight. Nor do actions and events.

How It Actually Works

The mind is to the brain, as the output of a software program is to the microchip it runs on. Note I said the output of the program; not the program by itself. The microchip is not the program. But even the program is not the output of the program. My word processing software is not the novel I wrote with it. These are different things. Mind (experience) is the output. Not the program. Nor even the hardware the program is running on. But the program and hardware are entirely physical and are all that is needed to generate the output, which is “the experience.” You need them to get that. And you need nothing else to get that. But that is not identity. It’s causality.
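
A trivial sketch of that three-way distinction in code (this models only the analogy, not any actual mind):

```python
# Three distinct things: the hardware (the CPU running this script, which
# never appears in the code at all), the program (the function object
# below), and the output (the string the program produces).

def word_processor(keystrokes: str) -> str:
    """The *program*: a fixed set of instructions."""
    return keystrokes.title()

novel = word_processor("the drive to ohio")  # the *output*

assert novel != word_processor  # the output is not the program
print(novel)  # "The Drive To Ohio": caused by the program and hardware,
              # produced by them, but identical to neither
```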

For example, we now know we are not conscious of spans of time smaller than about a twentieth of a second. Which is why movies work: we don’t see the individual frames flicker by, one after the other, because they fly past at 24 frames per second, so we only perceive a continuous moving picture. That means if you “zoom in” to a thirtieth of a second, during that whole span of time, consciousness doesn’t exist. It only exists as an event extended over time—a time span longer than 33 milliseconds. A thing that doesn’t even exist except over a span of time? That’s a process. No process, no thought. No thought, no mind.
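
For the record, the arithmetic behind the movie example (using the figures just cited):

```python
# At 24 fps each frame lasts about 41.7 ms, under the roughly 50 ms (one
# twentieth of a second) window cited above, so successive frames fuse
# into one continuous moving picture instead of being seen individually.
frame_ms = 1000 / 24   # duration of one film frame in milliseconds
window_ms = 1000 / 20  # approximate span below which we are not conscious
print(f"{frame_ms:.1f} ms per frame vs ~{window_ms:.0f} ms awareness window")
assert frame_ms < window_ms
```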

You can have storage of a mind…when you are unconscious, the information stays stored there in the brain, but you aren’t conscious. So your mind isn’t doing anything. It’s turned off. Indeed, to pull off that trick, you need long term memory storage (one of the many things our brains do for us). But long term memory can’t even be formed to be stored, without first existing in short term memory…but short term memory is a process, not a storage system. That’s why if you take enough of a drug (like alcohol) that interferes with the ability of your brain to store a memory, you can still operate in short term memory but none of it gets recorded. Short term memory (hence experience, hence qualia, hence everything Bartley is saying a mind is) is a process, something the brain is doing, not something the brain is; it’s not a stored physical structure in the brain. Hence mind as experience is a process, not an object. Just as your car is not your drive to Ohio.
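
A toy sketch of that process/storage distinction, with the obvious caveat that real memory consolidation is enormously more complicated:

```python
# "Short-term memory" here is just the transient state of a running
# function; "long-term memory" is whatever gets written to a durable store.
# A crude illustration of process versus storage, nothing more.

long_term: list[str] = []  # durable store: persists between "experiences"

def experience(events: list[str], consolidation_works: bool) -> None:
    short_term: list[str] = []    # exists only while the process runs
    for e in events:
        short_term.append(e)      # you can still operate on it live
    if consolidation_works:       # e.g. not blocked by enough alcohol
        long_term.extend(short_term)
    # when this returns, short_term is gone: it was process state,
    # not a stored physical structure

experience(["met a friend", "heard a song"], consolidation_works=False)
print(long_term)  # [] -- the evening happened, but nothing got recorded

experience(["met a friend", "heard a song"], consolidation_works=True)
print(long_term)  # both events now consolidated
```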

The same goes all the way up the chain of abstraction. Social constructions, for example (like what words mean, what things to assume, what standards are applied) are analogous to the “operating system” on your computer. That can be actually present in a culture, or just potentially waiting to be, e.g. as when encoded in a book, in which case it’s atomically there in the patterns of ink on paper; but then the meaning of the patterns has to be socially extant somewhere or else it’s a dead and indecipherable language like Linear A. But when actually present in a culture, the social construct exists atomically as arrangements of interconnected neurons in brains, in the same way iOS exists in multiple iPhones—only there, instead of neurons, it’s electrons and transistor gates. We call this a social construct when the same pattern is shared across brains comprising a given culture. And indeed that’s how we define and distinguish one culture from another. Otherwise it’s an individual construct—or a group construct, though that starts to look like a sub-culture (and indeed, when we call it a full-blown culture is kind of arbitrary, or pragmatically determined, like how we decide to name a hill a mountain).

Though of course it’s messier for humans than for iPhones, because cultures overlap, or even nest within other cultures, and cultures continually change and evolve, and represent in a society along a bell curve of intensities across individuals, the same way genes do. And so on. But otherwise the analogy holds. The pattern of neurons in a brain entails an activation sequence, a circuit. Every time a certain idea is thought about, the same or sufficiently similar outputs are generated in every brain that thinks about it. The output will be further ideas or even behaviors (and indeed, thinking is just a category of behavior). Just like pattern recognition software, and decision software. It can all be described in terms of nothing more than a physical causal chain of events—just like in a computer, or a system of computers (such as “the internet”). All without ever mentioning anything more abstract. We create the abstraction, only to make thought and communication more efficient.
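
A minimal sketch of that last point: a “circuit” reduced to a bare input-output mapping, and a shared construct as the same pattern instantiated in more than one “brain.” All of it is invented for illustration (real circuits are analog and chemo-electric):

```python
# A "circuit" as nothing but an input-output mapping; a shared pattern
# instantiated in two physically distinct "brains" yields the same outputs.

from typing import Callable

def make_circuit(weights: tuple[float, ...],
                 threshold: float) -> Callable[[tuple[float, ...]], int]:
    """The 'pattern' is just (weights, threshold); this realizes it."""
    def fire(inputs: tuple[float, ...]) -> int:
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

pattern = ((0.6, 0.6), 1.0)       # the shared arrangement
brain_a = make_circuit(*pattern)  # same pattern, instantiated twice,
brain_b = make_circuit(*pattern)  # in two different "brains"

stimulus = (1.0, 1.0)
assert brain_a(stimulus) == brain_b(stimulus) == 1  # same input-output profile
```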

Hence “social construct” is a useful code for a massive complex of stuff. But it’s really just a massive complex of stuff. All physical. And we can know this because of two converging reasons: we observe that if we remove the physical components, then the social construct vanishes; and we observe that nothing needs to be added to the physical system, to explain the social system that results. No extra “stuff” has to be added to neural circuits, to get a neural circuit to cause certain outputs to arise from certain inputs, or to get a neural circuit in one brain to match a neural circuit in another brain in respect to its input-output profile. It’s the same reason we don’t include “gremlins” among the causes of airplane crashes. There is no evidence that we have any need of that hypothesis to explain any crashes. Likewise mind-brain physicalism, even when networked into a social system of interacting physical brains. Social constructs are just what happens when you add more brains. Nothing more is needed to explain that, than the adding of more brains.

So, too, each individual brain. Which is just a system of smaller brains (neurons and neural circuits), producing individual constructs, which together comprise a mind. Bartley wants there to be something else going on. But we don’t have any evidence anything else is. Nor any need of that hypothesis. He grants that brains may physically cause minds to exist, but insists that in so doing they create a whole new ontological thing, called “mind” or “qualia.” Maybe. But why think that? It’s not needed to explain anything. No additional energy need be devoted to creating any new object. And therefore no additional substance is needed to realize any new object. Qualia are not objects. Nor are minds. They are events. And as such it is a category error to think they need to be “made of” anything at all, other than what produces them: a churn of meat, chemicals, and electricity.

Bartley says “to say ‘experiences are physical’ would be to say that these particular so-called ‘physical’ things exist entirely to minds!” And he’s right. Experiences are unique to one particular arrangement and activity of matter. Arrangement isn’t enough (an unconscious mind experiences nothing). You also need the activation of it, the process of it. But not every process generates “experiences.” Experiences are an output unique to only one kind of physical process: a mental process. Just as “jogging across the street” is unique to the existence and motion of legs. Outside poetic metaphor, your salary doesn’t jog across the street, nor does your car, or your coffee. Is jogging therefore a supernatural phenomenon that requires some new magical substance to exist? Obviously not. Neither does your mind. Only certain arrangements produce certain outcomes. That is an obvious fact of physics. It’s not evidence against physics!

And…Please Know Your Science

Science is philosophy with better data. So philosophers had better know the science of what they are talking about. But Bartley betrays his ignorance of modern science with a bunch of silly statements throughout his screed. I’ll just give three examples to illustrate what I mean:

  • (1) Bartley says “whitewashing the mind/brain distinction could eliminate the difference for practitioners between whether a psychological problem is physically-originated due to a brain dysfunction or brain damage, or mentally-sourced due to traumatic experience.”

No such confusion follows from physicalism. Every therapist already knows that a traumatic experience can only be producing a psychological problem by being physically encoded in the brain; and that the only fix, is something that bypasses or rearranges that physical circuit, so as to ensure a different output from the input. Talk therapy can do that. But only by physically changing the brain. We all know there is a difference between, for example, genetic or surgical causes of brain organization, and experiential and environmental causes of brain organization. But both are physical causes. Both produce physical rearrangements of the brain. Both respond to the same kinds of therapies. Knowing the distinct cause can be helpful in tailoring treatment, but that in no way requires knowing when the cause is “not physical.” Because none are. And this is a known fact of science. All changes in a mind, correspond to changes in the brain. All of them. We’ve never observed an exception.

  • (2) Bartley says that because, for example, an actual ball we are tracking as it rolls behind something else is different from our mental experience of the ball, experience therefore can’t be physical. Literally, “These ideas all rely on the idea that physical things exist independent of minds. So by definition, a physical object is not only or purely what is in the contents of experience. This means, conversely, that anything that is purely in a mind, is not physical by definition!”

That’s wild nonsense. Obviously the actual ball outside our mind is a different physical thing than the ball in our mind. Just as a computer simulation of the airspace a plane is flying through is completely different from the actual airspace it’s flying through. Does that mean airplane radar readouts therefore cannot be physical systems? This is incoherent nonsense. There is no sense in which a simulation is “by definition” not a physical system. No more in human minds, than in avionic computers.

  • (3) Bartley says “we know…that some events at a subatomic level are affected by whether there is an observing mind.”

No. That’s not what we’ve discovered. All we have observed is that when you meddle with an experiment—and any observation requires doing that, e.g. sticking a probe into it, bouncing a particle off it—you affect its outcome. That’s true even if minds didn’t exist. It’s not like unseen stars aren’t quantum mechanically burning when we aren’t looking at them. Or that we magically created the entire past history of the universe the first moment we looked up at the sky.

These are some pretty big fails in science literacy. And anyone who is this ignorant of basic science, can’t have any credible opinion in an advanced subject like mind-brain physicalism. But this does explain a lot about why Bartley goes so far off the rails and gets all of it wrong.

Conclusion

Bartley is right to ask “Why do brains in particular have these mental properties?” But we already know the general answer to that question, from comparative neurology and psychology across the animal kingdom and in modern electronics and brain science: these are the properties of information processing; therefore only information processors can generate them; and, we observe, only information processors of enormous complexity and particular organization. Organize them differently, and you get a different output. The internet is complex enough to generate consciousness, but is not at all organized in the way required to do that. If we knew what the required organization was, we could make the internet conscious. But not knowing what arrangement to put the system in to get that output, is not evidence of there being no such arrangement it can be put in.
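
That point, that arrangement rather than raw componentry determines the output, is trivially demonstrable (a toy sketch, not a model of any brain):

```python
# Identical components, wired two different ways, yield different outputs.
# The organization is doing the work; the parts are the same.

def unit(x: int, y: int) -> int:
    """One identical component, reused in both arrangements."""
    return max(0, x - y)

def arrangement_a(a: int, b: int, c: int) -> int:
    return unit(unit(a, b), c)  # compare a against b, then against c

def arrangement_b(a: int, b: int, c: int) -> int:
    return unit(a, unit(b, c))  # compare b against c, then a against that

print(arrangement_a(10, 4, 3))  # 3
print(arrangement_b(10, 4, 3))  # 9 -- same parts, different output
```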

I’m inclined to see the most promise in explaining consciousness in something like (but not identical to) Integrated Information Theory (minus all the speculation and woo that its proponents stack atop it; plus it probably needs to be integrated with some form of functionalism—see discussion in Wikipedia and the Internet Encyclopedia of Philosophy). But we won’t really crack the qualia problem until either we have active brain scanning instruments of extraordinary resolution—allowing us to construct complete and accurate computational circuit diagrams of the human brain—or we develop a general AI capable of helping us do that, using its otherwise alien brain construction to get at the problem from a different but more accessible direction. Might there one day be a complete physical theory that explains why one information processing circuit produces an experience of the color red, rather than green, or a smell or sound? Yes. I think that’s likely. We can’t conceive of it yet, because we don’t know anything about the underlying computational physics that’s causing it. And that physics is surely going to be extremely complex. Even a single neuron is mind-bogglingly complex, in terms of its computational organization. It’s the end result of literally billions of years of evolution. Which puts it way the hell ahead of us in design capabilities.

So will we someday have a sound physical theory of qualia? As the Magic 8 Ball of history tells us: “Signs point to yes.” The scientifically illiterate fallacies of Christian apologists notwithstanding.
