Last month I launched my three-part series on analyzing peer-reviewed philosophy papers with my Bayesian Analysis of Faria Costa’s Theory of Group Agency, where I explain my process and how I selected the articles for review. I followed that with my Bayesian Analysis of Shelley Park’s Uncanniness Thesis. Third and last up is “Reviving the Naïve Realist Approach to Memory” by Michael Barkasi and André Sant’Anna. As I noted before, Barkasi is an athlete and bike mechanic with a PhD in philosophy from Rice University and a decent academic publication record (see his website and ORCID page). So is Sant’Anna (see his website and ORCID page); with a PhD in philosophy from the University of Otago, New Zealand, he has held postdoctoral fellowships at universities across the globe.

Background

Naive realism is a view studied in both the science of psychology and the philosophy of mind. In general it holds that we perceive things objectively, as opposed to our perceptions being subjective constructions. Applying naive realism to memory would mean that our memories are not subjective constructions but objective realities. In this and every other respect the science has prevailed: memory has been conclusively proven to be a subjective construction, and a significantly unreliable one at that. We have to infer objective reality from our brain’s subjective constructions; indeed, much of what the brain uses to represent its “best guess” at that reality literally doesn’t even exist outside our mind (like, for example, color or solidity, which are the brain’s convenient representational inventions: see What Does It Mean to Call Consciousness an Illusion? and The Bogus Idea of the Bogus Mysteries of Consciousness; for the status of mind-brain physicalism generally, see The Mind Is a Process Not an Object).

The same therefore goes for memory, which uses the same brain architecture as perception, and operates by storing not something like a “video tape” of what happened but more like a set of “instructions” for “re-running” the program to reconstruct the experience; and those reconstructions tend to be even more inaccurate than the original perceptions, and highly prone to alteration and editing over time. This doesn’t mean memory is completely unreliable. It is reliable “enough” for most purposes, but its proneness to error and distortion has to be taken into account when relying on it. And the science backing this is too extensive for any philosopher to challenge. Which means attempting to rehabilitate “naive realism” for memory is a tall order. Not least because philosophy must not ignore or contradict but build on the findings of the sciences (see Sense and Goodness without God, pp. 49-61; and my lecture on the relationship between science and philosophy in Is Philosophy Stupid?).

The Barkasi-Sant’Anna Thesis

As with the other two articles I’ve analyzed, my summary cannot do full justice to the original paper. So I recommend you read it. But in short, Barkasi and Sant’Anna argue (1) that there are “three reasons why philosophers of memory have felt compelled to outright reject naïve realism,” (2) that “none of those reasons are successful,” and (3) therefore naïve realism “needs to be given serious consideration” again. They argue that naive realism has been rejected for memory because: “the intentional objects of memory do not co-exist with memory experiences” so naive realism is de facto impossible; “the appeal to memory traces to account for the functioning of memory is incompatible” with naive realism as well; and naive realism “does not make room for the fact that memory is a fundamentally constructive capacity.” They aver they can prove all three reasons inadequate to rule out naive realism for memory.

So Barkasi and Sant’Anna’s entire argument hinges on whether (a) these actually are exhaustive of the reasons to reject naive realism, (b) they have correctly described these three arguments against it, and (c) they have presented actual rebuttals adequate to dismiss those arguments. In Bayesian terms, those three arguments pose strong likelihood ratios against naive realism as an explanation of the evidence (we simply don’t expect those three things—they are the opposite of what we expect—on naive realism). So Barkasi and Sant’Anna must show either that some evidence has been left out that shifts those likelihoods at least toward balance with, if not in favor of, naive realism, or that the probability of the evidence usually cited has been miscalculated (i.e. it is all more likely, more expected, on naive realism than has been claimed), or both. And they must do this without appealing to any entities or phenomena that have a low prior probability on existing background knowledge of how the world works.

One could correctly say naive realism has a low prior because it has failed everywhere else in theory of mind (it is therefore not expected to turn out correct in memory studies either). But that “prior” is the outcome of those previously averred likelihood ratios against it. So for the purposes of analyzing their argument, I am moving that result back into the likelihood ratios. Because these authors are averring that those were, somehow, miscalculated, and therefore the resulting “updated prior” is inaccurate. So we need to look at whether they succeed in that critique or not. This leaves the prior probability at even (50/50) so long as they do not appeal to any unevidenced (or worse, counter-evidenced) epicycles to force their reevaluation of the likelihoods (by, for example, proposing supernatural entities like souls, or brain structures or operations there is no evidence for). So these are the things to look for as we assess.
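To restate that setup in standard odds notation (my formalization, not anything quoted from the paper): with the prior odds set at 1:1, the posterior odds on naive realism (NR) reduce to the likelihood ratio alone, so everything rides on whether they can rehabilitate the likelihoods.

```latex
\underbrace{\frac{P(\mathrm{NR}\mid E)}{P(\neg\mathrm{NR}\mid E)}}_{\text{posterior odds}}
  = \underbrace{\frac{P(\mathrm{NR})}{P(\neg\mathrm{NR})}}_{\text{prior odds}\,=\,1}
    \times
    \underbrace{\frac{P(E\mid \mathrm{NR})}{P(E\mid \neg\mathrm{NR})}}_{\text{likelihood ratio}}
  = \frac{P(E\mid \mathrm{NR})}{P(E\mid \neg\mathrm{NR})}
```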

The General Barkasi-Sant’Anna Argument

Barkasi and Sant’Anna do not claim to be proving naive memory realism true; their only goal is to establish that it is plausible enough to enter consideration again. They have two main approaches to this end: one is to argue that the three usual reasons given for rejecting it were unsound, because naive realism does not require rejecting those three facts about memory but can be compatible with them (in particular, they claim to be able to retool the definition of it to remain compatible with those facts); another is to argue that naive realism better explains certain other facts (which is a straightforward Bayesian argument: that there is evidence more probable on naive realism than on any competing explanation, producing a likelihood ratio favoring it).

Taking the second approach first: they claim that naive realism “allows us to make sense” of recent popular suggestions “that remembering is a form of ‘re-living’ or ‘re-experiencing’ past events,” and “provides an account of how memory allows us to gain knowledge of past events and to entertain thoughts about” them, and also “allows us to distinguish between memory and imagination in a neat way.” At first glance this does not appear to be a supportable claim. The standard neurophysics of memory already fully accounts for all three of these things, so there is no likelihood ratio favoring naive realism here (while the usual three arguments against it push that ratio the other way around).

  • We can only distinguish between memory and imagination because of the physical memory trace in the structure of our brain’s neurons, such that when our brain fails to register that trace, it can indeed mistake imagination for memory (as in dementia), and even start rewriting memories with imagined (false or altered) memories. Brains are built to physically keep track of which traces are memories and which creative imaginings—the same way we keep track of memories of things that actually happened and things we merely dreamed happening (a process that can also misfire, causing dreams to be mistaken for real events in our past).
  • The reason we can store information from experiences and gain more knowledge from revisiting them is that data and knowledge are not just conceptually distinct, but physically distinct. Knowledge is the recollectable output of computations (information processing) on data. Memories begin as data (just coding for what perceptual experiences to construct when recalled). But all sorts of processing can be done on that data, which takes time and focus, and thus won’t instantly occur upon perception. This is in fact one thing that memory is for: so we can recall data later on and run computations on it to extract more information from it.
  • And the reason memory can correctly be described as “re-experiencing” the past is that that is literally, physically, what happens in the brain: memories are run on the same circuitry as the original perception. A recollected memory literally is an experience. It differs from direct perception in only one principal respect: it is caused by internal sources of data rather than external (a fact that the brain physically keeps track of, which is how it can tell the difference between those two kinds of experience). It can differ in one other respect: intensity. Fewer computational resources are used to “run a memory” than the original perception that caused it (which is why memories are not typically experienced vividly; and in rare cases where they are, the signal has been physically amped). A toy sketch of this kind of bookkeeping follows this list.
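None of this requires naive realism; it only requires stored data, bookkeeping labels, and reprocessing. Purely as an illustrative analogy (a toy sketch in Python, not a claim about actual neural implementation; every name in it is invented for the example), the kind of bookkeeping just described looks like this:

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """A stored 'instruction set' for rebuilding an experience, plus a source tag."""
    cues: dict     # compressed notes about what to reconstruct (not raw sensory data)
    source: str    # 'perception', 'imagination', or 'dream': the brain's bookkeeping label

def render(cues, intensity=1.0):
    """Stand-in for the shared perceptual 'circuitry': builds an experience from cues.
    The same routine serves both live perception and recollection."""
    return {"content": dict(cues), "vividness": intensity}

def perceive(stimulus):
    """Live perception: full-intensity construction, stored afterward as a tagged trace."""
    experience = render(stimulus, intensity=1.0)
    return experience, Trace(cues=dict(stimulus), source="perception")

def recall(trace):
    """Recollection: the same render pipeline re-run on stored cues,
    with fewer resources (lower vividness), and the source tag consulted."""
    return render(trace.cues, intensity=0.4), trace.source

def count_red_things(trace):
    """Later computation over stored data can yield knowledge never noticed at the time."""
    return sum(1 for value in trace.cues.values() if value == "red")

# Usage: perceive a red ball, then later recall it and extract new knowledge from the trace.
live_experience, ball_trace = perceive({"object": "ball", "color": "red"})
replayed, origin = recall(ball_trace)   # a dimmer experience, still labeled as a perception-derived memory
print(origin, replayed["vividness"], count_red_things(ball_trace))
```

If the source tag is lost or mislabeled, imagination gets mistaken for memory; if the cues degrade or get edited, the reconstruction changes. That is the sort of machinery the standard model already supplies.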

Since mind-brain physicalism and the constructive model of memory, which are the mainstream consensus, fully account for all three of the phenomena Barkasi and Sant’Anna claim naive realism would account for, these three things cannot be evidence for naive realism. Even if naive realism also fully accounted for them, you still can’t get a probability higher than 100%, so there is no way its likelihood on these facts can exceed that of the standard model. These facts therefore can never increase the probability of naive realism. To the contrary, at first glance, naive realism does not predict the other facts of neuroscience undergirding these three facts: all the things I just pointed out are actually unexpected findings on naive realism. We only expect complex reconstructive, information-labeling-and-processing mechanisms to be producing these facts about memory if naive realism is false. The same analysis follows for every other argument naive realists make: for example, in their footnotes 9 and 10, every supposed fact arguing for naive realism is actually already fully explained without it, and with documented neurological and other scientific facts that are not expected if it were true. At every turn, the evidence thus argues against it, not for it. Its plausibility cannot be rescued.

Which brings us full circle back to the original three reasons given against naive realism. Barkasi and Sant’Anna first set aside a separate argument one might make, that naive realism can’t account for false perception (like hallucination) or erroneous memory (any false, altered, or mistaken recollections), by claiming that there might be ways naive realism could account for this, and then promising to address that argument in future work. In the present work they only offer ad hoc hypotheses, possible ways naive realism could explain this data, and the most they claim for this is modal success: that this is possible, and at least probable enough to keep naive realism a viable hypothesis (this bracketing tactic comes up at scattered points, e.g. pp. 4, 10, 14, 22-23, among others). Then they set aside a whole category of memory, “semantic or propositional” memory, as not the subject of their thesis; they are only defending naive realism for “episodic” or “recollective” memory, what one might call “narrative” memory (anything that can be called “re-living” the event; recalled experiences). Then they survey naive realist theories of perception (which their theory of memory requires also be true, even though it has been universally falsified across all pertinent sciences). Then they try to rehabilitate naive memory realism by arguing that it makes the three reasons we doubt it (non-presentism, memory tracing, and constructivism) probable again and thus no longer evidence against it.

But to do that, Barkasi and Sant’Anna lean on pseudoscience and logical confusions. And that is conclusively fatal to their argument. Honestly, this should never have passed peer review.

Pseudoscience in the Barkasi-Sant’Anna Argument

Barkasi and Sant’Anna specify that by naive realism they do indeed mean direct, not indirect realism: so they are claiming “we are immediately aware of objects residing in the external world, such that we have direct or unmediated access to them,” whereas on indirect realism, “what we are immediately aware of in perception and memory are representations or ideas, which only indirectly make us aware of the external world.” Physicalism is usually a form of indirect realism: we don’t directly perceive anything; we only perceive models of what exists externally to the mind. But, as what is being modeled usually does exist in some sense, this still constitutes ontological realism (as opposed to some form of “idealism” where nothing actually exists outside our minds or whatever “mental world” our minds participate in).

Barkasi and Sant’Anna give the example of seeing, tasting, and smelling oatmeal: on their view, we directly perceive the properties of the oatmeal outside our minds. This of course is pseudoscience. Colors and solidity don’t exist outside our mind; they are fictions invented by our brains to represent facts about the external world (like the presence and shape of electromagnetic force-fields and what frequencies of photon are being radiated or reflected by them). Oatmeal does not have a color. It also isn’t solid. It is mostly empty space (or at least a volume no more solid than a mote of sunlight). The properties we perceive are made-up; they exist only in our mind. They are still useful, because of a practical correspondence between them and real properties the oatmeal does have (like contours of electromagnetic attraction and repulsion, which for convenience we call “solidity,” and electron valences, which for convenience we call “color”).

After all, colors do mostly correspond to certain wavelengths of photon agitating cone cells in our eyes; and we can’t push our hand through a wall as easily as we can the air. But we can’t “see through” a wall not because it is “solid” but because the electron fields in it are deflecting rather than transmitting photons (yet don’t deflect, say, neutrons, which will sail through the wall almost like nothing was there, because almost nothing is); and we can see through a window, but still can’t walk through it like we do the air because of the intense electromagnetic forces experienced in the electron bonds across the atoms of the glass (which, again, won’t stop a neutron; but also don’t stop photons of certain frequencies, hence the window’s transparency to us). The Pauli principle will stop a neutron—so the protons in the atoms of a wall or window will deflect a neutron—making the Pauli principle the closest thing we have to “real” solidity; but protons are so small we can’t see or feel them. That is not what we are “seeing” or “feeling” when we see or feel objects as solid. Likewise, a 430 terahertz oscillating photon will cause certain cone cells in our eyes to initiate an electrical current in nerve cells that enter the brain, which our brain will represent as “the color red.” But at no point is anything red. A 430 terahertz photon is not red. It has no color at all. An electrical discharge in the optic nerve is not red. It has no color at all. If we crack open your brain while it is perceiving that color, we won’t see anything red in there either. Red is a fiction. Nothing is red. If our oatmeal is dyed red, “being red” is not really a property of the oatmeal. It is a property of our brain’s constructed model of the oatmeal. That’s a scientific fact.
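For what it’s worth, the arithmetic checks out (this is just the standard conversion between frequency and wavelength, not anything from the paper): a 430 terahertz photon has a wavelength of roughly 700 nanometers, the part of the spectrum our visual system labels “red”; but the labeling is still entirely the brain’s doing.

```latex
\lambda = \frac{c}{\nu}
        = \frac{3.00\times 10^{8}\ \text{m/s}}{4.30\times 10^{14}\ \text{Hz}}
        \approx 6.98\times 10^{-7}\ \text{m}
        \approx 700\ \text{nm}
```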

Since we have proved the brain is inventing these representations, direct realism is scientifically false. And philosophy can’t trump science. Because science is philosophy—with better data. And a conclusion reached on worse data cannot overtake a conclusion reached on better data. You can’t beat scientific facts with wishes and dreams. You can’t defeat a strong argument with a weak argument. Science is simply that branch of philosophy consisting of the strongest arguments we have to date. So there is no way to get “naive realism” back. It’s been resoundingly refuted by vast and multiply corroborating lines of evidence. It’s done for. And not just in such obvious respects as oatmeal’s “color” and “solidity.” Oatmeal has no smell, either. That is also a representational fiction.

Yep. “Smells” are also constructs of the brain, mental fictions, invented to represent other facts about the external world. What is real are molecules suspended in the air, which bump into molecules in our nose, and if they “fit” those receptor molecules geometrically and electrochemically, this causes those receptors to initiate an electrical signal along a nerve pathway to the brain. The olfactory molecules have no smell. That is not a property of them. The electrons moving up the nerve have no smell. They are indistinguishable from each other. The only discrimination is made in the brain itself, in the olfactory cortex. If we rewired the nerves from the “oatmeal” receptors (which may be singular or multiple, depending on whether the smell of oatmeal is singular or composite) to the system in the olfactory cortex that processes the smell of urine (likewise), and also rewired the nerves from the “urine” receptors to the “oatmeal” processors, then urine would smell like oatmeal to us, and oatmeal like urine.
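To make that rewiring point concrete, here is a purely illustrative sketch (all names invented for the example; no claim about actual neuroanatomy or receptor counts): the experienced “smell” is determined by which cortical processor the signal gets routed to, not by anything carried in the molecule or in the nerve signal itself.

```python
# Toy illustration: the experienced smell is assigned by the cortical
# processor a signal is routed to, not carried by the molecule or the nerve.

# Which receptor a molecule triggers (geometry and chemistry of binding).
receptor_for = {"oatmeal_molecule": "receptor_A", "urine_molecule": "receptor_B"}

# Normal wiring: receptor -> olfactory-cortex processor.
wiring = {"receptor_A": "oatmeal_smell_processor", "receptor_B": "urine_smell_processor"}

def smell(molecule, routing):
    receptor = receptor_for[molecule]
    processor = routing[receptor]          # the nerve signal itself is identical either way
    return processor.replace("_smell_processor", "")   # the experience that processor generates

print(smell("oatmeal_molecule", wiring))   # -> 'oatmeal'

# Re-wire the nerves: same molecules, same receptors, same kind of signal...
rewired = {"receptor_A": "urine_smell_processor", "receptor_B": "oatmeal_smell_processor"}
print(smell("oatmeal_molecule", rewired))  # -> 'urine': oatmeal now smells like urine
print(smell("urine_molecule", rewired))    # -> 'oatmeal': and urine like oatmeal
```

Swap the routing and the very same molecules produce the opposite smells, which is the point: the smell is assigned downstream, in the brain.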

Thus, the property of the smell is produced in the olfactory cortex. It exists nowhere prior. It does not exist in the nerves connected to the receptors in the nose. It does not exist in those receptors. And it does not exist in the particles being “smelled.” Oatmeal does not have a smell. It emits molecules. Those molecules have a tendency to trigger certain neurons. Our brains then “make up” what that detection is like, in order to distinguish those molecules from other molecules that could be detected instead. There is something real here (the detected molecules; the molecules doing the detecting; the neural systems processing the signal into an experience of a smell). But “the smell of oatmeal” isn’t it (beyond the experience itself being real—but that’s a property of the experience, not the oatmeal). Consequently, naive realism is physically impossible. Barkasi and Sant’Anna are simply doomed here. They are like two jokers insisting the moon is made of cheese because (handwave handwave handwave) something about definitions. But you can’t change what something is by changing what you call it. You can’t “philosophy” your way to “the moon is made of cheese.” Science already refuted you. It’s time to pick up your marbles and go home. Philosophy that simply contradicts well-established science should never be published. Period.

It is still true, of course, that colloquially we can speak of oatmeal having the properties of smell and color. That’s simply convenient syntax. But this should not fool anyone (least of all PhDs) into mistaking that syntax for ontology. When we say “this oatmeal indeed smells like oatmeal” or “this oatmeal is a light shade of brown” we are usually referencing the entire physical system implicated in our experience (not just the oatmeal, but the photons and molecules it emits, the receptors in our nose and eyes, the nerve pathways extending therefrom, the visual and olfactory cortexes, and their generation of an experience). This is simply a convenience of efficient communication. Barkasi and Sant’Anna seem to be confusing this for an ontological reality. Yes, naive realism is a useful shorthand for communication, one experiential system talking to another. But no one should be so foolish as to believe it. That is like arguing that when someone says “don’t put all your eggs in one basket” they are actually referring to eggs. Or when they say the sun rose this morning, they are advocating geocentrism. Or that by calling one day a week Saturday, we are devout worshipers of the god Saturn. This is crap philosophy.

It gets worse. Barkasi and Sant’Anna try to give a metaphysical explanation of optical illusions—the Müller-Lyer illusion is their chosen example—as “a mistaken post-perceptual judgment.” This is scientifically false. The brain constructs the perception directly with that judgment; it is not a “post”-perceptual judgment. Our brain literally is incorrectly constructing what we are seeing; this is a well-established fact of neuroscience, documented across numerous studies. These are all pre-perceptual judgments made by our brains in building the perception. So do Barkasi and Sant’Anna cite any science whatsoever? No. They cite two other philosophers making the same science-ignoring, pseudoscientific claims. Crap philosophers citing crap philosophers. This is why crap philosophy should not get published. There should not be any pseudoscience in philosophy journals for these guys to cite. They should be required by peer review to understand and correctly relate the pertinent science.

This is made inescapably clear with color illusions. Consider the Kitaoka Spiral. In no way are you “judging” after perception that there are two shades of green in that spiral. You are simply perceiving two shades of green in that spiral. This therefore cannot be a post-perceptual judgment. Your brain is simply incorrectly constructing what’s on the page or screen (which, in objective reality, is indeed a consistent photon frequency). That is a pre-perceptual judgment: the sensory information reaches the visual cortex clean (no distortion; the cone cells in your eyes performed entirely correctly; the nerve signals then correspond to the physical facts), but in the act of converting those signals into a perception, the visual cortex fucks up, using situational cues to “correct” the colors represented, thus “inventing” two shades of green that didn’t exist in the optical signals it received from the eye.

Perception is thus an invention. It is constructed. And that can only be representational. Given these facts, naive realism is logically impossible. And this is true even before we get to the fact that colors don’t exist anyway: there is no “green” being emitted by your screen, only photons in the 540-600 terahertz frequency range; and there is no “green” being piped up your optic nerves, only the same electrons as pipe up every other optic nerve; these signals just go to a network of neurons in the brain responsible for generating an experience of green. But illusions like the Kitaoka Spiral prove that even correspondences between color experiences and photon sensory inputs are not consistent, but constructed. On both counts naive realism is simply not compatible with the facts. So the only way to rescue it is to deny the facts—facts vastly and variously documented by science. That makes naive realism, by definition, pseudoscience.

Perception is never direct. It is always mediated through cells initiating electrical signals (cones and rods in the eye; hair cells in the ear; and so on), and electrical signals passing along nerves—which are identical: electrons moving up an optic nerve are indistinguishable from electrons moving up an olfactory nerve. Which is why synesthesia exists. Which quite conclusively refutes naive realism all on its own. Actual perception—sensory experience—is further mediated by information processors (one network for colors; another for smells; and so on). That’s three steps removed from the object being detected; and really there are two more steps: when we see a colored object, that isn’t the object we are seeing, but photons deflected or emitted by it; and when we smell our oatmeal, what’s in our nose isn’t the oatmeal we are eating, but molecules emitted by that oatmeal floating in the air. Conversely, we do not “separately” perceive colors and smells and the like; typically our brain is generating a perception (an experience) through intercommunication among all those and other centers of information processing, so smells can affect how we perceive colors, and vice versa.

This is all hell and gone from what these authors call the “directness” and “access” requirements of naive realism. It also destroys any possibility of the “relational” requirement, as Barkasi and Sant’Anna define it: naive realism does not exist when the only “relational” attribute of perception is “the causal relation between stimulus and perceptual brain state,” but rather only exists when “your perception of sensory stimuli and their properties just is, at the level of ontological category, a way of relating to them.” This is of course impossible on well-established scientific facts. Your perception is relational to the information processors producing it in the brain, which are relational to the electrical signals entering the brain along nerve pathways, which are relational to the atomic excitation of the sensory cells generating those signals, which are relational to the atoms or photons that are colliding with those sensory cells, which are relational to the object being “seen” or “smelled” (or “heard” or “tasted” or “felt”). There is simply no possible way for there to be a “direct,” unmediated relation between a perceptual experience and the object perceived. There are no direct relations between perception and object. That’s as impossible as the moon being made of cheese.

Barkasi and Sant’Anna’s Pseudoscience of Memory

It gets worse. The basic Barkasi-Sant’Anna thesis is that “according to naïve realism about memory, past-perceived events themselves intrude into consciousness” when we remember them. This is, of course, bollocks. In objective reality, past events are hell-and-gone, located at a time-space coordinate literally no longer reachable by us through any physical means. So those events literally, physically cannot be what is “intruding into consciousness” when we remember them. Neuroscience has vastly established that what is “intruding into consciousness” is a reconstructed model, using the same machinery as we used for the original perception, but with completely new inputs: inputs that come not from the objects or events originally perceived, but from stored information about “what” we experienced, so as to “recreate” it. Memory is thus more like a stage play: not an exact reproduction of the original performance, but a reenactment; and in fact memory is even more subjective and unreliable than that, because stage plays at least have scripts and notes for actors and directors to go by, whereas our memory is even less precise, and more corruptible. Hence memory fades and changes over time.

Hence Barkasi and Sant’Anna’s claim that “rather than being remembered through some intermediary, memory is a direct experience of those events” is simply false. Memories are only ever recalled through an intermediary: stored data about how to recreate an experience, and those data are imperfect. They start out incomplete, they suffer loss over time, they suffer additions and alterations over time, and they are colored by changes in the perceptual apparatus that result from lived experience in the interim, and so on. Moreover, there is no strict one-to-one data record. A digital video record is closer to naive realism than human memory, because a video file (of suitable quality) records every single piece of data originally viewed on a live video feed, and can recreate it exactly on precisely the same hardware. And yet even that is not “naive realism,” as the record is not the event. The digital “ones and zeroes” on a flash drive are not “Brad Pitt eating a sandwich that one time.” So we are still looking at an intermediary.

And yet human memory is vastly worse than that. It does not record every single detail, but more like “rough notes,” instructions for rebuilding what was seen; it does not preserve them accurately over time; and it can never reproduce them on “exactly the same machinery,” because the human perceptual system is constantly changing—neurons are plastic, always strengthening, weakening, adding, or even subtracting synaptic connections throughout the perceptual system over time. The brain you run a record on next week will not ever be the same brain that experienced those events originally. And the information your brain uses to reconstruct that experience will not ever be exactly identical to what was originally experienced—even as a copy of it. Our brains make no such copies. We are not digital recorders. Such memories don’t exist. And yet, again, even an exact copy would not be identical to the original. My Amazon Prime recording of Ocean’s Eleven is not identical to the day its scenes were filmed. It is not a “naive realist” memory of those days; ergo human memory, even less so.

So when Barkasi and Sant’Anna admit “naïve realism says that we can only remember events with which we actually interacted through our memory systems” such that “had the event never happened, we could not now remember it,” they are soundly refuted by all the science of memory. Because our memories often change over time, such that some elements of any memory are not what we experienced. The experience in memory will be lacking (data loss), it will be distorted (as the way we construct perceptions from data changes), and it will be altered (details will change, details will get added). This is how human memory has been extensively demonstrated to work. And by their own admission the probability of that being the case if their naive realism were true is as near to zero as makes no odds. Their theory is dead as a doornail. And this massive error has resulted from their having studied, learned from, or engaged with hardly any memory science at all. The result is garbage.

There are methodological fails besides this. For instance, Barkasi and Sant’Anna use the adage “the simplest hypothesis is probably the best,” but there is a reason Aristotle’s “simplest” theory of four elements has been replaced by a complex Periodic Table of one hundred and eighteen elements: there is so much data that Aristotle’s theory does not explain. In fact, that “complex Periodic Table of one hundred and eighteen elements” is the simplest theory now—because it explains all the evidence with no unnecessary additions (no sorcery, no sentient atoms, no angels moving molecules around). Barkasi and Sant’Anna have thus fucked up Ockham’s Razor, which does not say “the simplest explanation is the best” (that is almost always false) but that theoretical “entities must not be multiplied beyond necessity.” In other words, if you don’t need it to explain the data, it’s probably false. But also, if it doesn’t explain the data, it’s probably false. It is only the simplest explanation of all the data that is the one most likely to be true. And Barkasi and Sant’Anna’s explanation explains almost none of the data (it is even contradicted by most of it). That does not make it the simplest theory of the data; it makes it simply false.
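This reading of the Razor has a standard Bayesian formalization (my gloss, using generic model-comparison notation, not anything from the paper): each model’s score is its likelihood averaged over everything the model allows, so padding a theory with unnecessary adjustable entities spreads its predictions thin and lowers that average, while a theory that simply fails to predict the data scores a low likelihood outright.

```latex
\frac{P(M_1 \mid E)}{P(M_2 \mid E)}
  = \frac{P(M_1)}{P(M_2)}
    \times
    \frac{\displaystyle\int P(E \mid \theta_1, M_1)\, P(\theta_1 \mid M_1)\, d\theta_1}
         {\displaystyle\int P(E \mid \theta_2, M_2)\, P(\theta_2 \mid M_2)\, d\theta_2}
```

The second factor (the Bayes factor) is what penalizes both needless complexity and failure to explain the data: “simplest” only wins when the theory still covers all the evidence.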

Another methodological error is when they speak often of how a memory “introspectively strikes you,” as if that somehow indicates ontological reality. This is false; indeed, illogical. Obviously, that it “seems” like “we are there” when we vividly remember a past event in no way even implies we are actually there when we are remembering it. There is no way to get from “what it seems like” (e.g. “that ball seems red”) to what it actually is. There is no ball. It burned up in a fire years ago. I therefore cannot be interacting with it again. And all I have is an indirect data-record of photon collisions on my eyes once upon a time (assuming that record has not become distorted, confused, or confabulated); and even then, none of those photons or collisions was “red” even at the time, much less now. So there is no way to get from “seems” to “is” here other than by rejecting naive realism and instead adopting the scientifically correct understanding of how, for example, photons and neurons work, so we can model a correct ontology of how we are actually remembering the ball through a complex system of intermediary steps, and never in any way directly. Naive realism can’t get there from here. Indirect realism can. And that’s that.

Their Attempts to Wiggle Out

Nevertheless, first, Barkasi and Sant’Anna try to tackle the obvious problem: the past is gone, so there is no way any memory can be in direct relation to it. As they put it, “if naïve realism is true, memory is, by definition, impossible” on that account. They don’t really have a response to this. They first attempt a semantic argument, that memory connects to a causal system that extends into the past, so memory can “supervene” on that temporally extended causal system. But that is indirect realism, and they already defined naive realism in contradistinction to that—as they must. Anything as indirect as “what constitutes my memory now lies at the end of a long complex causal process extending across and into the past” simply isn’t naive realism in any meaningful sense. That this is what memory is, is precisely what refutes naive realism, and leaves us only with indirect realism. So this argument is simply illogical. And illogical assertions have a probability of zero. So we needn’t expend any Bayesian analysis on them.

Their second attempt is a hare-brained argument from the B-Theory of time, whereby because the past does “exist” (it always maintains a spacetime location), surely memory “now” therefore “co-exists” with the past in some sense. But this is also illogical. The block universe theory (what they call eternalism) rules out any possibility of later moments “co-existing” with earlier moments. Those moments are inexorably separated by a measurable quantity of time. To say memory co-exists with the past is like saying California co-exists in the same geographical location as New York. Nonsense. They maintain separate spatial locations, just as objects in time maintain separate temporal locations. And this is precisely what rules out naive realism, not the other way around. All the intervening spacetime between our memory and the remembered events is chock full of complex causal chains and distorting filters. Our memories aren’t using tachyon beams to re-scan past space-time coordinates. There is no way we can reach back in time and “directly” touch it again. Memory literally cannot possibly be doing that. So again, what we have is indirect realism, not naive realism. So their second argument is also illogical, and as illogical assertions have a probability of zero, we again needn’t expend any Bayesian analysis on them.

Then they try to tackle the problem that memory is an interaction with a memory trace (some sort of stored data in the brain) and therefore cannot “at the same time” be an interaction with an actual past event. When we remember, the causal loop is “present trigger > stored memory trace > memory experience,” nowhere during which has the brain reached back into the past to interact with it again. No tachyon beams, remember? Here Barkasi and Sant’Anna stumble into the weeds of self-confusion, and despite an overt reference to a peer reviewer trying to point this out to them (in footnote 17), their paper somehow passed anyway (I cannot fathom how), even though they never understood the reviewer’s objection, and present in response to it only a demonstration of their own semantic confusion, to wit:

Barkasi and Sant’Anna conflate memory as the stored data (e.g. when we speak of the locations of memories in the brain) with memory as the experience of remembering (e.g. what it’s like to be the machine reconstructing a past experience using that data). Those are not the same thing. Memory ontologically is the data. It sits in your brain and most of the time you aren’t experiencing it at all. That’s “the memory trace” that has to be “activated” and run like a program in order to experience any memory at all. That then produces memory experientially, which is the output of the memory trace, not identical to it. Since Barkasi and Sant’Anna never correctly distinguish these two things, nothing they have to say about memory traces even makes logical sense. And illogical assertions have a probability of zero, so once again we needn’t expend any Bayesian analysis on them. There is simply no actual response being made here. Just confused word vomit.

They even do this twice. In attempting to build an (incoherent) answer to their peer reviewer, they say “the fact that memory traces contribute causally to memory, but are not constitutive of its intentional objects, does not threaten” their thesis. Of course it does; it establishes indirect relation, which is by definition indirect realism, not naive realism (as I already noted). But they have committed a second semantic mistake here as well: the intentional objects of memory can be past events because intention is reference, not direct causal relation. That my memory is about an actual past event is not the same thing as my recollections being directly caused by that past event. Barkasi and Sant’Anna argue as if these are the same things, which is incoherent nonsense. “Aboutness” (intentionality) is simply a computational label assigned to a piece of data (“the information stored in these neurons relates to something that happened on a Saturday in Spring of 2019”). That label is not being continuously “caused” to be attached by “tachyon beams” from the past. So there is no possible way naive realism can stand on its existence. Yes, memories are about real past events, and were caused by them. But our recollections of those memories are not the past events themselves, nor are they directly caused by them—they are indirectly caused by them. Hence what we have is indirect realism. Barkasi and Sant’Anna never address this. Consequently, there is nothing in their case here to analyze; just pointless wheel-spinning.

Finally, they try to tackle “the constructive character of remembering and the pervasiveness of memory errors,” which are yet more extremely improbable facts on naive realism and thus (actually) decisive evidence against it. Science has conclusively proved that memory is stored as information for reconstructing experiences (rebuilding them), and not as static “video recordings” or the like. Indeed, we don’t even store the raw sensory inputs—at all. We only store the outputs (the perceptions generated by the inputs), and in fact, often not even that: we don’t just file away whole perceptions (again, there is no video recorder in the brain), but something more like sets of instructions for rebuilding those perceptions. This is why it is so easy for memory to become distorted, and even for us to implant false memories in people. This is also why (most of us) need repeated exposure to a raw stimulus to form a reliable memory of it.

Perception already works this way. As Daniel Dennett points out in Consciousness Explained (e.g. pp. 354-55), most of what we think we “see” in any given moment is an invention of our brain, because it is merely “guessing” at what should “fill in the spaces” on the periphery of our vision (this is why the invisible gorilla trick works). Because our brains simply can’t do all that data processing. If, as Dennett says, you walk into a room whose wallpaper consists of thousands of identical pictures of Marilyn Monroe, your brain will simply fill in all the Marilyn Monroes, even if some of them aren’t there (or are actually Gillian Andersons). Many optical illusions illustrate this fact: the brain is inventing most of what you see. It is trying its best; it isn’t just making stuff up willy-nilly. It has evolved tricks, heuristics, and mechanisms to make a “best guess” at what’s really there, to save processor capacity. But it nevertheless is guessing. Only closer to the center of vision does it try harder to get exactly right what’s there; our brains evolved the logical assumption that if we need to devote resources to such a task and only have a few to devote, better to spend them there, since that’s where we are choosing to look. Memory does the same thing. It mostly records instructions for what is supposed to be seen or heard and the like, and when we trigger a memory so as to recollect it, those instructions are then used on our perceptual machinery to recreate the experience. And those instructions can be wrong, incomplete, altered, or overwritten. But at no point can this ever be described as direct access to past events. It is instructions for reproducing a stage play; it is not a video recording of that stage play.

What do Barkasi and Sant’Anna have to say about this? Nothing. They correctly describe the effect (e.g. on p. 22 they describe the pervasiveness of memory distortion), but show no awareness at all of the cause. Despite citing several works in memory science, it appears they did not read any of it; or at least, they fell asleep at every point where any of it describes the neurophysics behind this effect. Accordingly, they don’t show any awareness of the problem they actually have to confront. They think that if naive realists can allow for memory distortion in general, then naive realism can be rescued. Which is conceptually true—if we had a time-viewer as imagined in fiction, where a pane shows the actual past as it occurs on the other side of the window, it could make errors in transmitting information (maybe it filters colors wrong, or is blurry), but it would still be directly viewing the past. But that isn’t the problem Barkasi and Sant’Anna are supposed to be tackling here. The problem is that time windows don’t exist, not that they’d “still be direct viewers of the past even if unreliable.”

The reason constructive memory, and its error-proneness, refutes naive realism is that how it actually physically works proves the absence of any direct access to the past. Our memories aren’t time-windows; their errors aren’t just a matter of “it’s blurry,” but of “we are inventing” whole people and events, imaginary wallpaper that didn’t exist, and the like. The way memory errs is evidence that memory is a building process, not a window process. And a building process is impossible on naive realism—it has a probability of zero, because it directly logically contradicts the very core claim of naive realism, that access to the past is direct. And since the probability on present scientific evidence that memory is not a building process is as near to zero as makes no odds, so is the probability of naive realism. This is the problem. And Barkasi and Sant’Anna never respond to it. So the evidence stands as it is, refuting naive realism. Just as in the other two cases—of how memory traces mediate memory recollections, and how the past is not located anywhere near the present, nor do we interact with it when we recollect things (no video recordings; no tachyon beams; no time windows).

The best they come up with is when Barkasi and Sant’Anna say that when you remember things wrong, “your memory did not contain any inaccuracies—it was your judgment that introduced the relevant distortion,” which is simply not describing the scientific facts of memory. They also find this solution weak, though not because it is antiscientific. They don’t even seem to know it is antiscientific! Science has solidly proved no such “memory” as they are talking about exists in the human brain. There is no such thing as a “reliable memory” hidden inside our brain that we only screw up when we “experience” it (and indeed, not only screw it up, but do so by an exercise “of judgment”). This is all pseudoscientific bullshit. Vast evidence across multiple fields in the science of memory excludes any such entity or operations in the brain. These guys are literally inventing unicorns and claiming they solve the problem that all existing evidence refutes their thesis. Alas, prior probability establishes the non-existence of their proposed brain mechanisms. Their unicorns don’t exist. So their thesis, once given this unicorn epicycle, drops to next to nothing at the point of prior odds alone—the eternal consequence of “making shit up” to rescue a theory from otherwise damning evidence.

But even Barkasi and Sant’Anna find this “solution” dubious, and stick instead with something more like the “fuzzy time window” idea. Which, as I already showed, is also antiscientific. No evidence supports the existence of any such mechanism in the brain; all evidence supports entirely different mechanisms doing the work of memory and recollection instead, mechanisms that entail naive realism is false—mechanisms Barkasi and Sant’Anna never mention, seem to know nothing about, and never propose any response to (other than implicitly denying them by proposing demonstrably non-existent brain mechanisms in their place).

And that’s it. That’s all they got. So we end up where we started: all the evidence against naive memory realism remains, and remains as improbable on naive realism as ever, and they adduce no evidence at all that changes either fact. So naive realism simply remains false. All those words they spent were just a hamster in a wheel, getting nowhere.

Bayesian Analysis of the Barkasi-Sant’Anna Thesis

As I explained the first time, Bayesian analysis entails that the odds of a theory being true follow from multiplying two factors: the prior odds and the likelihood ratio (and odds convert straightforwardly into a probability). The prior odds measure how well the theory tracks human background knowledge and prior similar cases. The likelihood ratio tracks how expected (or unexpected) the evidence is on that theory, relative to alternative explanations of that same evidence—and this means not just the evidence presented, but all the evidence that we have: if important evidence has been left out, it must be put back in before this step is evaluated.
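In symbols (this is just the standard odds form of Bayes’ Theorem, matching the rule described above; H is the theory, E the total evidence):

```latex
\underbrace{\frac{P(H\mid E)}{P(\neg H\mid E)}}_{\text{posterior odds}}
  = \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
    \times
    \underbrace{\frac{P(E\mid H)}{P(E\mid \neg H)}}_{\text{likelihood ratio}},
\qquad
P(H\mid E) = \frac{\text{posterior odds}}{1 + \text{posterior odds}}
```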

Applied to the Barkasi-Sant’Anna argument, even after allowing a naive prior probability (acting as if we do not yet have any background knowledge as to whether naive realism is more likely true or false, and then “putting all that back in” as evidence affecting the likelihoods), their argument tanks so catastrophically on the likelihood ratio as to come out simply nuts. Their theory utterly fails to make any of the neuroscience or cognitive science or even standard psychology of memory even remotely expected, and thus even remotely probable. Whereas all that knowledge is 100% expected on standard indirect realism. And none of their excuses to try and evade this consequence have any evidential support or logical validity, changing this likelihood ratio not one whit. This paper essentially just ignores vast quantities of well-established, countlessly replicated scientific knowledge, and then declares reality to be exactly the opposite of what all that knowledge proves. This is crap philosophy. Plain and simple.
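To put illustrative numbers on that (numbers I am inventing purely for illustration; they are not figures from the paper or from any measurement): grant the generous 1:1 prior, and suppose the combined evidence of memory science is even just 1,000 times more expected on indirect realism than on naive realism. Then:

```latex
\frac{P(\mathrm{NR}\mid E)}{P(\neg\mathrm{NR}\mid E)} = 1 \times \frac{1}{1000} = \frac{1}{1000},
\qquad
P(\mathrm{NR}\mid E) = \frac{1/1000}{1 + 1/1000} = \frac{1}{1001} \approx 0.1\%
```

And that is with a deliberately modest ratio; the actual disparity in expectedness is plausibly far larger.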

Series Conclusion

This finding now means my series as a whole has shown something broader about philosophy as an academic field. In my previous series on randomly selected history papers, every one came out solid, indeed remarkably sound; but here in my series on randomly selected philosophy papers, we got one solid paper, one half-so, and one that was, essentially, pseudoscientific garbage. This demonstrates that philosophy is a substantially less reliable field of knowledge-inquiry than history. And that’s saying something, as history is conceptually at the bottom of the accuracy ladder among the knowledge sciences, owing to the usually compromised and vague status of its evidence (see History as a Science). In practice psychology performs far worse, but only because of institutional decisions that allow poorly designed and embarrassingly low-powered studies to pass peer review, something psychology as a field could stop doing if it chose to (see, for example, Is 90% of All EvoPsych False?).

Philosophy does no better. Its peer review standards are clearly weak. Sound standards would have led to more citations of pertinent science in Faria Costa’s paper, significant revisions in Park’s paper, and a complete rejection of Barkasi and Sant’Anna’s paper. This means you should approach peer-reviewed work in philosophy with considerable skepticism and a critical eye. Certainly it will often be (even if not always) of higher quality than non-peer-reviewed philosophy (even Barkasi and Sant’Anna’s paper is well articulated and provides throughout a good bibliography on naive realism; it’s pseudoscience, even garbage, but it isn’t doofy or crank). But, like psychology (and, one might argue, Biblical Studies), it won’t have a good track record, because its peer review standards are middling—not useless, but performing more poorly than they would if they were shored up, in the way they have been in the more reliable sciences (or indeed even in most history fields).

So always look at the Prior Odds: is a philosophy paper ignoring pertinent background knowledge in the sciences and other knowledge fields, or is it correctly describing, aligning with, and building on that knowledge? And always look at the Likelihood Ratio: is the evidence (including evidence left out) probable or improbable on the thesis being proposed, and is it more probable or less probable on the best available alternative explanations of that same evidence? For starters, on both points, see What Is Bayes’ Theorem & How Do You Use It? and Advice on Probabilistic Reasoning.
