Sunday, December 12, 2010

The mereological fallacy

Stomachs don’t eat lunch. Eating lunch is something that a whole, embodied person does. We understand the role that stomachs play in the lunch-eating process; we appreciate that people can’t eat lunch without them. Brains don’t think. They don’t learn, imagine, solve problems, calculate, dream, remember, hallucinate or perceive. To think that they do is to commit the same fallacy as someone who thought that people can eat lunch because they have little people inside them (stomachs) that eat lunch. This is the mereological fallacy: the fallacy of confusing the part with the whole (or of confusing the function of the part with the telos, or aim, of the whole, as Aristotle, who once again beat us to the crux of the problem, would say).

Nor is the homunculus a useful explanatory device in either case. When I am asked how we might explain the workings of the mind without recourse to mental representations, my reply is that we fail to explain anything at all about the mind with them. “Remembering my mother’s face is achieved by inspecting a representation of her face in my mind.” This is explanatorily vacuous. And if reference to representations does nothing to explain dreaming, imagining and remembering, it is particularly egregious when mental content is appealed to for an explanation of perception itself, the original “Cartesian” mistake from which all of the other problems derive. A person is constantly developing and revising an idea of his or her world; you can call it a “picture” if you like (a “worldview”), but that is figurative language. A person does not have a picture inside his or her body. Brains don’t form ideas about the world. That’s the kind of thing people do.

This original Cartesian error continues to infest contemporary cognitive science. When the brain areas in the left hemisphere correlated with understanding speech light up and one says, “This is where speech comprehension is occurring,” the mereological fallacy is alive and well. Speech comprehension is not something that occurs inside the body. Persons comprehend speech, and they do it out in the “external” world (the only world there is). Positing representations that exist inside the body is an instance of the mereological fallacy, and it is so necessarily, by virtue of the communicative element that is part of the definition of “representation,” “symbol” etc. Neither any part of the brain, nor the brain or nervous system considered as a whole, interprets anything. The key to a natural semantics of intentional predicates is the realization that they are predicated of persons, whole embodied beings functioning in relation to a larger environment.

This realization may also be momentous for brain science. Go to the medical school bookstore, find the neurophysiology textbooks and spend a few minutes perusing them. Within the first few minutes you will find references to the movement of information (for example by the spinal cord), maps (for example on the surface of the cortex), information processing (for example by the retina and in the visual cortex) and so on. (Actually I suspect that brain scientists are relatively sophisticated in their understanding of the figurative nature of this kind of language compared to workers in other areas of cognitive science; the point is just that representational talk does indeed saturate the professional literature through and through.) But if brain function does not involve representations then we don’t know what brains actually do, and furthermore the representational paradigm is in the way of finding out: the whole project needs to be reconceived. If there is any possibility that this is true at all, then these arguments need to be elaborated as far as they can be.

Taking the argument from the mereological fallacy seriously also draws our attention to the nature of persons. It follows from what has been said that the definition of “person” will be operational. Operational definitions have an inevitably circular character: a person is any being that takes intentional predicates. One might object that we routinely make intentional predications of, say, cars (“My car doesn’t like the cold”), but as Daniel Dennett famously pointed out, this objection doesn’t go through when we know that there is a “machine-language” explanation of the object’s behavior: I may not know enough about batteries, starters and so forth to explain my car’s failure to start in the cold, but someone else does, and that is enough for me to know that my “intentional” explanation is strictly figurative. But then don’t persons also have machine-language explanations?

No: my car won’t start because the battery is frozen. The mechanic does not commit any fallacy when he says, “Your battery’s the problem.” The part is not confused with the whole. It’s really just the battery. Now suppose that you are driving down the freeway searching for the right exit. You remember that there are some fast-food restaurants there, and you have a feeling that one always thinks one has gone too far in these situations, so you press on. However you manage to do this, it is no explanation to say that you have done it because your brain remembered the fast-food restaurants, and has beliefs about the phenomenology of being lost on the freeway, and decided to keep going and so forth. That’s like saying that you had lunch because your stomach had lunch.

In fact there is not a machine-language explanation of personhood. Kant, writing in the late 1700s, is fastidious about referring to “all rational beings”; he never says “human beings.” He understands that when we are discussing the property of personhood we are discussing (what I would call) a supervenient functional property (Kant would call personhood “transcendental”), not a contingent physical property. Unfortunately Kant is programmatically intent on limiting the scope of materialism in the first place and thus fails to develop a non-reductive materialism. But he understood that the mental cannot be one of the ingredients in the recipe for the mental.


  1. This is a very good description of the problem. I have a question though. What do you think about models like cognitive architectures, which could not logically function if their programmers had imported this fallacy into the code? They are obviously presumptuous about ontology to begin with, which is itself a major concern, but I'm not sure I would attribute the mereological fallacy to them. I'm very interested in your opinion.

  2. That, Mr Brown, is very nicely put. Especially the Stomachs bit. Clever and much appreciated.