Stomachs don’t eat lunch. Eating lunch is something that a whole, embodied person does. We understand the role that stomachs play in the lunch-eating process; we appreciate that people can’t eat lunch without them. Brains don’t think. They don’t learn, imagine, solve problems, calculate, dream, remember, hallucinate or perceive. To think that they do is to commit the same fallacy as someone who thought that people can eat lunch because they have little people inside them (stomachs) that eat lunch. This is the mereological fallacy: the fallacy of confusing the part with the whole (or of confusing the function of the part with the telos, or aim, of the whole, as Aristotle, who as usual beat us to the crux of the problem, would say).
Nor is the homunculus a useful explanatory device in either case. When I am asked how we might explain the workings of the mind without recourse to mental representations (students often ask this), the reply is that we fail to explain anything at all about the workings of the mind with them. “Remembering my mother’s face is achieved by inspecting a representation of her face in my mind.” This is explanatorily vacuous. And if reference to representations does nothing to explain dreaming, imagining and remembering, it is particularly egregious when mental content is appealed to for an explanation of perception itself, the original “Cartesian” mistake from which all of the other problems derive.
A person is constantly developing and revising an idea of his or her world; you can call it a “picture” if you like (a “worldview”), but that is figurative language. A person does not have a picture inside his or her body. Brains don’t form ideas about the world. That’s the kind of thing people do.
This original Cartesian error continues to infest contemporary cognitive science. When the brain areas in the left hemisphere correlated with understanding speech light up and one says, “This is where speech comprehension is occurring,” the mereological fallacy is alive and well. Speech comprehension is not something that occurs inside the body. Persons comprehend speech, and they do it out in the “external” world (the only world there is).
Positing representations that exist inside the body is an instance of the mereological fallacy, and it is so necessarily, by virtue of the communicative element that is part of the definition of “representation,” “symbol” and so on. Neither any part of the brain nor the brain and nervous system considered as a whole interprets anything. The key to developing a natural semantics of intentional predicates is to realize that they are predicated of persons, whole embodied beings functioning in relation to a larger environment. Brain/body dualism can be presented as non-dualist (isn’t the brain a physical organ of the body?), but it is an insidiously Cartesian view that gets us no farther in naturalizing intentional predicates.
Suppose that you are driving down the freeway searching for your exit, and you’re worried you might have passed it. You remember that there are some fast-food restaurants at the exit, and you think that one always feels one has gone too far in these situations, so you press on, keeping an eye out for the restaurants. However you manage to do this, it is no explanation to say that you have done it because your brain remembered the fast-food restaurants, had beliefs about the phenomenology of being lost on the freeway, decided to keep going and so forth. That’s like saying that the way you had lunch was that your stomach had lunch.
This realization may also be momentous for brain science. Go to the medical school bookstore, find the neurophysiology textbooks and spend a few minutes perusing them. Within the first few minutes you will find references to the “movement of information” (for example by the spinal cord), “maps” (for example on the surface of the cortex), “information processing” (for example by the retina and in the visual cortex) and so on. (Actually my impression is that brain scientists are relatively sophisticated in their understanding of the figurative nature of this kind of language compared to workers in other areas of cognitive science; the point is just that representational talk does indeed saturate the professional literature through and through.) But if brain function does not involve representations then we don’t know what brains actually do, and furthermore the representational paradigm is an obstacle to finding out: think of all those experimentalists developing protocols to try to “locate the symbolic architecture.” They might be looking for something that isn’t there. If there is any possibility that this is true, these arguments need, at the very least, to be thoroughly explored.
Taking the argument from the mereological fallacy seriously also draws our attention to the nature of persons. It follows from what has been said that the definition of “person” will be operational. Operational definitions have an inevitably circular character: a person is any being that takes intentional predicates. In fact there is no “machine-language” explanation of personhood. Kant, writing in the late 1700s, is fastidious about referring to “all rational beings”; he never says “human beings.” He understands that when we are discussing the property of personhood we are discussing (what I would call) a supervenient functional property (Kant would call personhood “transcendental”), not a contingent physical property. However, Kant is programmatically intent on limiting the scope of materialism as such and thus fails to develop a non-reductive materialism. Instead he imports the mental (“reason”) from the noumenal world and ignores the problem of the relationship between transcendental reason and the human body (this is not to say that he does not acknowledge the role of our particular, contingent sense organs in shaping our representations of the world, to the extent that those representations are themselves contingent and particular to us).
With Kant we remain in our bodies but not of them.
Once one recognizes that intentional predicates are predicated of whole persons – once one sees that positing mental representations necessarily commits the mereological fallacy – the question of representation is settled. It is I, and not some “brain state,” that is remembering my mother’s face. However there is a tight network of arguments and assumptions, centered on a model of intentional states as “propositional attitudes,” that will have to be disentangled to the satisfaction of readers who are disposed to defend representations. After that unpacking is done the reader will also reasonably expect some account of a non-representational analysis of intentional predicates, something that is not achieved by simply pointing out the mereological fallacy.
Well said. The degree to which the mereological fallacy infests cognitive philosophy is, frankly, embarrassing.
I'm reminded of one aspect of Julian Jaynes' theories, in his book on the breakdown of the bicameral mind.
I don't have the book with me so I'll work from memory, more than usually subject to correction. But Jaynes said that the "bicameral mind" was not "conscious" in the way in which we are, not just for reasons of neurology, but because certain metaphors hadn't come into use yet. The idea of a sort of theatre "inside" the human body -- chest or head, depending on who is writing -- started as a highly literary metaphor, but became an essential part of how people saw themselves. THAT was the "breakdown" of Jaynes' title: the metaphor became generally accepted as a literal fact. Thus "consciousness" pulled itself into existence by imagining itself, if you will.
This seems akin to your point, except for chronology.
Yet implicit in Jaynes' account is the point that we don't really have a choice. Going back to the bicameral mind is not an option. The notion of an inner space where we deliberate is something more than a useful fiction; it is a constitutive fiction.
Perhaps rather like the fiction that the elite group of white men that gathered in Philadelphia in 1787 had any business speaking for "We, the people" of the United States.
It seems to me the mereological fallacy could be said to be universal if one presumes the universe as a whole to be more significant than the parts.
Then it is the universe that thinks and eats lunch--and we all commit the fallacy.
The issue revolves around the question: what is the essence of a thing, and where does a thing stop and another thing begin?
To science, apparently, the essence of a person is his atoms and then his physiology and brain--and all the rest is reduced to that and presumed to account for it.
Then it seems that science is a systematic proliferator of the mereological fallacy, since it insists that the results of its experimental context--one narrow context of this vast world--are the only things that should decide the character of the entire universe.
Science generalizes its own small morsel of existence--its special point of view--to universality. As useful as that point of view can be, there is no experiment performed to confirm its sole validity--rather, it is an assumption.
Equations rapidly burgeon and become forbiddingly, unsolvably complex when many variables are in play--so science simplifies and simplifies, dealing only with the few things amenable to quantification. And then it trumpets this necessarily limited view of the world as the only valid and correct view of the world. That, it seems to me, is a kind of super-mereological fallacy.
But of course anyone may define validity such that it confirms one's own presumption.
If two people each believe that the other does not grasp the whole of something, then each takes the other to be committing the fallacy.
Of course, the nominalist thinks that wholes are just notions, versus "real" things--the forest is just a bunch of single trees.
But it seems the nominalist does think that many general terms label real things--and is oddly arbitrary about it.
In my view every term is in fact a general term, for any term is in fact a category of thing into which specific things are placed. The basic structure of language is: A is a B.
And anything may be divided into parts, even space, and the parts further subdivided, and any collection of parts may be linked in some manner into a whole.
It seems to me that such dividing and melding is quite flexible and even arbitrary--and none of it a matter of necessity.
I think many people would agree that we must include the whole person in our description of them--but what is a whole? What is included and what's not? How bounded?
Surely the mereological fallacy would have more punch if such a question could be answered indubitably--but what is the chance of that in the usual course of things?
As it is, the fallacy seems, in effect, to amount to the assertion that one's whole is better than another's.
I don't understand.
"Brains don’t think. They don’t learn, imagine, solve problems, calculate, dream, remember, hallucinate or perceive."
If I understand the idea, it's that people (the whole) presumably do these things - not the part (the brain). But if we imagine removing my legs, my arms, my skin, etc., you'd see little change in my ability to carry out the cognitive tasks above [the content of my cognitions would be affected, since I'd be wondering, "What happened to the rest of my body?", but the cognitive functions would be intact]. And damage to relatively small regions of the brain could directly impact these cognitive functions (leaving other biological functions intact). I don't understand why it isn't reasonable to speak of these cognitive functions as properties of the brain rather than of the person as a whole. I know that I'm not saying anything particularly novel here. I may not be understanding the essence of the problem.
(By the way, I very much like the nature of this blog).
So I'm late to the party, but this argument doesn't sit well with me either: my brain and my nervous system perceive my reality; my whole person doesn't perceive reality. The rest of my person enables my brain and nervous system to do this, but hypothetically I would still be able to perceive the world without the rest of my person being attached to my brain and nervous system, if supported through an artificial life support system (i.e., one providing a suitable form of energy and system maintenance for my brain and nervous system to continue to live).
Unless the brain, or better yet the human organism, is in contact with its environment, it has nothing to think about.
ReplyDeleteVery good and very interesting read.
AJ
ajstates.blogspot.com
Articulately edifying. Thank you.
Not sure why you did not use paragraphs in this blog. It would have made reading a lot easier.
There is another way to do behavioral science - http://www.ijpsy.com/volumen1/num2/23/some-notes-on-theoretical-constructs-types-EN.pdf