Gerardo Primero is interested in the question of whether or not intentional predicates applied to brains are meaningful. His idea is that there is a difference between saying something that is meaningful, but wrong (his analysis), and saying that such predications are nonsense (devoid of meaning). His take on Wittgensteinian analyses (such as that of Bennett and Hacker in Philosophical Foundations of Neuroscience) is that they claim that e.g. saying that there are images or sentences or other forms of representation "in the brain" is meaningless. So the question is whether Gerardo has a good criticism of Bennett and Hacker in this regard.
Note that this discussion is to a large extent a version of the oldest and biggest problem for behaviorism, namely that we have strong intuitions that phenomenal experience is distinct from outward behavior, as in the case of the man who pretends he is in pain when he is not. Surely this exposes behaviorism as incomplete at best, if the behaviorist claims that "Sally likes chocolate" does not entail a reference to the quality of her gustatory sensations when she puts the chocolate in her mouth? Daniel Dennett tries to untangle this in The Intentional Stance (unsuccessfully, I think), and David Chalmers in The Conscious Mind takes the alleged possibility of "zombies," operational duplicates of conscious persons who have no conscious experience, as grounds for metaphysical dualism about "phenomenal properties" (spuriously, I think).
But today I want to stick to Bennett and Hacker's theme of the "mereological fallacy," the fallacy of attributing to parts properties had only by the whole. I'm not myself too wedded to the idea that B & H hold that committing the fallacy is equivalent to "nonsense"; maybe they just think it's unhelpful, or vacuous. But I want to write more generally today.
Gerardo writes, "People do ascribe mental terms to things that are not persons (i.e. to corporations as in 'Microsoft believes that...,' to machines and robots as in 'it sees and recognizes visual patterns,' to brains and brain parts, to animals), and people usually understand each other..." The definition of a "person," it seems to me (and I am making no effort to defend or even necessarily represent B & H here), just is "any being that takes intentional predicates"; I take that to be the idea of operationalist approaches.
Viewed in this light, Gerardo's list of examples turns out to be quite heterogeneous. Long-time readers of this blog (are you the one?) know that I take animals to be paradigmatic examples of persons: dogs, say, believe and desire and hope and fear etc., and the semantics of those predicates are the same, on my view, when applied to humans and to dogs and many other non-human animals (contra Davidson, by the way). Similarly with possible conscious androids: as a materialist, since I am committed to the view that human consciousness is a feature of physical properties that humans possess, ipso facto an artifact that had those properties would be conscious (but computers ain't it; the relevant properties are not merely computational - John Searle gets this right). Corporations are a stranger example (remember Ned Block's Chinese nation example), and I'm not sure what I think about that: my intuition is pretty strong that animals and possible conscious artifacts are conscious as bodies (I'm pretty sure I have a physical criterion of personal identity), and that a "being" composed of unconnected parts maybe could not have consciousness in this (admittedly vague) sense. Still, a corporation, or nation, or team, is after all a kind of body, so there is at least room for discussion there. So "brains and brain parts" seems to be the odd man out on the list.
Years ago, when I first heard about functionalism, my first naive response was "But persons don't have any function!" Maybe that's right: the person is an embodied being with preferences and aversions. (Are the values a hard part for the possible conscious artifact? Maybe yes.) I'm thinking about the difference between the telos of the car battery (starting the car) and the telos of the car (driving people around). You might say that all there is is just nested functionality, all the way up and all the way down (I read William Lycan this way). If that's how you see it, then maybe car batteries and brains have as much claim to personhood as cars and humans. Dennett says that a thermostat comes under intentional description: it believes that it is presently too cold, or that it is not. On this version of operationalism the only problem with saying "My car battery doesn't like the cold weather," as a further explanation of my claim that "My car doesn't like the cold weather," is that there is some (informal) threshold of obtuseness past which it's just not necessary anymore to replace physical predicates ("It's frozen") with intentional ones ("It's unhappy"). And maybe that's right.
What I take B & H to be claiming is that there are no neural correlates of intentional states. There is not some brain state that embodies my belief that Paris is the capital of France, or my desire for some chocolate. That is the sense of the mereological fallacy: that it is a mistake (a mistaken research paradigm) to search for neural correlates of intentional states. This goes to my problem with representational models of mind. It doesn't help to explain how it is that I believe that Paris is the capital of France to claim that there is some formal token of the proposition "Paris is the capital of France" inside my body somewhere. I don't think that intentional states are neural states at all. I think that they are states of embodied persons. What kind of "states"? (John Heil does good work on the metaphysics of "states," "properties," and so on.) Right now I'm thinking that "intentional states" are relations between persons and their environments (this is a type, I think, of externalism/"wide content").
Anyway I'm off to do the recycling and sign my daughter up for swimming lessons.