Tuesday, July 1, 2008

Is Your Brain Somebody?

Gerardo Primero, a psychologist in Buenos Aires, has been corresponding with me via e-mail. He is studying Wittgenstein and wanted to talk about Bennett and Hacker's 2003 book Philosophical Foundations of Neuroscience. That book takes a Wittgensteinian approach (P. M. S. Hacker is one of the leading philosophical interpreters of Wittgenstein) and builds an argument that a great deal of cognitive science commits some version of the "mereological fallacy," the fallacy of attributing to the parts of a thing properties that are had only by the whole. Specifically, "persons," who are fully embodied beings, think, dream, desire, imagine, and so forth, whereas much philosophical psychology attributes these intentional states to brains, to consciousness, to memory, and so forth. The idea is that just as it is I who eat lunch, not my stomach, so too it is I who think about the election, not my brain. Note that if this turns out to be right, that psychological predicates are applied to persons and not to brain states, then the metaphysical problem about how the physical properties of the nervous system "map on" to the semantic properties of the representations may be shown to be a pseudoproblem, in that intentional psychological descriptions just aren't descriptions of states of the brain. Mind does not necessarily = brain.
Let me quote a little from Gerardo's e-mail from Sunday: "I'm not convinced by Hacker's arguments....The problem with the 'mereological fallacy' is not that applying psychological terms to parts 'has no sense': it has sense, but it's scientifically unsound...While my argument is epistemic ('that's not a valid scientific explanation'), Hacker's argument is semantic ('that has no meaning at all')."
There are a lot of directions we could go with this, but since Gerardo seemed to approach me for maybe a "philosopher's opinion," I'll talk some basic metaphysics and epistemology this afternoon. The issue is metaphysical, to my eye: there is a language about "properties," and so we want to get clear on what properties are, because it looks like we would need to do that to understand how the brain works (properties are causal). Specifically, the "property" of interest in terms of the mind/body problem is the "intentional/semantic property." What is this? That bears some discussion, but note a basic issue: if you think that the semantic property is a property, but it's not a physical property, then you have signed on to some kind of metaphysical dualism. Descartes thought this way. He thought that any physical thing, being known to us ultimately through mental representation, had the property of dubitability (could be unreal, an illusion), whereas the fact of thinking (of a "thinking substance") was indubitable, and this is one of his arguments for metaphysical dualism (sometimes called "substance dualism"). Disparate properties, disparate things. Which is fine, maybe, but recognize the commitments that come with such a view: a) there are "things" that exist that are not part of the physical universe, and b) therefore, in this example of the more general metaphysical point, scientific psychology is impossible. I don't buy that. That is, I think that humans are part of physical nature through-and-through. And if "physicalism" means anything, it's got to mean that everything about humans that we can "explain" (whatever explanation is) we can explain in physical terms (just like the rest of nature). So a naturalist like myself has two options: 1) try to understand "mental representation," and thus symbols and meaning in general, in some kind of physical terms, or 2) try to eliminate representational content from the model of mind.
So, as to Gerardo's distinction between "meaningful" and "explanatory," I would say that physicalists (we could here say materialists or naturalists, I'm not making any fine distinction) who are eliminativists (like Wittgenstein and Skinner) think that to the extent that "meaning" is not the same thing as "causal power" there isn't any such thing. Think of a behavioristic, anthropological account of the development of speech: the latter-day "semantics" of the words emerged out of the functional role of making that sound. It isn't true that all words function in the same way (that is, as symbols). This is what Wittgenstein means by the analogy of the locomotive controls: they all fit the human hand, but one opens a valve, one puts on a brake, etc.; it is a mistake to try to explain them all the same way.
If you can't explain the "mental" property without including something "mental" in the explanation, then you haven't explained mind. An "explanation" of mind would be the story of how semantic properties emerged from simpler, non-semantic properties. Mind from no-mind. So a problem with representations is that they already assume mind. Semantic content needs an interpreter. Or, the story about how something came to "mean" something can't already assume that "meaningfulness" exists - if you have to do that, you haven't succeeded in naturalizing the concept of "meaning."
There is a contingent who want to develop a natural theory of information. I would recommend starting with Fred Dretske's Knowledge and the Flow of Information. For myself, at this point I feel pretty convinced that there can't be any such thing as mental content, at all. Just wrong, root and branch. But note that there is a representational vogue underway amongst the cognitive scientists (or was two years ago).
Looked at this way, one can see that the problem with attributing mental states to brains isn't, as Gerardo argues, that it's meaningful but wrong; the attributions are, in fact, not meaningful at all. They are pseudoexplanations because they don't turn out to even potentially explain anything: they're not even wrong. "When I remember her face, I have an image of her face." "I just gave myself a dollar." Both examples of the same mistake.
Finally for today, Gerardo wanted a little more on Wittgenstein vs. Moore. Moore tried to argue from "usage," that is, he argued that the claim "I know I have a hand" was a paradigmatic case of knowledge. Wittgenstein objected (in On Certainty) that there was no ordinary circumstance in which holding up one's hand and saying, "I know I have a hand" could have any purpose. W.'s point was that Moore made the mistake of continuing to play the game that was the cause of the confusion in the first place. In fact I neither know nor do not know whether I have a body; that's not really an example of a situation where the verb "to know" can serve any function.

9 comments:

  1. A.B.: "Note that if this turns out to be right, that psychological predicates are applied to persons and not to brain states, then the metaphysical problem about how the physical properties of the nervous system "map on" to the semantic properties of the representations may be shown to be a pseudoproblem, in that intentional psychological descriptions just aren't descriptions of states of the brain. Mind does not necessarily = brain."

    Interesting, wonderful post. Of note, what you describe above is essentially the position of the philosopher Donald Davidson's anomalous monism: there are no psycho-physical laws connecting physical brain states with mental predicates. They are two independent causal descriptions (though they may appear to overlap, no laws govern their coincidence).

  2. I would say that physicalists (we could here say materialists or naturalists, I'm not making any fine distinction) who are eliminativists (like Wittgenstein and Skinner) think that to the extent that "meaning" is not the same thing as "causal power" there isn't any such thing.

    Wittgenstein an 'eliminative materialist'? In what sense? He would certainly reject the sort of project the Churchlands are engaged in. And any similarity he has with Skinner-style behaviorism is superficial (Cf. Hacker's excellent article, "Wittgenstein and Quine: Proximity at Great Distance"). It's difficult to even include him with the so-called 'logical behaviorists'; for while he does hold that the criteria for the application of predicates are behavioral, he has no problem admitting mental states, images, etc. (contra Ryle, for example). Finally, it's unclear to me what you mean by claiming that meaning is the same thing as causal power. In the next sentence you say "Think of a behavioristic, anthropological account of the development of speech: the latter-day 'semantics' of the words emerged out of the functional role of making that sound." Wittgenstein was completely uninterested in the natural history of words. What is relevant is the rule, not how the rule came to be.

  3. It's true that W. acknowledges qualitative experience, only he thinks that nothing can be said about it. It's also true that he would reject the Churchlands' "eliminativist" project. That is because the Churchlands still think in terms of mental content, information-processing, representation. I will stick to my characterization of W. as eliminativist: no "inner" language, no mental content, no "meaning" somehow under the surface of words. And this is the only meaningful sense of eliminativism. "Eliminativism" is not here the proposal that physiological explanation will replace psychological explanation (the Churchlands' view, in some moods).

  4. I will stick to my characterization of W. as eliminativist: no "inner" language, no mental content, no "meaning" somehow under the surface of words.

    This sounds better to me.

    I would put it this way: There cannot be a private language (though there could be an 'inner' language in the sense of talking to oneself). Our words do not receive their meaning from any mental accompaniment. Though there can be, according to Wittgenstein, 'mental content' in the sense of an image (for example). And it can even be the purpose of a word to bring about such an image. Nevertheless, the image is not the meaning of the word.

  5. (NN) And any similarity he has with Skinner-style behaviorism is superficial (Cf. Hacker's excellent article, "Wittgenstein and Quine: Proximity at Great Distance").
    (Gerardo) Why do you say that any similarity with Skinner is superficial? I don’t think so. See for example:
    Day, W. F. (1969). On certain similarities between the "Philosophical Investigations" of Ludwig Wittgenstein and the operationism of B. F. Skinner.
    http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1338612
    Costall, A. (1980). The limits of language: Wittgenstein's later philosophy and Skinner's radical behaviorism. (Not online)

    Best Regards,
    Gerardo.

  6. (Anderson) Looked at this way, one can see that the problem with attributing mental states to brains isn't, as Gerardo argues, that it's meaningful but wrong; the attributions are, in fact, not meaningful at all. They are pseudoexplanations because they don't turn out to even potentially explain anything.
    (Gerardo) I'm still unconvinced by your argument. I agree with your account of "meaning" as functional role. The meaning of an expression is a correlate of understanding: it is what one understands when one understands the expression and knows what it means, and the criteria of understanding an expression fall into three broad kinds: correct use (i.e. use in accord with the established rules for the use of the expression), giving correct explanations of the meaning of the expression in context, and responding appropriately to the use of the expression by others. And, as I see it, the ascription of mental terms to things that are not persons fulfills those criteria. People do ascribe mental terms to things that are not persons (e.g. to corporations, as in "Microsoft believes that..."; to machines and robots, as in "it sees and recognizes visual patterns"; to brains and brain parts; to animals), and people usually understand each other without any problem, fulfilling all the criteria for understanding and meaning (they use the rules for the use of the expression, they give correct explanations of the meaning, they respond appropriately to others' expressions). So what is Hacker's argument for saying that those expressions are "nonsense"? We may ask: how do we know what the rule is, what the limit of application of a term is, such that outside that limit it has no sense? Certainly some people do apply the term to things other than whole persons, so empirical research on the uses of psychological terms is not the kind of evidence that Hacker can present in his favour. So, what's the evidence? I see Hacker as trying to "prescribe" a language game while saying that he's just "describing" it.
    So I'd say that they are meaningful. But explanations have other requirements. I agree with your criterion: "An explanation of mind would be the story of how semantic properties emerged from simpler, non-semantic properties". Explanations that ascribe mental properties to the brain don't fulfill this criterion. So, in the end, I think it's "meaningful" but "wrong" (in the case where the speaker's goal is to give a scientific explanation of a mental term: obviously it may not be wrong for other practices).

    Best Regards,
    Gerardo.

  7. Hello, your blog is very interesting.
    I do not think anyone has yet proved the truth of the propositions of physicalism. I can identify a feeling with a physiological correlate, but the physiological correlate does not provide information about what actually happened. By contrast, the fact itself carries more information. One learns more about a feeling by living it than by noting its physiological correlates. A greeting.

  8. I'm showing my students how to post a comment right now. (And brains are not persons.)

  9. Brains are not persons, but persons have a brain. My brain is not somebody, but I strongly consider myself (represented by my body) somebody. The very word points to the fact that someBODY has a body, a body which, in my opinion, includes a brain. When we talk about somebody we are talking about persons, not necessarily human beings. On many occasions we speak of a person as if he or she has left his or her body. For example, with a mother in a coma, her body is still right there, but we say that the person who carried her children and raised them is not necessarily there. This is an example of a brain that is not somebody: the somebody referred to as the mother had a brain, but having a brain is not equal to being somebody. The brain is an organ with very special abilities, many of which we do not yet understand.

    Speaking of interpretations, language, and symbols, I have to say that these interpretations, and what they mean to an individual, are all created by our special abilities. The symbols and meanings that construct language are tools for survival. We have learned to create symbols and to identify them with certain representations, meanings, or information. The only thing that physically exists is the symbol, because the representation, meaning, or information acquired by our brains is simply that, a mere representation, understood because we have learned it that way and because someone else had the imagination to give that specific representation to the symbol and pass it on to others. I don't think we can eliminate the meaning of symbols, because it is the way we communicate, but I do think that this meaning does not exist in the physical world. Now you may ask in what world it exists, then, and I would simply answer: in this world, where we, as intelligent and reasoning animals, gave certain representations to symbols and have been doing so for a long time; even the most primitive individuals gave representations to natural events and objects such as hurricanes, earthquakes, animals, etc.
