Thursday, July 10, 2008

Behaviorism and the Mereological Fallacy

Gerardo Primero is interested in the question of whether intentional predicates applied to brains are meaningful. His idea is that there is a difference between saying something that is meaningful but wrong (his analysis), and saying that such predications are nonsense (devoid of meaning). His take on Wittgensteinian analyses (such as that of Bennett and Hacker in Philosophical Foundations of Neuroscience) is that they claim that, e.g., saying that there are images or sentences or other forms of representation "in the brain" is meaningless. So the question is whether or not Gerardo has a good criticism of Bennett and Hacker in this regard.
Note that this discussion is to a large extent a version of the oldest and biggest problem for behaviorism: we have strong intuitions that phenomenal experience is distinct from outward behavior, as in the case of the man who pretends he is in pain when he is not. Surely this exposes behaviorism as incomplete at best, if the behaviorist claims that "Sally likes chocolate" does not entail a reference to the quality of her gustatory sensations when she puts the chocolate in her mouth? Daniel Dennett tries to untangle this in The Intentional Stance (unsuccessfully, I think), and David Chalmers in The Conscious Mind takes the alleged possibility of "zombies," operational duplicates of conscious persons who have no conscious experience, as grounds for metaphysical dualism about "phenomenal properties" (spuriously, I think).
But today I want to stick to Bennett and Hacker's theme of the "mereological fallacy," the fallacy of attributing to parts properties had only by the whole. I'm not myself too wedded to the idea that B & H hold that committing the fallacy is equivalent to "nonsense"; maybe they just think it's unhelpful, or vacuous. In any case, I want to write more generally today.
Gerardo writes, "People do ascribe mental terms to things that are not persons (i.e. to corporations as in 'Microsoft believes that...,' to machines and robots as in 'it sees and recognizes visual patterns,' to brains and brain parts, to animals), and people usually understand each other..." The definition of a "person," it seems to me (and I am making no effort to defend or even necessarily represent B & H here), just is "any being that takes intentional predicates"; I take that to be the idea of operationalist approaches.
Viewed in this light, Gerardo's list of examples turns out to be quite heterogeneous. Long-time readers of this blog (are you the one?) know that I take animals to be paradigmatic examples of persons: dogs, say, believe and desire and hope and fear etc., and the semantics of those predicates are the same, on my view, when applied to humans and to dogs and many other non-human animals (contra Davidson, by the way). Similarly with possible conscious androids: as a materialist I am committed to the view that human consciousness is a feature of physical properties that humans possess, so ipso facto an artifact that had those properties would be conscious (but computers ain't it; the relevant properties are not merely computational - John Searle gets this right). Corporations are a stranger example (remember Ned Block's Chinese nation example), and I'm not sure what I think about that: my intuition is pretty strong that animals and possible conscious artifacts are conscious as bodies (I'm pretty sure I have a physical criterion of personal identity), and that a "being" composed of unconnected parts maybe could not have consciousness in this (admittedly vague) sense. Still, a corporation, or nation, or team, is after all a kind of body, so there is at least room for discussion there. So "brains and brain parts" seems to be the odd man out on the list.
Years ago, when I first heard about functionalism, my naive first response was "But persons don't have any function!" Maybe that's right: the person is an embodied being with preferences and aversions. (Are the values a hard part for the possible conscious artifact? Maybe yes.) I'm thinking about the difference between the telos of the car battery (starting the car) and the telos of the car (driving people around). You might say that all there is is just nested functionality, all the way up and all the way down (I read William Lycan this way). If that's how you see it, then maybe car batteries and brains have as much claim to personhood as cars and humans. Dennett says that a thermostat comes under intentional description: it believes that it is presently too cold, or that it is not. On this version of operationalism, the only problem with saying "My car battery doesn't like the cold weather," as a further explanation of my claim that "My car doesn't like the cold weather," is that there is some (informal) threshold of obtuseness beyond which it's just no longer necessary to replace physical predicates ("It's frozen") with intentional ones ("It's unhappy"). And maybe that's right.
What I take B & H to be claiming is that there are no neural correlates of intentional states. There is not some brain state that embodies my belief that Paris is the capital of France, or my desire for some chocolate. That is the sense of the mereological fallacy: it is a mistake (a mistaken research paradigm) to search for neural correlates of intentional states. This goes to my problem with representational models of mind. It doesn't help to explain how it is that I believe that Paris is the capital of France to claim that there is some formal token of the proposition "Paris is the capital of France" inside my body somewhere. I don't think that intentional states are neural states at all. I think that they are states of embodied persons. What kind of "states"? (John Heil does good work on the metaphysics of "states," "properties," and so on.) Right now I'm thinking that "intentional states" are relations between persons and their environments (this is a type, I think, of externalism/"wide content").
Anyway I'm off to do the recycling and sign my daughter up for swimming lessons.

4 comments:

  1. [This comment is recycled from a post over at Brain Hammer.]

    To properly engage Hacker one has to engage the Wittgensteinian notion that criteria justify the use of a term (e.g., 'read') because they are constitutive of its meaning. We (English speakers, that is) say that someone reads when she behaves in certain ways because behaving in those ways is what we call 'reading.' And if that's what 'reading' means for us, then that's what reading is for us. Accordingly, those behaviors are criteria for reading.

    If those criteria are not met by someone, then there is no justification for saying that she is reading. On the other hand, if it is impossible for those criteria to be met by something, then it is not merely false that that thing is reading, it makes no sense to say of it that it reads.

    Hacker argues that it is impossible for a brain or any of its parts to meet the criteria for reading, etc. It is impossible because brains don't behave. That is, brains do not and cannot act in any way that is even remotely similar to human behavior. Concerning reading, brains cannot utter the words because brains don't have vocal cords, tongues, etc. Brains cannot follow the words with their eyes because they don't have eyes. And so on.

    Despite these facts, Dennett claims that brains 'behave' in a way that is similar enough to human behavior to warrant an extended use of psychological predicates. This, it seems to me, is patently false, even absurd.

  2. Yes, I quite agree (although I'm not clear on the degree of Dennett's guilt).

  3. (Anderson) Note that this discussion is to a large extent a version of the oldest and biggest problem for behaviorism: we have strong intuitions that phenomenal experience is distinct from outward behavior, as in the case of the man who pretends he is in pain when he is not. Surely this exposes behaviorism as incomplete at best, if the behaviorist claims that "Sally likes chocolate" does not entail a reference to the quality of her gustatory sensations when she puts the chocolate in her mouth?
    (Gerardo) I don't think that your description does justice to behaviorism. First of all, who is the behaviorist whose thinking you're talking about here? Skinner? Quine? Ryle? They're very different, and none of them really did what you've just said (a kind of reduction to overt behavior). Ryle talked about dispositions to overt behavior, but he also recognized episodic mental events (he never reduced all mental terms to overt dispositions, as many think). Skinner recognized private stimuli and responses, and said that they followed the same learning principles as overt behaviors. Quine advocated an epistemic kind of behaviorism ("language is learned through observation of overt behavior... linguistics has to be behavioristic"). Without specifying a real behaviorist and analyzing his real arguments, it's always easy to beat a straw man of behaviorism.

    Best Regards,
    Gerardo.

  4. A.B.: "know that I take animals to be paradigmatic examples of persons: dogs, say, believe and desire and hope and fear etc., and the semantics of those predicates are the same, on my view, when applied to humans and to dogs and many other non-human animals (contra Davidson, by the way)."

    Perhaps I am not enough of a regular reader to know what "paradigmatic example" means here, but I would think that persons are what make up the paradigm of "person," and dogs do not. Dogs have qualities that are very much like those of human beings, but calling them paradigmatic would be like saying that autistic persons make up the paradigmatic example of "person." In a way, I can see a point: by testing the limits of the category, they make some essential-like property of the category more plain. But this is not paradigmatic, unless one wants to think of a paradigm in the Neo-Platonic sense.

    As for Davidson and animals, perhaps he is inconsistent and you are aware of some quotation where he states himself categorically, but by my reading he walks a careful border. Take for instance his stance in "Three Varieties of Knowledge":

    "Belief is a condition of knowledge. But to have a belief it is not enough to discriminate among aspects of the world, to behave in different ways in different circumstances; a snail or a periwinkle does this. Having a belief demands in addition appreciating the contrast between belief and false, between appearance and reality, mere seeming and being. We can, of course, say that a sunflower has made a mistake if it turns toward an artificial light as if it were the sun, but we do not suppose the sunflower can think it has made a mistake, and so we do not attribute belief to the sunflower. Someone who has a belief about the world - or anything else - must grasp the concept of objective truth, or what is the case independent of what he or she thinks."

    Now though Davidson makes a rather solid claim against sunflowers, I am unsure what he would say about dogs. Dogs certainly, and constantly, behave as if they understand what a "mistake" in belief is. The criterion that Davidson uses here seems to be an important one, but one that opens the door to understanding other animals as belief holders. If we pass this on to other mental predicates (hoping, fearing, etc.), the conditions for the ascription are those that drive its substance. It does not in most cases (but not ALL) prove helpful to say that "the sunflower realized it was wrong," but it is intimately helpful to say that "the dog is afraid of the man" or "the ape hopes its child is still alive." Examining the conditions and contexts brings out the meaningfulness. Categorical analysis does not.

    I don't subscribe so much to the language-oriented distinction that Davidson loves (Rorty is even more fierce about this than he). I think that Davidson's approach must be tempered by a triangulating notion of affective imagination, one which presents the substance of our attributions to animals and things.

    I argue this here, in the first of four parts: http://kvond.wordpress.com/2008/05/28/the-trick-of-dogs-etiologic-affection-and-triangulation-part-i-of-iv/

    As for Hacker and company, no, they mean "nonsense" as in "it has no role in the language game" kind of nonsense. They mean square-peg-in-a-round-hole kind of nonsense. This is the paramount difficulty in any kind of "Those are nonsense uses of language!" assertion. Clearly they are not "nonsense," because those games with words are meaningful to all kinds of people in all kinds of situations. These square pegs seem to fit rather nicely (not perfectly, though) into those round holes. The mistake is to think that language can ever work where pegs and holes necessarily match up. There is no (hidden) Tractatus-like connection which keeps language uses "x" and "y" from being meaningful to their users. What attackers of the "wrongful attribution" theory have to overcome is that IF such an attribution is meaningful in context, IF it carries the interaction forward, then it is a meaningful description. Under such an understanding, Dennett's Intentional Stance meets nicely with Wittgenstein's Sec. 154: "Now I can go on."
