Tuesday, May 27, 2008
Which Animals Have Minds?
Some of us (OK, I) have an intuition that dogs, say, are accurately described as having beliefs and desires, but crickets, say, are not. What about carp? There seems to be a threshold problem. How do we fix the set of beings that come under intentional psychological explanation? On a Wittgensteinian view this is not a problem, because psychological descriptions are necessarily descriptions of publicly observable phenomena. Like life, consciousness is either there or it is not. There is the well-worn objection that it is at least conceivable that there could be a being that behaved as if it were conscious but was not. David Chalmers, for one, has built an entire philosophical position on this claim, so debunking it would be significant. Wittgenstein's response is interesting if perhaps not knock-down: he suggests that in fact such a being is not conceivable, and that we are conflating reference and use when we say that it is (we can name a round square, but we can't conceive of one). As with many of Wittgenstein's arguments, this one seems awfully fast. Still, I feel its pull: for example, I suspect that a disembodied mind is actually inconceivable, even though people claim to be conceiving of one all the time. Reading the Investigations, I get the sense that Wittgenstein takes "being alive" seriously as a property, while remaining more skeptical that "having a mind" is a property at all.
A Problem for Evolutionary Psychology
I've just sent off my chapter "Real Behaviorists Don't Wear Furs" for Nandita and Vartan's book (Animals in Human Signification, or something like that), and two little chunks of argument emerged this morning pursuant to that. I'll split them into two posts, this one and the next.
There is a mistake, I think, in the premise of evolutionary psychology. According to a strong version of this view, adaptationist explanations of behavior (explanations that appeal to the fitness-conferring value of various behaviors) replace intentional explanations (explanations that take intentional states to be causal, as in, "He went to the river because he wanted some water"). (Let me note in passing that to whatever degree evolutionary psychology is a valid way to explain behavior, it is just as valid when applied to humans as when applied to other species; the evolutionary psychologist has no grounds for claiming that humans have "minds" while other species do not. But that is not my point today.) The mistake here is to confuse the "why" with the "how." We are in need of various explanations. One thing that needs to be explained is why the organism behaves the way it does. Adaptationist explanations may serve to satisfy that explanatory need. But how the organism manages to achieve the behavior is a different explanandum entirely.
Here's the little bit of argument that came to me this morning: An adaptationist explanation might explain how a tiger came to have a sharp claw. That doesn't mean that the sharpness of the claw itself is no longer of interest to a zoologist. The sharpness must be referenced if we are to understand how the tiger satisfies its nutritional requirements. It is an indispensable part of the "how" explanation.
Adaptationist explanations, as "why" explanations, lie "upstream" from "how" explanations. As Aristotle pointed out long ago, there are in fact various types of causal explanation. No one would think that the claw's sharpness was causally irrelevant to the tiger's functioning just because there is an adaptationist explanation of how the claw got sharp. But evolutionary psychologists (Dawkins, for instance) make just this mistake when they suggest that the intentional properties of psychological traits are causally irrelevant on the grounds that the real cause is genetic replication. Thus to explain that the dog is adapted to love you doesn't constitute any kind of argument that the dog doesn't really love you. (Same as in the infant's case.)
Tuesday, May 20, 2008
Rule-following and "Rule-following"
Many processes can be modeled mathematically. Hurricanes, gene dispersion, baseball statistics, insect wingbeats, galaxy formation: really, the list is endless. All of these processes can be said to "follow rules." Computers follow rules in this sense. But when we say about various natural processes that they "follow rules," it is important to keep in mind that the phrase is here used figuratively: there is no conscious rule-following going on, the way there is, say, after you have just taught me a card game and I try to play it correctly. We often use this figurative sense of "rule-following" when describing processes in our own bodies. The retina, for example, is an on-board computer of a sort that measures the amplitude of light coming into the eye and "encodes" this "information" for transmission to the brain. "Encodes" and (importantly) "information" are also figurative terms in this context. The eyeball no more literally (intentionally) encodes things than, say, a tree encodes its age in its tree-rings.
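(To make the figurative sense vivid, here is a toy sketch in Python. Everything in it is invented for illustration: the function name, the numbers, the pretense that a "firing rate" is just a scaled amplitude. The point is only that a lawlike mapping of this kind is all that "encoding," in the figurative sense, need amount to.)

```python
# A toy, figurative "encoder": a lawlike mapping from light amplitude to a
# made-up "firing rate." Nothing here interprets anything, any more than a
# tree interprets its rings.

def encode_amplitude(amplitude: float) -> float:
    """Map a light amplitude in [0.0, 1.0] to a 'firing rate' in [0, 100]."""
    clipped = max(0.0, min(1.0, amplitude))
    return clipped * 100.0

signal = encode_amplitude(0.42)  # 42.0: the mapping "follows a rule,"
print(signal)                    # but nothing in here is following a rule.
```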
Brain processes are "rule-following" only in this figurative sense. Stomachs digest food, but they don't eat lunch. Persons eat lunch. Brains compute (or, they do whatever it is that they do that is the equivalent of digestion in the stomach: our Cartesian error stands in the way of our seeing exactly what that is). Persons think. I can't think without my brain any more than I can eat lunch without my stomach, but that doesn't mean that there's a little person in my brain thinking any more than it means that there's a little person in my stomach eating. And the processes going on in my brain are no better explained by saying that there's a person in there "interpreting" than my digestive processes are by positing a micro-gourmand. Inside my head there's lots of "rule-following" going on, but there is no rule-following. Actual rule-following is done by persons, out in the world. Thus the savant is "rule-following" (computing with his brain), but he is not rule-following (thinking with his "mind").
(Thanks to Kevin Vond for a lively exchange on this topic. See the comments below and go to Kevin's website for more.)
Wednesday, May 14, 2008
Reconciling Turing and Searle
Two arguments that seem persuasive to me lead to a contradiction. The contradiction is resolved when we appreciate that "mind" is a complex concept, and that we are faced with two metaphysical problems, not one. Getting clear on this clears up a whole lot of confusion in the philosophy of mind.
The two arguments are owed to Alan Turing and to John Searle, respectively. Turing makes the basic operationalist case: confronted with a system, any system, that behaved (reacted to us, interacted with us) in a way that was indistinguishable from a rational person (for example, a computer terminal that could converse rationally and sensibly), we would have no option but to consider that system rational ("minded," if you will). The claim is deep and strong: granting consistently rational behavior, to deny rationality to such a system would be equivalent to denying that another normally-behaving human was rational; there would be no evidence to support such a denial. More strongly still, the only meaning we can assign to the concept "rational" must be pegged to some observation or other (Wittgenstein's behavioristic point).
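(A toy way of picturing the operationalist point, in Python; the names and the stand-in test are mine, not Turing's. All the "judge" ever gets is the transcript of the exchange; the candidate's insides never enter into the verdict.)

```python
# A toy rendering of the operational criterion (all names invented):
# the verdict is a function of the observable exchange alone.

def seems_rational(question: str, reply: str) -> bool:
    # Placeholder for whatever ordinary competence we exercise when we
    # judge one another's remarks as sensible; nothing here peeks at
    # the machinery that produced the reply.
    return len(reply.strip()) > 0

def judge(transcript: list[tuple[str, str]]) -> bool:
    """Given (question, reply) pairs, decide whether the replies are
    indistinguishable from a rational person's. The transcript is all
    the judge has to go on, which is the whole point."""
    return all(seems_rational(question, reply) for question, reply in transcript)
```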
Searle's "Chinese Room" argument, on the other hand, appears to demonstrate that the mind cannot be (merely) a formal rule-governed, symbol-manipulating device. The non-Chinese speaker in the Chinese Room follows a set of formal rules ("when a squiggle like this is entered, you output a squoggle like that"), and these rules are such that the Chinese-understanding person inputing Chinese-language questions is receiving appropriate Chinese-language answers as output. But it seems persuasive that neither the homonculus inside the Room nor the Room as a whole has any idea of what is being said: like a computer, the Chinese Room understands nothing whatsoever.
How can the seemingly contradictory intuitions motivated by the two arguments be reconciled? Here's how: Turing is talking about intentional mental attributions: psychological descriptions using terms such as "belief" and "desire." The meaning of intentional psychological descriptions and explanations is necessarily grounded in observables. Intentionality must be understood operationally, and Turing is right that any system that can be successfully understood using intentional predicates is an intentional system: that's just what intentionality is. Searle, meanwhile, is talking about phenomenal mental attributions (consciousness). The meaning of phenomenal terms must be grounded in intersubjective phenomena, just like intentional terms (or any terms in language), but there is something more (Wittgenstein: an inexpressible something more), whereas to be in an intentional "state" is entirely public. Only conscious beings "know" anything at all in Searle's sense. Wittgenstein too is skeptical of the possibility of "zombies": "Just try - in a real case - to doubt someone else's fear or pain," he writes in the Philosophical Investigations, Section 303. And now we have sailed out into somewhat deeper water.