Two arguments that seem persuasive to me lead to a contradiction. The contradiction is resolved when we appreciate that "mind" is a complex concept, and that we are faced with two metaphysical problems, not one. Getting this straight clears up a whole lot of confusion in the philosophy of mind.
The two arguments are owed to Alan Turing and to John Searle, respectively. Turing makes the basic operationalist case: confronted with a system, any system, that behaved (reacted to us, interacted with us) in a way that was indistinguishable from a rational person (for example, a computer terminal that could converse rationally and sensibly), we would have no option but to consider that system rational ("minded," if you will). The claim is deep and strong: granting consistently rational behavior, to deny rationality to such a system would be equivalent to denying that another normally-behaving human was rational; there would be no evidence to support such a denial. More strongly still, any meaning we can assign to the concept "rational" must be pegged to some observation or other (Wittgenstein's behavioristic point).
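A toy sketch may make the operational point concrete. Nothing below is Turing's own formulation; the judge, the questions, and both respondents are invented placeholders. The point is only that the judge's verdict can rest on nothing but the observable transcript:

```python
import random

# A toy rendering of the imitation game. Both respondents are invented
# stand-ins; the only thing the judge ever sees is the transcript.

def human_respond(question: str) -> str:
    return "Well, it rather depends on what you mean by that."

def machine_respond(question: str) -> str:
    return "Well, it rather depends on what you mean by that."

def judge(transcripts: dict) -> str:
    # The judge must pick out the machine from the replies alone. With
    # indistinguishable transcripts, guessing is all that is left,
    # which is Turing's point.
    return random.choice(list(transcripts))

def imitation_game(questions):
    respondents = {"A": human_respond, "B": machine_respond}  # "B" is the machine
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in respondents.items()}
    return judge(transcripts) == "B"  # True iff the machine is unmasked

if __name__ == "__main__":
    print(imitation_game(["Can you write me a sonnet?"]))
```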
Searle's "Chinese Room" argument, on the other hand, appears to demonstrate that the mind cannot be (merely) a formal rule-governed, symbol-manipulating device. The non-Chinese speaker in the Chinese Room follows a set of formal rules ("when a squiggle like this is entered, you output a squoggle like that"), and these rules are such that a Chinese-understanding person inputting Chinese-language questions receives appropriate Chinese-language answers as output. But it seems persuasive that neither the homunculus inside the Room nor the Room as a whole has any idea of what is being said: like a computer, the Chinese Room understands nothing whatsoever.
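Here is a minimal sketch, in Python, of the kind of purely formal manipulation the Room performs. The rule table is invented for illustration; a genuine rule book would be astronomically larger, but no entry in it would mean anything to the mechanism that consults it:

```python
# A minimal sketch of the Chinese Room's rule book: a formal mapping
# from input symbol strings to output symbol strings. The entries are
# invented for illustration; the device matches shapes, not meanings.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" / "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" / "It's lovely."
}

def chinese_room(squiggles: str) -> str:
    """When a squiggle like this comes in, output a squoggle like that.

    Pure table lookup: nothing in here represents, refers, or
    understands. It is syntax all the way down.
    """
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # fallback: "Please repeat that."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # an "appropriate" answer, understood by no one inside
```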
How can the seemingly contradictory intuitions motivated by the two arguments be reconciled? Here's how: Turing is talking about intentional mental attributions: psychological descriptions using terms such as "belief" and "desire." The meaning of intentional psychological descriptions and explanations is necessarily grounded in observables. Intentionality must be understood operationally, and Turing is right that any system that can be successfully understood using intentional predicates is an intentional system: that's just what intentionality is.

Searle, meanwhile, is talking about phenomenal mental attributions (consciousness). The meaning of phenomenal terms must be grounded in intersubjective phenomena, just like intentional terms (or any terms in language), but there is something more (Wittgenstein: an inexpressible something more), whereas being in an intentional "state" is an entirely public matter. Only conscious beings "know" anything at all in Searle's sense. Wittgenstein, too, is skeptical of the possibility of "zombies": "Just try - in a real case - to doubt someone else's fear or pain," he writes in the Philosophical Investigations (§303). And now we have sailed out into somewhat deeper water.