Thursday, March 29, 2007

Real behaviorists don't wear fur

Today here at the University of Puerto Rico Mayaguez, our English Department is hosting the annual meeting of the College English Association/Caribbean Chapter. The conference topic is "animals." I think I'm going to lay out my whole thesis for The Minds of Animals. Come to think of it, that's impossible, because I'll get to speak for maybe fifteen minutes before a really random ten minutes or so of discussion. So I need to focus.
Start with the idea of the semantics of (intentional) psychology. If "belief" and "desire" are nouns, subjects of sentences, then what are the referents of those words? There are several theories on offer. My view is that, regardless of which of the available leading theories of mind one chooses, there is a presumption that you mean for your theory to be a general theory of psychology, one that could be used to understand both the simplest and the most complex instances of "mind." (Hume, the godfather of empiricist psychology, was insistent on this point: if it doesn't apply to dogs and babies, then it isn't basic enough.) Thus behaviorism, for example, tried to make the internal, unobservable referents go away: it is an example of an "eliminativist" view of mental representation. On a behaviorist semantics for psychological nouns, they refer to sets of behavioral tropes (thus "fear of dogs," "believes they have chocolate at El Amal," "happy," and so forth). On this view there is no argument for distinguishing psychological descriptions of humans from those of non-human animals. Note that the behaviorist is deeply committed to deflating the human mind to something put together from relatively simple processes. (Not that I'm criticizing behaviorism: that's what I think is good about it. Or at least, I'm intrigued that behaviorists have a novel strategy for dealing with "mental representation" and "intentionality.") Finally, this is an example of an internal argument: I'm not interested in disproving behaviorism, only in the consistent treatment of the theory.
The same argument applies to an altogether different model of psychological explanation, what is now called "evolutionary psychology," or back in the day "sociobiology," which one can think of as a sort of (my coinage) adaptive determinism. The idea is that the cause of the behavior (and for that matter of the thought, or of the feeling) is the operation of genetic or more generally evolutionary causes. Thus an article a few years back in the New York Times laid out a Steven Pinker-ish argument more or less as follows: your dog acts as if it loves you, thus soliciting an affectionate response from you, because the dog is adapted to living off of you: all is instinct. My brother-in-law confronted me with this article. The response is to point out that on any reasonable interpretation of evolutionary theory, one could replace the word "dog" in the story with "baby," and the argument would be equally good: of course your baby is adapted to elicit an emotional response from you, and of course that involves "genetic programming," whatever that is. That doesn't mean that the vocabulary of "loves you" and "pays attention to you" and "has a relationship with you" is somehow an illusion. This is to confuse the "why" with the "how." More importantly, we get the same result as before: someone who actually embraced evolutionary psychology, as with behaviorism, as the right account of the semantics of intentional explanation would have no reason to apply one interpretation to humans and another to non-human animals.
The situation is more complex when we look at cognitivism, the post-behaviorist psychology that focuses on mental representation, language, and the formal organization of cognition. Chomsky argued that a defining trait of humans is the capacity for "generative grammar," the ability, given by the combinatorial parts of language, to generate an indefinitely large (demonstrably infinite, in fact) set of potential sentences. Thus generative grammar could be interpreted as a reason to distinguish humans from non-human animals in terms of the mental states that we attribute to them respectively. Davidson refined the model of intentional states as "propositional attitudes," and he explicitly states that on his view only language-capable beings can rightly be said to possess beliefs, desires, hopes, etc. And Fodor and his followers think that the syntactic structure of language may be the bridge between the semantic content of the mental and some kind of formal physical properties (of the nervous system, say). The issue here is the relationship between thought and language. My initial response is intuitive: thought and language are not, as the cognitivists would have it, a chicken-and-egg pair. Thought must predate language by a very long way. Nervous tissue is older than bone tissue: our neurons and those of invertebrates are more similar in structure than most of the rest of our anatomy. The question here is whether we will come to see "mental representation," somehow understood, as a ubiquitous and ineliminable feature of animals with nervous systems, or whether we will come to understand nervous system function in a way that no longer refers to mental content. I really don't know which way that will go, but I doubt that it will turn out that human thinking is different in any radical way from that of many non-human animals.
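The generative point can be made concrete with a toy sketch (my own illustration, not Chomsky's formalism, and the example sentences are invented): a single recursive rule, applied repeatedly, yields a new grammatical sentence at every depth of embedding, so a finite grammar generates an unbounded set of sentences.

```python
# Toy illustration (mine, not Chomsky's formalism): one recursive rule,
# S -> "Mary thinks that " + S, yields a distinct grammatical sentence
# at every depth of embedding.

def sentence(depth):
    """Build a sentence with `depth` levels of clause embedding."""
    if depth == 0:
        return "the dog barks"
    return "Mary thinks that " + sentence(depth - 1)

# No upper bound: each extra level of embedding is a new sentence.
for d in range(3):
    print(sentence(d))
```

The finite resources (two rules, a handful of words) do all the work; the unboundedness comes entirely from the recursion.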
As to the rejoinder that we can just look and see that humans are cognitively very special, my response is that we won't find out much more about that if we start off with an ill-conceived insistence on some sort of fundamental difference.

Wednesday, March 28, 2007

Instrumentalism, Meet Metaphysics

Lately I've been working on the problem of phenomenal properties, but (owing to covering Chapters 10 and 11, on Davidson and Dennett, in John Heil's Philosophy of Mind in my class this week) I am today thinking about intentional properties. All of this talk about properties is sloppy: I don't think that there are any such things as phenomenal "properties," for example. If there are such things as formal properties or relational properties, then maybe there are intentional properties, which would be some species of the former.
To pose the problem, suppose that you were a physiologist studying the various organs of the body. You work with the stomach for a while, figuring it out, and come to see the digestive system and how it functions. Moving along, you'll eventually come to the nervous system, and it too will be described as a physical system with physical processes going on. But psychology has a way of talking about persons that does not seamlessly mesh with the way we talk about bodies and brains. (A crucial point for understanding mind-body relations is that intentional ("belief-desire") psychological ascriptions are made of whole people, with all of their fingers and toes and dogs and cars, and not of brains or parts of brains. Keeping that in mind clarifies the brain process/consciousness relation a lot.)
The problem is that ordinary intentional psychology ascribes mental states to persons, and these mental states have content. This involves us in the idea that something functions as a symbol with meaning. But when we as physiologists learn and describe the physical processes in organs, we (in the sense of "one"; no, I'm not a physiologist) don't (seem to) describe these processes in terms of information processing or intentional content, etc. The semantic properties of mind and language don't seem to map onto any physical properties. Davidson gives us some arguments that take the problem beyond the basic issue of whether or not correlative relations are constitutive of causal relations (I think maybe yes at this point): he points out that the way we're going to have to interpret any little bit of a person's mental content (literally, the meaning that is symbolized by their mental representations) is in terms of that content's interactions with a larger system of content, inference, implication, value, memory, etc. The "web of belief," Quine called it; my coinage is to speak of the "intentional economy" of a person. We will also have to ascribe to the person a minimum level of "rational" behavior, guessing that they will follow out the "rules" of the intentional economy more or less as some model "rational" person would. These two features of psychological interpretation, the "holistic" character of meaning and the rationality assumption, have no counterpart in physical description, Davidson argued, and thus "psychophysical" laws are not possible.
I'm not as sure about the propositional nature of the "attitudes" as Davidson is, or even whether I think a representational model of mind is correct (although it seems to be enjoying a vogue at the moment). But I do agree that intentional explanation may not be "translatable" or "analyzable" into physical explanation: that reductive materialism fails to account for the metaphysics of intentionality, because the properties that intentional descriptions pick out (that of believing that "x" and desiring that "y," say) are multiply realizable (or "supervenient") properties and thus may characterize some indefinitely wide set of physical systems. Thus it looks to me as if intentional properties are some sort of formal or relational properties, like the entailments of geometry proofs. The benefit of this view is that the metaphysical problem turns out not to be specific to the mind-body relation; rather, there is a universal problem for materialism. It looks like it may also, by locating content in a "wide" way (that is, as an emergent property of the system's relationship with its environment), be useful as an eliminativist approach to representation. Just stir in a little Plato and everything's fine!

Tuesday, March 20, 2007

What Mary Couldn't Say

Frank Jackson's "Knowledge Argument" is motivated by this counterfactual: imagine Mary, the color-blind color vision specialist. Mary is an expert on all aspects of color perception: the physics of light waves, the absorption and reflectance properties of surfaces, etc., and the physiology of the eyeball, the function of the rods and cones, the optic nerve, and the color-processing areas of the brain. In fact, let's say that Mary is the world's top expert on these matters: let's say that Mary knows the complete and correct physical description and explanation of color perception. And then remember: Mary is color-blind (perhaps her professional pursuits are a bit of overcompensation). She's never seen reds or blues. She doesn't know "what it's like" to have these phenomenal experiences. The argument purports to show that psychology cannot be naturalized: the complete and correct physical description, which includes all of the physical information, nonetheless lacks some (further) information, the knowledge of what it is like to see color. Thus if "naturalized" psychology means psychology within the bounds of physical explanation, the project founders (and Husserl was right that phenomenology is autonomous).
This argument fails to prove that psychology cannot be naturalized. The reason it fails is that, while it is true that Mary can't say what it is like to see color, neither can anyone else. Phenomenal experience is beyond the limits of language. Ostensibly phenomenal terms ("He is in pain," "She can taste the chocolate") necessarily function on the basis of some kind of public criteria. (I have been rehearsing these Wittgensteinian arguments through the last several posts: they are the "solipsism" passage in the Tractatus and the "private language" argument of the PI.) In the present context, note that what is at stake is what it would take to naturalize psychology: to incorporate psychology fully and seamlessly into an overall physicalist worldview. Thus the argument is essentially epistemological, not metaphysical. If it is right to say that "phenomenal description," meaning description of phenomenal experiences themselves as distinct from description of things and properties in the physical world, is not possible (is conceptually incoherent), then Jackson's Knowledge Argument fails to show that psychology cannot be naturalized.

Saturday, March 17, 2007

A Two-Pronged Solution to the Problem of "Qualia"

Synthesizing the last two posts, then: I'm thinking that the right account of phenomenal properties consists of these two arguments:
1) Metaphysically speaking, the "theory of mind" for phenomenal properties is (my coinage) hyperchauvinistic type-to-type identity. The community forsook reductive materialism because of the "chauvinism" problem, that is, the multiple realizability ("supervenience") of intentional ("belief/desire") states. My larger thesis at this point is that "mind" is a complex concept and that we need two theories for two different problems: one for intentionality and another for phenomenal properties. While it is true that intentional states are multiply realizable, and thus true that reductive materialism fails as an overall theory of mind, the ultimate metaphysical analysis of phenomenal properties is that the qualitative experience is what it is because the specific body is what it is.
2) Epistemologically speaking, language cannot refer to phenomenal experiences: they are the ground in which description (of anything) is possible. Meanwhile, the referents (semantics) of phenomenal descriptions are necessarily public. Phenomenology, if that is conceived as the study of phenomenal experience as distinct from the study of the physical world (science), is not possible; it is beyond the limits of language. These arguments are essentially Wittgensteinian.

Friday, March 16, 2007

Phenomenal Properties are not Properties

Talk about properties implies a world full of things that have the properties (putting aside for this occasion the Platonic alternative, that properties are the kind of transcendental entities that need not be instantiated by physical particulars). Part of this "ground of being" is phenomenal experience. This is what Wittgenstein meant (in the Tractatus) by his "solipsistic" identification of the self and the world (5.6-5.641): "The subject does not belong to the world: rather, it is a limit of the world." Subjective experience is the framework within which description of the world is possible (and yes, Wittgenstein recognized that Kant had a very similar argument). Phenomenal experience constitutes the world, and the world constitutes phenomenal experience. Thus phenomenology is not possible: part of this argument is the private language argument of the PI, but a deeper thread here is the idea that phenomenal quality is the building material for description per se; ordinary description (of the object world) just is phenomenal description, at the limit of phenomenal description. This is the basis of Wittgenstein's rejection of the concept of mental representation, of mental content, altogether. On this view, there are no such things as "phenomenal properties." This seems right to me. (It looks like Spinoza was thinking along similar lines in his analysis of properties; and thanks to Professor John Heil for a substantive reply to my e-mail.)

Tuesday, March 13, 2007

Functionalism and Zombies

David Chalmers's brief for metaphysical dualism (in The Conscious Mind) is sporting (like Berkeley is sporting), and I appreciate that. It does not turn out to be field-transforming, however (Wittgenstein, Putnam, Fodor: field-transforming, for better or for worse). Too much of the work is done by Chalmers's claim that we can conceive of zombies: humans who behave (function) exactly like other persons, but who have no phenomenal experience (who are quale-free). Chalmers takes this counterfactual to be, by itself, an argument for mind-body dualism. For myself, I doubt that one can conceive of a zombie. There are several possible lines of argument here. Today I am thinking of Wittgenstein's claim that the semantics of psychological descriptions (like the semantics of all descriptions) must be public, as language is essentially (necessarily) intersubjective. All phenomenal terms, then ("pain," "taste," "sensation"), have double lives: their nominal referents are qualia, but their conditions of use (W. would say their "grammars") are public. My view is that the mind-body problem is a complex problem, specifically that we need one theory to deal with intentional properties and another to deal with phenomenal properties. Functionalism is the kind of theory that deals with intentional properties, which I take to be some sort of formal, relational, "public" properties. For phenomenal properties we need reductive materialism. That is, phenomenal properties are not multiply realizable. When we say that David Chalmers, Flipper the Dolphin, My Favorite Martian, and Commander Data all like chocolate, we are referring to something that instantiates a particular functional role. It is not required (it does not follow) that they all have the same qualitative experience. If this is right, then the (alleged) conceivability of zombies does not constitute a proof of mind-body dualism, only of the inadequacy of functionalism.
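The point about functional roles and qualia can be sketched in code (a toy example of my own; the class names are hypothetical): the functional predicate checks only the causal role, so very different realizers all satisfy it, while nothing in the predicate fixes what any of them experiences.

```python
# Toy sketch (mine; the names are hypothetical): "likes chocolate" as
# a functional role that different physical systems can realize.

class Human:
    def react_to(self, stimulus):
        # Realized in neurons.
        return "approach" if stimulus == "chocolate" else "ignore"

class Android:
    def react_to(self, stimulus):
        # Realized in circuits: same input-output role, but the
        # predicate below says nothing about shared qualia.
        return "approach" if stimulus == "chocolate" else "ignore"

def likes_chocolate(agent):
    """Functional test: does the agent play the right causal role?"""
    return agent.react_to("chocolate") == "approach"

# Both realizers satisfy the functional predicate; it does not follow
# that they have the same qualitative experience.
print(likes_chocolate(Human()), likes_chocolate(Android()))
```

The predicate quantifies over roles, not realizers, which is exactly why it is silent about the qualitative side.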

Monday, March 12, 2007

On a Technical Problem for Behaviorism

Today's post is a little wonkish, I'll admit, so if behaviorism in general doesn't interest you, you may want to scroll down and see some of the other posts. Supposedly a problem for behaviorism is that mental terms infest behavioral analyses of psychological descriptions: for example, "He likes chocolate" means, among other things, that he'll move towards chocolate, providing he doesn't want to fool us, and providing he doesn't hope to lose weight, etc. The mental terms don't wash out. I don't think that this is necessarily a problem. If the behaviorist were right to say that psychological descriptions are properly analyzed as observable dispositions to behave, then all of the mental terms that keep cropping up in the analyses could also be so analyzed. We could do something along the lines of a Ramsey sentence, using DBx, for "disposition to behave x," instead of Fx. That doesn't mean that there is no problem. Carnap, as I recall, diagnosed the real problem here: the analyses into behavioral dispositions would result in indefinitely long, perhaps even infinitely long, descriptions, and that is a problem, one that defeats behaviorism, in fact. But it looks like functionalism is going to have the same problem. The function of any given state/process is going to be defined by that state's role in the larger functional economy, but that economy is open-ended. This is similar to Davidson's argument for the impossibility of psychophysical laws: the functional property, like the intentional property (same thing under a different description), is defined in reference to the larger functional/intentional "web," but the physical state is not.
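The regress that Carnap diagnosed can be made vivid with a toy expansion table (my own illustration; the particular clauses are invented, not from Carnap): each behavioral analysis of a mental term introduces clauses containing further mental terms, so the purely behavioral paraphrase never closes.

```python
# Toy illustration (mine; the clauses are invented) of Carnap's point:
# each behavioral analysis of a mental term introduces further mental
# terms, so the expansion never bottoms out.

ANALYSES = {
    "likes chocolate": ["moves toward chocolate",
                        "does not want to fool us",
                        "does not hope to lose weight"],
    "want to fool us": ["conceals his approach",
                        "believes we are watching"],
    "hope to lose weight": ["avoids second helpings",
                            "desires to be thin"],
}

MENTAL_MARKERS = ("want", "hope", "believe", "desire")

def mental_terms(clauses):
    """Return the clauses that still contain a mental term."""
    return [c for c in clauses
            if any(m in c for m in MENTAL_MARKERS)]

step1 = ANALYSES["likes chocolate"]
# Two clauses of the first analysis are themselves mental...
print(mental_terms(step1))
# ...and their analyses contain mental terms in turn ("believes",
# "desires"), and so on indefinitely.
```

The same table makes the functionalist's version of the problem visible: defining any one role requires mentioning other roles, and the table of roles is open-ended.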