Consider Hilary Putnam’s “Twin Earth” argument. Imagine, the argument goes, a Twin Earth: one that is molecule for molecule identical to this Earth. Of course you will have a Twin Earth doppelganger, molecule for molecule identical to yourself. On a reductive materialist account, granting that you have physically identical brain states, it follows that you have psychologically identical mental states. For example, you will have identical beliefs about water, the colorless, odorless substance that is ubiquitous in your parallel worlds. But imagine that there is just one difference between Earth and Twin Earth: on Earth water is composed of H2O, but on Twin Earth water is composed of XYZ. Now everything about you and your doppelganger – both your physical states and your mental contents – is identical, yet your beliefs are not the same. Your beliefs may be correct and your doppelganger’s false (he or she may believe, as you do, that water is H2O), but at a minimum they are about different things. Thus the actual meaning of an intentional state cannot be determined either by its physical properties or, startlingly, by its mental properties (its causal properties, which can only be explained with reference to its contents).
Another gedanken experiment from Putnam: Imagine an ant walking in the sand, behaving normally for an ant, for the usual ant reasons. It leaves a trail that happens to resemble a portrait of Winston Churchill. Is this a representation of Winston Churchill? No, it is not, even though it has the same physical properties as an actual sketch of Winston Churchill: how it came to be is relevant to its status as a representation. Something is not a representation by virtue of resemblance alone: Winston Churchill, after all, is not a representation of a sketch of himself, or of an ant trail that happens to look like him.
Putnam’s famous slogan is “meaning just ain’t in the head.” The meaning of an intentional state is determined not only by physical and mental properties intrinsic to the subject (“in the subject’s head”) but also by facts about the environment and the subject’s relationship with the environment. This view is called “externalism,” and is also referred to as the “wide content” view: the view that intentional predicates refer to relationships between the nominal subjects of intentional predication and their environments.
The representational theory of mind is deeply entrenched, and some disambiguation is necessary to avoid outlandish interpretations of externalism and its claims. In this book I am examining the meanings of psychological predicates; in this chapter, intentional predicates. So for me the “meaning” in question is the meaning of words like “believes,” “desires,” “hopes,” “fears” and so on, in the sense of ontological reference: what ontological commitments do we make when we use these words? When Putnam says that “meaning isn’t in the head,” he isn’t talking about the meaning of the word “belief” (my topic); he’s talking about meaning in the sense that the proposition that is allegedly the object of the attitude is supposed to be about something: on the representational realist view, the proposition is the thing that is about something, and the proposition is represented “in the head.”
It is a consequence of externalism that beliefs (or any intentional states) aren’t about anything at all. That’s not how they work. They’re not even about the world, let alone about internal representations of the world, because what they actually are, ontologically speaking, is relationships between persons and their environments, and relationships aren’t about anything, any more than physical objects or dispositions to behave are about anything. And that is exactly what naturalization requires: that there no longer be any reference to anything that “means” anything, meaning being a non-physical and therefore non-existent “property.” On a functional-role semantics words themselves aren’t about anything, not even proper nouns, so they can’t serve as magical vessels of meaning as they do on a traditional view. Language is a way that persons have of doing things – whole embodied persons situated in particular environments.
Here I am going beyond Putnam (undoubtedly an unwise thing to do!), because Putnam helps himself to mental content even as he argues that mental content alone is not sufficient to determine meaning. But if meaning, understood as “wide content,” turns out to be a description of relationships between persons and environments then there cannot be any mental content. There are no propositions transcendently emanating from Platonic heaven whether or not some person adopts an attitude towards them. There are only specific, concrete instances of states of affairs (I will use “states of affairs” as a more economical way of saying “relationships between persons and their environments;” this is consistent with the more standard use of the phrase as referring to ways the world could be) and goal-directed, token incidents of language-use.
Is Santa Claus a problem? Can’t one be thinking about Santa Claus even though Santa Claus is not part of the environment? No, Santa Claus is not a problem, because Santa Claus is a cultural convention, and that counts as part of the environment. In the Santa Claus case one is thinking about that convention, not about an actual mythical character, because mythical characters are not actual; that’s what “mythical” means. As for misrepresentation (as in the case of the five-year-old who believes that Santa Claus is actual), this is something that can only be demonstrated operationally.
What about a person or creature or what have you that exists only in the imagination of one individual? A stranger who appears in a dream, say? Here the right response is to remind ourselves that we are talking about the semantics of intentional predication, not about private experience. In fact the argument is that there can be no public description of private experience; remember Wittgenstein’s beetle-in-the-box. Note that, to the extent that “interpreting representations” is thought of as a process with a phenomenal component (after all, mustn’t there be “something that it’s like” to interpret a representation?), the putative experience of a representation (can one interpret a representation without experiencing it?) cannot be the criterion for proper intersubjective use of the word (see the discussion of phenomenal predicates in Chapter Three).
But surely dreams are evidence of mental representation? Aren’t dreams, in fact, just direct experiences of mental representation? No: although mental processes often involve experiences that seem similar to inspecting representations, remember that there is no explanatory value in literally positing mental representations. They don’t help to explain dreaming any more than they help to explain perceiving, remembering or imagining. In fact they make the model of the mental process considerably more complicated and difficult, which is a good reason for denying them. Berkeley thought that to clear up the Lockean mess of properties of objects-in-themselves, powers of objects to cause perceptions, and properties of perceptions, either the “mental” or the “external world” had to go. On that point he was right.
A little bit more disambiguation: my intentions here are perhaps deceptively arcane. I am focused on the metaphysics of the mind-body relationship. When I argue that, metaphysically speaking, there are in fact no such things as “mind” or “meaning,” I am arguing that traditional notions of those concepts are currently misleading us in our efforts to understand how the nervous system works. I don’t think any radical change in the way we talk is called for. In fact it is my view that a great deal of our psychological talk is ineliminable, and I think this goes for “mind,” “reference” and “meaning” just as much as for “belief” and “desire” and “beauty” and “justice,” and for the same reasons. If one accepts the present argument against mental representation, it still makes as much sense as it ever did to ask “What are you thinking about?” or “What is that book about?” or “What does that word mean?” The metaphysical question is about the proper semantic analysis of the way that we have always talked; that is nothing like a critique. As Wittgenstein said, philosophy “leaves everything as it is.” If the present proposal that intentional predicates pick out relationships between embodied persons and their environments is sound, then that has always been what we have been doing. Jettisoning realism about mental representations is a substantial matter for cognitive science – I would stress its importance for developing experimental paradigms in brain science – but it’s hard to see how it could have any effect on popular usage of intentional terms.
To summarize, intentional predicates are applied to whole persons in particular environments, not to brains or nervous systems or neural processes or any physical part of persons. Perceiving, imagining, thinking, remembering and so on are the kinds of things that whole persons do. Among these intentional activities is interpreting. Symbols are interpreted by persons, and thus symbols must be located where persons are located: in the world. It follows that language, a symbol system, is a feature of the person’s environment as well: there are no symbols in the head. There is not, ontologically speaking, any such thing as meaning: there are only particular acts of persons negotiating their environments with the use of sounds and symbols (and, following the mereological fallacy argument, this will be true of all language use, including idle musings, random thoughts and so on). The use of the word “meaning” as applied to intentional predication (“What is he thinking about?” “What does she know about gardening?” etc.) is partially constituted by facts about the environment (following Putnam). The natural semantic for intentional predicates is that they refer to relationships between individuals and their environments. If the account given here is persuasive there cannot be any such things as “mental representations.”
Monday, December 20, 2010
Propositional Attitudes
When Bertrand Russell coined the phrase “propositional attitude” in his 1921 book The Analysis of Mind, he wasn’t thinking of “proposition” in the sense of a piece of language. He was thinking that what was represented was a situation, or what would today most likely be called a “state of affairs”: a way the world could be. However, several considerations led subsequent philosophers of mind to take a much more literal view of propositions as linguistic entities and as the objects of the attitudes.
Think of a tiger. Alright: now, how many stripes does your imaginary tiger have? Probably your “mental image” of a tiger turned out not to have a specific number of stripes. But a pictorial representation of a tiger would have to. Linguistic (formal) systems can include relevant information and need not contain irrelevant information, an obvious adaptive advantage over isomorphic (pictorial) representation. Formal representation was more congenial to the operationalists (such as computationalists) who wanted to develop functional models of cognition. Then in the late 1950s the linguist Noam Chomsky, critiquing behaviorism, made the enormously influential proposal that formal syntactical structure was “generative”: grammatical forms like “The ___ is the ___” allow for multiple inputs and thus indefinitely many (linguistic) representations. Taken to its extreme, this argument appears to show that a being must have a formal system for generating propositions to be capable of being in an intentional state at all. Finally, the argument that it is propositions that have the property of meaning and that it is propositions that bear logical relations to each other made it seem that a linguistic theory of representation made progress on the mind-body problem.
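A toy sketch may make the generativity point concrete (this is an illustration only, not Chomsky’s formalism; the little lexicon and the embedding rule below are invented for the example): a finite vocabulary plus one recursive rule yields indefinitely many distinct sentence-like representations.

```python
# Toy sketch of generativity (illustration only, not Chomsky's formalism):
# a finite lexicon plus one recursive embedding rule yields indefinitely
# many distinct sentence-like "representations".
import random

nouns = ["water", "Paris", "the fountain", "the tiger"]  # invented mini-lexicon

def sentence(depth=0):
    # Base case: a simple "X is Y" form; otherwise embed a sentence inside
    # a belief report, which is where the open-endedness comes from.
    if depth > 2 or random.random() < 0.5:
        return f"{random.choice(nouns)} is {random.choice(nouns)}"
    return f"he believes that {sentence(depth + 1)}"

for _ in range(3):
    print(sentence())
```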
On my view this is mistaken: the representational model of mind, by definition, locates “mental content” “in the head.” The basic metaphysical problem with the representational model has by now been made clear: “meaning,” what I have been calling the “intentional property” or the “semantic property,” is an irredeemably non-physical “property” that must be washed out of any naturalistic theory of mind. Once one recognizes that intentional predicates are predicated of whole persons – once one sees that positing mental representations necessarily commits the mereological fallacy – the matter is settled. However there is a tight network of arguments and assumptions about intentional states as “propositional attitudes” that will have to be disentangled to the satisfaction of readers who are disposed to defend representations.
The defender of propositional attitudes will start by pointing out that intentional states can only be individuated by virtue of their respective contents. What makes Belief X different from Belief Y is that X is about Paris and Y is about fish. This looks like a block to reduction: to correlate electrochemical activity in the brain, say, with Belief X, we must already be able to specify which belief Belief X is. We don’t have any way of getting from no-content to content (from the non-mental to the mental). This motivates the problem of mental causation: it appears that the content (meaning) of the proposition is what plays the causal role in the production of behavior: when told to proceed to the capital of France he went to Paris because he believed that “Paris is the capital of France.” All the explanation in physical (neurophysiological) terms one could possibly make wouldn’t be explanatory if it didn’t at some point reveal the meaning that is expressed in the proposition, and it doesn’t: “He believes that Paris is the capital of France” is not shorthand for a causal chain of neurophysiological processes.
Donald Davidson famously pointed out a further problem for the development of “psychophysical laws” (as he called them), laws that systematically identified brain processes with particular instances of intentional thought: no one propositional attitude could ever suffice as the discrete cause of a behavior because the causal implication that the propositional attitude has for the acting subject necessarily emerges from the logical relations that that “attitude” has with all of the other intentional states of the subject. Davidson’s phrase for this was “meaning holism,” the view that meaning (in the sense of explanatory psychological predication) is a property of networks of propositional attitudes, not of individual ones. There is not an assortment of individual intentional states in a person’s mind such that one or another might be the proximate cause of behavior; each person has one “intentional state,” the sum of the logically interrelated network of propositional attitudes.
Propositions are the bearers of logical relations with each other. Physical objects and processes, the argument goes, have no logical relations with each other. To believe that the drinking fountain is down the hall is to have the attitude towards the proposition “The drinking fountain is down the hall” that it is true, and to have a desire for water is to have the attitude towards “I have water” that one wants to make that proposition true. The explanatory utility of the intentional predicates – in this case the ability to make an inference from their coincidence that, all other things being equal, the subject will form an intention to walk to the fountain (that is, to make the proposition “I have walked to the fountain” true) – depends on the meaning of the propositions.
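To make the picture under discussion concrete (a sketch of the representationalist’s story, not an endorsement of it; the data structures and the crude matching rule are my own inventions for the example): the explanatory work is supposed to be done by treating the belief and the desire as sentence-like items and deriving the intention from their logical relation.

```python
# Sketch of the belief-desire inference pattern the representationalist
# describes (illustration only; the attitude structure and matching rule
# are invented for the example).
from dataclasses import dataclass

@dataclass
class Attitude:
    kind: str     # "believes" or "desires"
    content: str  # the sentence-like "proposition"

def practical_inference(attitudes):
    # All other things being equal: if the subject desires to have X and
    # believes that X is at some place, attribute the intention to go there.
    beliefs = [a.content for a in attitudes if a.kind == "believes"]
    desires = [a.content for a in attitudes if a.kind == "desires"]
    intentions = []
    for d in desires:
        topic = d.split()[-1]              # crude: "I have water" -> "water"
        prefix = f"{topic} is "
        for b in beliefs:
            if b.startswith(prefix):       # "water is down the hall"
                place = b[len(prefix):]
                intentions.append(f"walk to where the {topic} is ({place})")
    return intentions

subject = [Attitude("desires", "I have water"),
           Attitude("believes", "water is down the hall")]
print(practical_inference(subject))
# ['walk to where the water is (down the hall)']
```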
Of course making such an inference from the logical relationship between the two propositions also requires that we make a rationality assumption: we must assume about the subject that he, too, will appreciate these logical relations. That is part of the metaphysical “problem of rationality.” I cannot pretend that the distinction between the problem of rationality and the problem of representation is entirely clear-cut, but at this point I need only present a semantic for intentional predicates that locates logical relations out in the world rather than in the head; that will suffice to defeat the argument that mental content is necessary to explain the causal role of intentional states. The further metaphysical problem about the supposed lack of any correlation between the (contingent) physical relationships between states and processes in the body and the (necessary) logical relationships between propositions is dealt with in the discussion of the problem of rationality that is the second half of this chapter.
The terminal station for the line of argument that meaning is an indispensable property (and thus that representations are an ineliminable feature) of intentional explanation is Platonic realism about propositions. On this view, their role as individuators of intentional attitudes and as bearers of logical relations demonstrates that propositions are matter-independent, mind-independent “abstract objects,” ineliminable from ontology. Taking concrete sentences as the “tokens” and propositions as the “types,” the Platonic realist argues that propositions resist a standard nominalist treatment: a “proposition” cannot be simply the name of the set of all of the concrete sentences that express it. The Platonic realist appeals to our intuition that an unexpressed proposition is still a proposition. The fact that propositions can be translated into multiple languages is taken as a demonstration that propositions are not identical to their concrete sentence exemplars.
Wittgenstein proposes an alternative, behaviorist account of language. Wittgenstein’s famous dictum is that meaning is use. The “meaning” of a word, on this view, is whatever the user (speaker or writer) of the word accomplishes by the action of using the word. This alternative to traditional theories of meaning is often called “functional-role semantics.” Wittgenstein rejects the Platonic picture of concepts as essences: the property-in-itself, as distinct from any and all of the concrete exemplars of the property. Language use, he argues, is a type of behavior that reflects a “form of life,” in the present case the form of life of human beings. There are no essential meanings (there is no such thing as “meaning” in the traditional sense at all), just patterns of human behavior that can be roughly sorted out on the basis of resemblances and shared histories (these are language “games”). We may gather together statements about “justice” and note that they have similar contexts of use and similar implications for action, just as all of the members of a family can be linked through chains of family resemblance, but that is all. There can be no representation of justice because there is nothing to represent, just as a family of human beings has no “family avatar.” This argument generalizes to all words and their uses, not only those that we think of as naming “concepts.”
If this is right then we are entitled to nominalism about “propositions” after all. A proposition is nothing more than all of the sentence-tokenings of that particular string of symbols. In fact language loses its supposed “interior,” the meaning traditionally supposed to be within or behind the symbol, just as “mind” can now be seen as intelligible patterns of behavior of persons rather than as something “in the head.” Wittgenstein’s vision was to see everything as surface only, both in the case of mind and in the case of language. Psychological description and explanation, understood as an intersubjective discipline limited by the limits of language itself, was necessarily operational.
Now I can sketch out the first natural semantic: the semantic for intentional predicates, which replaces the interpretation on which those predicates attribute mental representations to persons.
Sunday, December 12, 2010
The mereological fallacy
Stomachs don’t eat lunch. Eating lunch is something that a whole, embodied person does. We understand the role that stomachs play in the lunch-eating process; we appreciate that people can’t eat lunch without them. Brains don’t think. They don’t learn, imagine, solve problems, calculate, dream, remember, hallucinate or perceive. To think that they do is to commit the same fallacy as someone who thought that people can eat lunch because they have little people inside them (stomachs) that eat lunch. This is the mereological fallacy: the fallacy of confusing the part with the whole (or of confusing the function of the part with the telos, or aim, of the whole, as Aristotle, who once again beat us to the crux of the problem, would say).
Nor is the homunculus a useful explanatory device in either case. When I am asked how we might explain the workings of the mind without recourse to mental representations, the reply is that we fail to explain anything at all about the workings of the mind with them. “Remembering my mother’s face is achieved by inspecting a representation of her face in my mind.” This is explanatorily vacuous. And if reference to representations does nothing to explain dreaming, imagining and remembering, it is particularly egregious when mental content is appealed to for an explanation of perception itself, the original “Cartesian” mistake from which all of the other problems derive. A person is constantly developing and revising an idea of his or her world; you can call it a “picture” if you like (a “worldview”), but that is figurative language. A person does not have a picture inside his or her body. Brains don’t form ideas about the world. That’s the kind of thing people do.
This original Cartesian error continues to infest contemporary cognitive science. When the brain areas in the left hemisphere correlated with understanding speech light up and one says, “This is where speech comprehension is occurring,” the mereological fallacy is alive and well. Speech comprehension is not something that occurs inside the body. Persons comprehend speech, and they do it out in the “external” world (the only world there is). Positing representations that exist inside the body is an instance of the mereological fallacy, and it is so necessarily, by virtue of the communicative element that is part of the definition of “representation,” “symbol” etc. Neither any part of the brain nor the brain or nervous system considered as a whole interprets anything. The key to a natural semantic of intentional predicates is the realization that they are predicated of persons, whole embodied beings functioning in relation to a larger environment.
This realization may also be momentous for brain science. Go to the medical school bookstore, find the neurophysiology textbooks and spend a few minutes perusing them. Within the first few minutes you will find references to the movement of information (for example, by the spinal column), to maps (for example, on the surface of the cortex), to information processing (for example, by the retina and in the visual cortex) and so on. (Actually I suspect that brain scientists are relatively sophisticated in their understanding of the figurative nature of this kind of language compared to workers in other areas of cognitive science; the point is just that representational talk does indeed saturate the professional literature through and through.) But if brain function does not involve representations then we don’t know what brains actually do, and furthermore the representational paradigm is in the way of finding out: the whole project needs to be reconceived. If there is any possibility that this is true at all then these arguments need to be elaborated as far as they can be.
Taking the argument from the mereological fallacy seriously also draws our attention to the nature of persons. It follows from what has been said that the definition of “person” will be operational. Operational definitions have an inevitably circular character: a person is any being that takes intentional predicates. One might object that we routinely make intentional predications of, say, cars (“My car doesn’t like the cold”), but as Daniel Dennett famously pointed out this objection doesn’t go through when we know that there is a “machine-language” explanation of the object’s behavior: I may not know enough about batteries, starters and so forth to explain my car’s failure to start in the cold, but someone else does, and that’s all I need to know to know that my “intentional” explanation is strictly figurative. But then don’t persons also have machine-language explanations?
No: my car won’t start because the battery is frozen. The mechanic does not commit any fallacy when he says, “Your battery’s the problem.” The part is not confused with the whole. It’s really just the battery. Now suppose that you are driving down the freeway searching for the right exit. You remember that there are some fast-food restaurants there, and you have a feeling that one always thinks that they have gone too far in these situations, so you press on. However you manage to do this, it is no explanation to say that you have done it because your brain remembered the fast-food restaurants, and has beliefs about the phenomenology of being lost on the freeway, and decided to keep going and so forth. That’s like saying that you had lunch because your stomach had lunch.
In fact there is not a machine-language explanation of personhood. Kant, writing in the late 1700s, is fastidious about referring to “all rational beings,” never “human beings”; he understands that when we are discussing the property of personhood we are discussing (what I would call) a supervenient functional property (Kant would call personhood “transcendental”), not a contingent physical property. Unfortunately Kant is programmatically intent on limiting the scope of materialism in the first place and thus fails to develop non-reductive materialism. But he understood that the mental cannot be one of the ingredients in the recipe for the mental.
Sunday, December 5, 2010
The spectrum of materialisms
By the 1950s a burgeoning physicalist ideology led philosophers to go beyond the methodological scientism of behaviorism and try to develop an explicitly materialist theory of mind. (I prefer the term “physicalist” to “materialist,” but in this part of the literature the term “materialist” is almost always used so I will follow popular usage.) This movement had everything to do with the intense flowering of technology in this period. For example, electrodes fine enough to penetrate neural axons without destroying them allowed for the measurement and tracking of electrochemical events in live brains. This immediately led to demonstrations of correlations between specific areas of the brain and specific mental abilities and processes. It seemed to be common sense that the materialist program would essentially consist of identifying mental states with physical states (of the brain): “identity theory.”
This is reductive materialism, the view that the descriptive and explanatory language of psychology can be reduced (or translated, or analyzed) into the language of neurophysiology (or maybe just physiology: the argument here does not depend on anyone holding that the brain is the only part of the body that instantiates mental states, although many have held that position. I will continue to use the word “brain” for the sake of exposition). The identity theorists couldn’t say that brain states caused mental states or somehow underlay mental states, because that would still distinguish the mental from the physical. The theory had to say that mental states simply were physical states. Reductive materialism/identity theory is common-sense (albeit incorrect) materialism, and stands at the center of the materialist spectrum, with a wing on either side (I have heard philosophers refer to the two wings as the “right wing” and the “left wing,” but I neither understand that categorization nor see how it is useful. One can make arguments to the effect that either wing is the “right” or the “left” one).
The essential metaphysical problem for reductive materialism is not, in retrospect, hard to see: intentional states are multiply realizable, supervenient on their physical exemplars. (A crucial point for the larger argument of this book is that this is a problem for intentional states specifically; the following arguments do not go through for consciousness.) Since the extension of the set of potential subjects of intentional predicates is not fixable with any physical specifications, reductive materialism is “chauvinistic,” as it means to identify a given intentional state (the belief that there are fish in the barrel, say) with some specific human brain state.
Functionalism is a response to this metaphysical problem. Functionalism stresses the type/token distinction: while token-to-token identity is possible, type-to-type identity is not. That is, every token (every actual instance) of an intentional state is instantiated by some specific physical state (assuming physicalism, which technically speaking functionalism doesn’t have to do). Functionalism plus physicalism is non-reductive materialism, one of the wings of the materialist spectrum. Functionalism abstracts away from the token physical instantiations by replacing physical descriptions with functional descriptions. (Aristotle, trying to block the reductive materialism of Democritus, located this block at the level of biological description rather than psychological description, and his ideas continue to be of the utmost importance for philosophy of mind to this day.) A mature functionalist psychology, free of references to the human body, would amount to a generic set of performance specifications for an intelligent being; in this way functionalism (that is cognitive psychology, computer science, logic, robotics and other functional-descriptive pursuits) provides a “top-down” model for backwards-engineering the human nervous system itself, tunneling towards a link with the “bottom-up” (or “wetware”) researches of neurophysiology, evolutionary biology, physical anthropology etc.
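A rough software analogy may help here (an analogy of my own, not a formulation from the functionalist literature; the class names below are invented for the example): a functional specification behaves like an abstract interface, and multiple realizability is the fact that physically very different systems can implement the same interface, so the type is fixed by the role while each implementation is a distinct token realizer.

```python
# Software analogy for functionalism (illustration only; class names invented):
# the abstract interface plays the part of the functional specification, and
# multiple realizability shows up as multiple, physically dissimilar realizers.
from abc import ABC, abstractmethod

class FishBeliefRole(ABC):
    """Functional specification: whatever state, together with a desire for
    fish, tends to produce barrel-upending behavior."""
    @abstractmethod
    def act(self, desires_fish: bool) -> str: ...

class DolphinBrainState(FishBeliefRole):     # one concrete realizer
    def act(self, desires_fish: bool) -> str:
        return "upend the barrel" if desires_fish else "swim on"

class RobotControllerState(FishBeliefRole):  # a physically very different realizer
    def act(self, desires_fish: bool) -> str:
        return "upend the barrel" if desires_fish else "idle"

for realizer in (DolphinBrainState(), RobotControllerState()):
    print(type(realizer).__name__, "->", realizer.act(True))
```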
Although functionalism is of great use as a heuristic, it is not clear that non-reductive materialism, considered as a theory of mind, succeeds in addressing the problem of mental representation, let alone in resolving it. On the non-reductive materialist theory a given mental state, for example the belief that the fish are in the barrel, is defined as any physical state X that plays the appropriate causal role in the production of behavior, as in “Flipper is trying to upend the barrel because Flipper desires fish and X.” This formula usefully allows for the possibility that the relevant function might be achieved without the use of representations, but it doesn’t rule out the use (the existence) of representations. In failing to resolve the problem of the semantic property (or, for that matter, the problem of rationality) in favor of a physicalist semantic, functionalism is something less than a full-blown “theory of mind.”
However, functionalism, or I should say the recognition of the problem of multiple realizability that motivates functionalism, does express the central problem for the other wing of the materialist spectrum. On the other side of reductive materialism from non-reductive materialism is eliminative materialism. Eliminative materialism emphasizes the possibility that a mature naturalized psychology need not be expected to provide a physical semantic of intentional states. The eliminativist argues that it is possible that the intentional vocabulary might instead be replaced altogether with a new, physical vocabulary. After all, while Zeus’s thunderbolts have been inter-theoretically reduced to electrical discharges, the heavenly spheres are not identified with anything in our contemporary astronomy. The history of science provides many examples of both reduction and elimination. The research program of cognitive science cannot just assume that the categories of traditional intentional psychology (“folk psychology”) carve the psychological world at its joints. Thus eliminativists propose the “Theory Theory,” the idea that the intentional vocabulary amounts to a particular theory about the mind, and that it is an old vocabulary that might be eliminated rather than reduced.
My uncle Ed, a devotee of corny jokes, likes to tell the one about the tourist who pulls over to ask the local how to get to Hoboken (all of Ed’s jokes are set in his beloved New Jersey). Thinking it over, the local finally says, “You can’t get there from here.” Eliminativism about the intentional vocabulary has a you-can’t-get-there-from-here problem. To say that the intentional vocabulary is subject to elimination is to say that we might talk another way. But as things stand, it can only be said of the eliminativist that they desire to show that we need not necessarily speak of desires, that they believe that “beliefs” are part of an eliminable vocabulary, and so on. For a time I thought that this merely indicated that eliminativism, like functionalism, was something less than a fully realized theory of mind, but the problem is more serious than that and we can see why by considering once again the problem of multiple realizability.
Socrates asks the young men to define justice. They try to explain the property by giving examples of just and unjust actions and of situations where justice does or does not obtain. Socrates rejects this method: examples of justice, he argues, can never be the definition of justice. Plato thinks that supervenient properties are transcendental properties. They do not emerge, somehow, from the contingent physical world (like Aristotle Plato is opposed to reductive materialism). Rather the physical world takes on intelligible form through participation, somehow, with the transcendental (I will return to Plato’s metaphysics in the discussion of the problem of rationality below). The supervenient nature of these properties demonstrates, to Plato’s mind, that they do not come to be and pass away along with their various, impermanent, physical instantiations. Plato was the first philosopher to recognize that intentional predicates supervene on multiple physical things; ultimately his argument is that souls are immortal because properties are immortal.
“Or again, if he (Anaxagoras) tried to account in the same way for my conversing with you, adducing causes such as sound and air and hearing and a thousand others, and never troubled to mention the real reasons, which are that since Athens has thought it better to condemn me, therefore I for my part have thought it better to sit here…these sinews and bones would have been in the neighborhood of Megara or Boeotia long ago” (Phaedo 98d).
Wittgenstein rejected Plato’s search for transcendent essences, but not the ineliminable nature of the intentional predicates. While Wittgenstein thinks that individual, concrete instances of uses of a word (that is, the set of actual tokens of the word) are all there is to the “meaning” of the word (“meaning” is simply use), he identifies psychological predicates with a form of life: “To imagine a language is to imagine a form of life.” Like Aristotle, Wittgenstein identifies psyche with life itself, not with the “mind” (towards which he has a Humean skepticism).
In sum, what the multiple-realizability (the supervenient nature) of intentional predicates demonstrates is that they cannot be replaced with some other way of talking. We can no more dispense with “belief” or “desire” than we can with “beauty” or “justice.” These words simply do not refer to any finite, specifiable set of physical characteristics of any finite, specifiable set of physical things. At a minimum this strongly suggests that the intentional vocabulary is ineliminable. (Again, none of this holds for phenomenal predicates. They require a completely different treatment that they will get in Chapter Three.) It follows from this that intentional predicates do not refer to any “internal” states at all, which is the key to developing a natural semantic for them.
First, though, let’s finish the discussion of eliminative materialism. There are two types of eliminativism. The first is the kind I have been discussing, the kind usually associated with the name: eliminativism about intentional predicates. But we have seen that physical analysis of nervous systems has no greater prospect of eliminating intentional predicates than physical analysis of works of art does of eliminating aesthetic predicates. What physicalism does both promise and require is the elimination of any reference to clearly non-physical properties (supervenient properties are not “clearly non-physical”; what their metaphysical status is continues to be the question that we are asking).
No, the clearly non-physical property in which intentional predication allegedly involves us has been clear all along: the semantic property. The only eliminativism worthy of that mouthful of a name is content eliminativism. As Jerry Fodor has written, “I suppose that sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t; intentionality simply doesn’t go that deep.” Representation is the only game that we know is not in town (although some further discussion of Fodor, one of the most important contemporary writers on this topic and a champion of the representational theory, will be necessary below). How ironic, then, that some of the philosophers most closely associated with “eliminative materialism” are in fact very much wedded to the representational paradigm when mental representation is the one and only thing that physicalism has to eliminate in order to be physicalism at all.