Tuesday, July 22, 2008
I'm traveling with my family, going out on a boat on the Delaware River tonight with all of my mother's nieces and nephews, but I can't resist a nice basic question.
Billie Pritchett asks, "Do you think intentional states are natural kinds?" I think that an underlying big question here is what "natural kinds" are; I for one don't know. Aristotle thought that individual things were primary being and that natural kinds were ineliminable categories of them, species being his paradigm example. Plato thought that logical relations, such as are described in mathematical proofs, were "universals," which were the most interesting kind of being, anyway, so far as he was concerned. I think that this metaphysical discussion lies at the heart of functionalism and of the philosophy of mind in general. In fact my interest in metaphysics grew out of my interest in the metaphysics of mind. If, in addition to the existential fact that something exists rather than nothing, it is a different (contingent) fact that the universe is formally organized - if there are two existential questions instead of one - then "materialism" is false. Progress is to see that on this view the mind/body problem is not a particular problem in metaphysics but rather an instance of the more general metaphysical problem. That looks like a little bit of resolution for the mind/body problem, at least as to the metaphysics of intentionality.
So, getting more focused on your question, my view (the short answer) is that intentional states are ineliminable. But what are they? (You see that metaphysics is a matter of taking your assertions seriously.) My (admittedly circular) claim is that something is a person to the extent that it takes intentional predicates (a jargon way of saying that we use belief/desire psychology to explain and predict its behavior). It looks to me that humans, some (actually I think many) non-human animals, possible aliens, and possible artifacts can all be (equally) persons, so I conclude that intentional predicates aren't tied to any specific matter or even any specific kind of organization of matter (note that the problem of phenomenal properties requires a whole separate treatment here).
I think that intentional descriptions are descriptions of relations between the person and the environment. This is the connection between my views and behaviorism, also "wide-content" (externalist) accounts, and of course my interest in Wittgenstein. I don't know if relations are properties ("relational properties"), maybe not (John Heil says no). Certainly the whole discussion of "properties" is just as inchoate as the discussion of "natural kinds."
Thursday, July 10, 2008
Behaviorism and the Mereological Fallacy
Gerardo Primero is interested in the question of whether or not intentional predicates applied to brains are meaningful. His idea is that there is a difference between saying something that is meaningful but wrong (his analysis) and saying something that is nonsense, devoid of meaning. His take on Wittgensteinian analyses (such as that of Bennett and Hacker in Philosophical Foundations of Neuroscience) is that they claim that, e.g., saying that there are images or sentences or other forms of representation "in the brain" is meaningless. So the question is whether or not Gerardo has a good criticism of Bennett and Hacker in this regard.
Note that this discussion is to a large extent a version of the oldest and biggest problem for behaviorism: we have strong intuitions that phenomenal experience is distinct from outward behavior, as in the case of the man who pretends he is in pain when he is not. Doesn't this expose behaviorism as incomplete at best, if the behaviorist claims that "Sally likes chocolate" entails no reference to the quality of her gustatory sensations when she puts the chocolate in her mouth? Daniel Dennett tries to untangle this in The Intentional Stance (unsuccessfully, I think), and David Chalmers in The Conscious Mind takes the alleged possibility of "zombies," operational duplicates of conscious persons who have no conscious experience, as grounds for metaphysical dualism about "phenomenal properties" (spuriously, I think).
But today I want to stick to Bennett and Hacker's theme of the "mereological fallacy," the fallacy of attributing to parts properties had only by the whole. I'm not myself too wedded to the idea that B & H hold that committing the fallacy is equivalent to "nonsense"; maybe they just think it's unhelpful, or vacuous. But I want to write more generally today.
Gerardo writes, "People do ascribe mental terms to things that are not persons (i.e. to corporations as in 'Microsoft believes that...,' to machines and robots as in 'it sees and recognizes visual patterns,' to brains and brain parts, to animals), and people usually understand each other..." The definition of a "person," it seems to me (and I am making no effort to defend or even necessarily represent B & H here), just is "any being that takes intentional predicates"; I take that to be the idea of operationalist approaches.
Viewed in this light, Gerardo's list of examples turns out to be quite heterogeneous. Long-time readers of this blog (are you the one?) know that I take animals to be paradigmatic examples of persons: dogs, say, believe and desire and hope and fear, etc., and the semantics of those predicates are the same, on my view, when applied to humans as when applied to dogs and many other non-human animals (contra Davidson, by the way). Similarly with possible conscious androids: as a materialist, I am committed to the view that human consciousness is a feature of physical properties that humans possess, so ipso facto an artifact that had those properties would be conscious (but computers ain't it; the relevant properties are not merely computational - John Searle gets this right). Corporations are a stranger example (remember Ned Block's Chinese nation example), and I'm not sure what I think about that: my intuition is pretty strong that animals and possible conscious artifacts are conscious as bodies (I'm pretty sure I have a physical criterion of personal identity), and that a "being" composed of unconnected parts maybe could not have consciousness in this (admittedly vague) sense. Still, a corporation, or nation, or team, is after all a kind of body, so there is at least room for discussion there. So "brains and brain parts" seem to be the odd man out on the list.
Years ago, when I first heard about functionalism, my first naive response was "But persons don't have any function!" Maybe that's right: the person is an embodied being with preferences and aversions. (Are the values a hard part for the possible conscious artifact? Maybe yes.) I'm thinking about the difference between the telos of the car battery (starting the car) and the telos of the car (driving people around). You might say that all there is is just nested functionality, all the way up and all the way down (I read William Lycan this way). If that's how you see it, then maybe car batteries and brains have as much claim to personhood as cars and humans. Dennett says that a thermostat comes under intentional description: it believes that it is presently too cold, or that it is not. On this version of operationalism, the only problem with offering "My car battery doesn't like the cold weather" as a further explanation of my claim that "My car doesn't like the cold weather" is that there is some (informal) threshold of obtuseness past which it's just not necessary anymore to replace physical predicates ("It's frozen") with intentional ones ("It's unhappy"). And maybe that's right.
What I take B & H to be claiming is that there are no neural correlates of intentional states. There is not some brain state that embodies my belief that Paris is the capital of France, or my desire for some chocolate. That is the sense of the mereological fallacy: it is a mistake (a mistaken research paradigm) to search for neural correlates of intentional states. This goes to my problem with representational models of mind. It doesn't help to explain how it is that I believe that Paris is the capital of France to claim that there is some formal token of the proposition "Paris is the capital of France" inside my body somewhere. I don't think that intentional states are neural states at all. I think that they are states of embodied persons. What kind of "states"? (John Heil does good work on the metaphysics of "states," "properties," and so on.) Right now I'm thinking that "intentional states" are relations between persons and their environments (this is, I think, a type of externalism/"wide content").
Anyway I'm off to do the recycling and sign my daughter up for swimming lessons.
Wednesday, July 2, 2008
Anomalous Monism is Neither. Discuss Amongst Yourselves.
Kevin Vond left a comment on the last post (the discussion with Gerardo Primero) and mentioned Donald Davidson's article "Mental Events," which got me thinking this morning. I think that Davidson's position is, in basic metaphysical terms, the very opposite of the sort of eliminativism I have been discussing: eliminativism about symbolic content playing a causal role in the functioning of the nervous system, on a reasonably well-naturalized model of nervous system function. And I think that Davidson is guilty of the mereological fallacy.
Davidson's view in "Mental Events" is that he can simultaneously hold, metaphysically speaking, that intentional states and causes just are identical with neural states and causes (or that intentional properties supervene on neural properties), and also hold meaning holism, the view that parts of language have meaning (are interpretable) only within the larger context of an entire language and the web of intentional states that are also being attributed to a particular person. Thus the "anomalous" part: there can be, according to this "anomalous monism," no "psychophysical laws," no nomological rules for mapping back from the neural processes to the intentional processes.
Thus brain states, according to Davidson, just are intentional states under a different description (and I see where Kevin picks up on the Spinozistic side of this). This is precisely the view that Wittgenstein opposes. Davidson locates all of the causal power in the linguistic and logical relations between propositional attitudes (beliefs, desires, etc.). These attitudes are individuated in terms of their propositional content. This is sometimes called a "sentential" model of mental representation, involving as it does sentences, understood as tokens of propositions, in the head. It's more useful to call it a formal model: formal representation and supervenience on physical processes go together. This intentional realist camp includes Descartes, Kant, Chomsky, and Fodor as well as Davidson.
I think that this may be all wrong (I think that representational models of mind may be all wrong), on the basic grounds discussed in the last post. Note also that elsewhere ("Thought and Talk") Davidson argues that non-linguistic animals can't have intentional states, because intentional states are propositional attitudes. Thus the subsequent interest in whether animals could learn grammar. This is Chomsky's view as well; at least, the early Chomsky would argue that animals could not think (he's more liberal on that now). Of course that is backwards: thought precedes talk by a very long way. Understanding sea slugs is indeed a big help.
PS Kevin and Gerardo, "Discuss Amongst Yourselves" is a reference to a popular humor show in the US, just a joke!
(Also thanks and a tip o' the hat to Brood's Philosophy Power Blogroll for the shout-out.)
Tuesday, July 1, 2008
Is Your Brain Somebody?
Gerardo Primero, a psychologist in Buenos Aires, has been corresponding with me via e-mail. He is studying Wittgenstein and wanted to talk about Bennett and Hacker's 2003 book Philosophical Foundations of Neuroscience. That book takes a Wittgensteinian approach (P. M. S. Hacker is one of the leading philosophical interpreters of Wittgenstein) and builds an argument that a great deal of cognitive studies commits some version of the "mereological fallacy," the fallacy of attributing to the parts of a thing properties that are had only by the whole. Specifically, "persons," who are fully embodied beings, think, dream, desire, imagine, and so forth, whereas much philosophical psychology attributes these intentional states to brains, to consciousness, to memory, and so forth. The idea is that just as it is I who eat lunch, not my stomach, so too it is I who think about the election, not my brain. Note that if this turns out to be right, that psychological predicates apply to persons and not to brain states, then the metaphysical problem about how the physical properties of the nervous system "map on" to the semantic properties of representations may be shown to be a pseudoproblem, in that intentional psychological descriptions just aren't descriptions of states of the brain. Mind does not necessarily = brain.
Let me quote a little from Gerardo's e-mail from Sunday: "I'm not convinced by Hacker's arguments....The problem with the 'mereological fallacy' is not that applying psychological terms to parts 'has no sense': it has sense, but it's scientifically unsound...While my argument is epistemic ('that's not a valid scientific explanation'), Hacker's argument is semantic ('that has no meaning at all')."
There are a lot of directions we could go with this, but since Gerardo seemed to approach me for a "philosopher's opinion," I'll talk some basic metaphysics and epistemology this afternoon. The issue is metaphysical, to my eye: there is a language about "properties," and so we want to get clear on what properties are, because it looks like we would need to do that to understand how the brain works (properties are causal). Specifically, the "property" of interest in terms of the mind/body problem is the "intentional/semantic property." What is this? That bears some discussion, but note a basic issue: if you think that the semantic property is a property, but not a physical property, then you have signed on to some kind of metaphysical dualism. Descartes thought this way. He thought that any physical thing, being ultimately a mental representation, had the property of dubitability (could be unreal, an illusion), whereas the fact of thinking (of a "thinking substance") was indubitable, and this is one of his arguments for metaphysical dualism (sometimes called "substance dualism"). Disparate properties, disparate things. Which is fine, maybe, but recognize the commitments that come with such a view: a) there are "things" that exist that are not part of the physical universe, and b) therefore, in this instance of the more general metaphysical point, scientific psychology is impossible. I don't buy that. That is, I think that humans are part of physical nature through and through. And if "physicalism" means anything, it's got to mean that everything about humans that we can "explain" (whatever explanation is) we can explain in physical terms (just like the rest of nature). So a naturalist like myself has two options: 1) try to understand "mental representation," and thus symbols and meaning in general, in some kind of physical terms, or 2) try to eliminate representational content from the model of mind.
So, as to Gerardo's distinction between "meaningful" and "explanatory," I would say that physicalists (we could say materialists or naturalists here; I'm not making any fine distinction) who are eliminativists (like Wittgenstein and Skinner) think that to the extent that "meaning" is not the same thing as "causal power," there isn't any such thing. Think of a behavioristic, anthropological account of the development of speech: the latter-day "semantics" of the words emerged out of the functional role of making that sound. It isn't true that all words function in the same way (that is, as symbols). This is what Wittgenstein means with the analogy of the locomotive controls: they all fit the human hand, but one opens a valve, one puts on a brake, etc.; it is a mistake to try to explain them all the same way.
If you can't explain the "mental" property without including something "mental" in the explanation, then you haven't explained mind. An "explanation" of mind would be the story of how semantic properties emerged from simpler, non-semantic properties. Mind from no-mind. So a problem with representations is that they already assume mind. Semantic content needs an interpreter. Or, the story about how something came to "mean" something can't already assume that "meaningfulness" exists - if you have to do that, you haven't succeeded in naturalizing the concept of "meaning."
There is a contingent who want to develop a natural theory of information. I would recommend starting with Fred Dretske's Knowledge and the Flow of Information. For myself, at this point I feel pretty convinced that there can't be any such thing as mental content, at all. Just wrong, root and branch. But note that there is a representational vogue underway amongst the cognitive scientists (or was two years ago).
Looked at this way, one can see that the problem with attributing mental states to brains isn't that such attributions are meaningful but wrong (as Gerardo argues); they are in fact not meaningful. They are pseudoexplanations because they don't turn out to even potentially explain anything: they're not even wrong. "When I remember her face, I have an image of her face." "I just gave myself a dollar." Both are examples of the same mistake.
Finally for today, Gerardo wanted a little more on Wittgenstein vs. Moore. Moore tried to argue from "usage"; that is, he argued that the claim "I know I have a hand" was a paradigmatic case of knowledge. Wittgenstein objected (in On Certainty) that there was no ordinary circumstance in which holding up one's hand and saying "I know I have a hand" could have any purpose. W.'s point was that Moore made the mistake of continuing to play the game that was the cause of the confusion in the first place. In fact I neither know nor do not know whether I have a body; that's not really an example of a situation where the verb "to know" can serve any function.