So internalized is the representational view that one can forget that it didn’t have to be this way. The history of psychology is, like all histories, full of contingencies and precipitous forks in the road. In the study of the history of Western philosophy we call the 17th and 18th centuries the “Early Modern” period, and the contemporary idea that we live in our heads, experiencing only a mental representation of the world, dates from this period. It was an incredibly fertile period for European philosophy: if we take, as most do, Descartes to be the first canonical Early Modern philosopher and Kant to be the last, the whole period is a scant 144 years (from the publication of The Discourse on Method in 1637 to the publication of The Critique of Pure Reason in 1781).
The adjective “Cartesian” literally means that an argument or position reflects the ideas of Descartes, but it has become through usage a more general term that alludes to representational theories of mind, particularly those theories that entail that we must worry about the relationship between the external world and a perceiving subject’s representation of the world – theories that “explain” perception as the formation of representations. This is not entirely fair to Descartes, who wrote in his Dioptrics that it would be a mistake to take the inverted image observable on the retina as evidence that there were pictures in the mind, “as if there were yet other eyes in our brain.”
Even if the real Descartes was not someone who today we would call a Cartesian, he can certainly be held responsible in large part for the conspicuous lack of naturalism about psychology in modern philosophy: he was a metaphysical dualist, he thought that humans’ rational capacity comes not from nature but from God (notoriously he made this argument after arguing that he could prove God’s existence through the exercise of rationality), and he was a human exceptionalist who took language as evidence that humans are essentially different from the rest of the natural world. But the real “Cartesian” in the sense of the true ancestor of modern representational theory is Kant.
Kant’s explicit project was to block the naturalization of psychology. He was alarmed by what he saw as the atheistic, amoralist implications of Hume’s empiricism (implications emphasized by Hume himself). Hume’s whole oeuvre can be read as a sustained attack on the very idea of rationality: there are no “rational” proofs of anything, no “rational” reason for believing in anything. Beliefs are the product of “habituation,” the conditioning effect of regularities of experience. Thus there was no basis, on Hume’s view, for asserting the existence of God, of human freedom, or even of the human mind if by that was meant something over and above the contents (the “impressions”) of thought processes, which were the products of experience. Kant seems to have been intuitively certain that these radical conclusions were false, although he was criticized (by Nietzsche for example) for a programmatic development of foreordained conclusions.
Hume’s psychology was inadequate. Like Locke before him he thought that mental content could be naturalized if it was explained as the result of a physical process of perception: interaction with the environment was the physical cause of the impression, a physical effect. This strategy led the empiricists to emphasize a rejection of innate content, which they regarded as a bit of bad rationalist metaphysics. The problem was compounded by a failure to distinguish between innate content and innate cognitive ability. To some extent this failure reflected a desire to strip psychology down to the simplest perception/learning theory possible in the interest of scientific method, coupled with a lack of the Darwinian ideas that can provide naturalistic explanations of innate traits (I will address the skeptical, “phenomenalist” reading of Hume, which I think is incorrect, in Chapter Three).
Kant saw this weakness and was inspired to develop the argument of the Critique of Pure Reason. Hume claimed that all knowledge was the result of experience. Kant’s reply was to ask, “What is necessary in order for experience to be possible?” The greatness of Kant is in his effort to reverse-engineer the mind. He is best read today as a cognitive scientist. However, people forget how radical Kant’s conclusions were, and how influential they have continued to be, one way or another, for virtually all philosophers and psychologists since the late 18th century. From the persuasive argument that the mind must somehow sort and organize the perceptual input (that’s the part of psychology that the empiricists’ ideology led them to neglect), Kant goes on to argue that space, time, cause-and-effect relations and the multiplicity of objects are all part of the “sensible” frame that the mind imposes on our experience of the world. The world of our experience is the phenomenal world, and it is that world that is the subject of natural science; the world-in-itself is the noumenal world (and quite the bizarre, Parmenidean world it is!).
Two points are important here. First, Kant’s aim was to protect human psychology (and religion and ethics) from a godless, amoral, reductive natural science and in that he succeeded to an alarming extent. The world of natural science on the Kantian view is the world as it is conceived by the rational mind, and as such the rational mind itself cannot be contained in it. Second, Kant’s biggest contribution of all is easy to miss precisely because it is so basic to his whole line of argument: the phenomenal world is a representation, made possible by the framing structure of rational conception, just as the drawing on the Etch-a-Sketch depends on the plastic case, the internal mechanism and the knobs of the toy.
The defender of Kant will argue that the Kantian phenomenal world is not a representation at all: it is the world presented to us in a certain way. It is also only fair to point out that Kant, unlike his modern descendants, shared with Plato the view that all rational minds were identical to the extent that they were rational. Kant would not have been amused by 20th century philosophers’ pictures of a world where each language, culture and individual were straying off, like bits of some expanding universe, into their private “conceptual schemes,” ne’er the twain to meet. Nonetheless Kant needs mental representation (and any conceptual schema is representational), because he needs to protect freedom, rationality, God and ethics. Thus a deep skepticism is intentionally built into Kant’s system (as it is not in Descartes’). While Kant is right about a great many things and any student of philosophy or psychology must read and understand him, on these two points his influence is ultimately pernicious.
I dilate on the Kantian history of the representational theory because once we see that the issues that confront us in philosophy of mind continue to be essentially metaphysical we also see that they are very old issues, and ones that connect up with many other perennial philosophical problems. Too many people in contemporary philosophy of mind and cognitive science fail to appreciate this and the discussion is very much the poorer for that. Furthermore it’s important to see that things didn’t have to be this way. The idea that we are stuck in our heads with our “representation” of the world forever mediating between us and “reality” is actually a very strange idea, but it has been so deeply internalized by so many that we can fail to appreciate how strange it is. This is something to bear in mind as we think about how modern physicalist philosophy of mind has struggled with the problem of mental representation.
The Problem of Mental Representation
People tend to be of two minds (pun intended) on the issue of mental content. On the one hand no one can dispute that the way we talk about the mind is largely figurative. The mind is racing and wandering, it has things on it and in it, it is sometimes full and sometimes empty, it is open and narrow and dirty and right. We are used to talking this way, it is useful to talk this way (I don’t think there is anything wrong with our psychological talk), and everybody pretty much understands that this is a discourse full of “figures of speech.” The philosophically-inclined see well enough that “mind” is an abstract concept of some sort. On the other hand we have deeply internalized some of this figurative language, so deeply that one of the most central, perennial problems of epistemology is the alleged problem about the relation of our “inner” perceptions of the world to the “real world” out there, outside of our heads. Many people think that we are stuck inside our heads: a blatant conflation of the literal with the figurative.
Why is this? For one thing when we talk about the mental we must use the language that we have, and this is a language evolved for talking about the physical, “external” world of three-dimensional objects in three-dimensional space. The room has an inside and an outside, and there are things (concrete things) inside it (those chairs and tables that philosophers are always talking about). “Beliefs” and “sensations” are words that take the same noun-role as “chairs” and “tables,” and thus the grammar of the language is constantly pushing us to conceive of these mental terms as referring to some variety of concrete things. This is the sense in which Wittgenstein uses the word “grammar”: to indicate the way that language contains metaphysical suggestions that can lead to confusion. The metaphysical grammar of language is the grammar of three-dimensional objects in three-dimensional space; objects, moreover, that interact with each other according to regularities of cause and effect.
A basic confusion about the mind is that it is a kind of inner space filled with things and (non-physical) processes. It is important to see the close relationship between this pseudo-spatial conception of the mind and the problem of mental representation. Physical things and processes don’t mean anything (or, physical descriptions and explanations of the things and processes in the world don’t refer to the semantic property, only to physical properties). The concept of a symbol is essentially relational: symbols need to be interpreted. For interpretation to happen there must be an interpreter. Pictures, books and computer screens need to be looked at by someone – someone with a mind. Thus the representational model has a “homunculus” problem: in order for the symbol to work it must be read by someone, as streetlights and recipes only “work” when actual people respond to them with appropriate actions. Another way of putting the problem is the “regress” objection: if the theory is that minds work using representations, then the homunculus’s mind must work that way as well, but in that case the homunculus’s mind must contain another homunculus, and so on.
Some cognitive scientists have tried to overcome this objection by suggesting that a larger neural system of cognition can be modeled as responding to information from neural subsystems without succumbing to the homunculus fallacy, but this strategy can’t work if a “representational” theory of mind is one that posits representations as necessary for thought. A theory of mind that succeeds in naturalizing psychology will be one that shows how the “mental” emerges from the non-mental. Any theory that includes anything mental in the first place accomplishes nothing. The concept of a representation is a mental concept by definition: the verb “to represent” presumes the existence of an audience. Representation, like language, cannot be a necessary precondition for thought for the simple enough reason that thought is a necessary precondition for both representation and language (a being without thoughts would have precious little to talk about!). This is not a chicken-and-the-egg question.
There is an important discussion here with the computationalists, who think that the mind/brain is a kind of computer. If it is the representations that bear logical relations to one another (the computationalist argues), and rationality consists in understanding and respecting those relations, then rationality requires a representational (typically thought of as some sort of linguistic) architecture. If computation is formal rule-governed symbol manipulation then symbols are necessary for computation/cognition. Jerry Fodor, for example, hopes to bridge mind (intentional explanation) and body (physical explanation) by way of syntax, the formal organization of language. The idea is that all of the causal work that would normally be attributed to the content of the representation (say, the desire for water) can be explained instead by appeal to “formal” (syntactic, algorithmic) features of the representation (there is some more discussion of Fodor below).
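To fix ideas, here is a toy sketch (mine, not Fodor’s, with invented symbol names) of what “formal rule-governed symbol manipulation” amounts to: the system derives new symbol strings from old ones purely by matching their shape, never by consulting what they mean.

```python
# A minimal sketch of formal symbol manipulation: a purely syntactic
# modus ponens. The "inference" is driven entirely by the shape of the
# strings (the " -> " pattern); the symbol names are invented and mean
# nothing to the system.

def derive(beliefs):
    """Apply syntactic modus ponens until nothing new can be derived."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if " -> " in b:
                antecedent, consequent = b.split(" -> ", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

print(derive({"thirsty", "thirsty -> seek_water", "seek_water -> go_to_fountain"}))
# the derived set now also contains "seek_water" and "go_to_fountain"
```

On the computationalist line, all the causal work we would credit to the content “thirsty” is done here by the string’s syntactic shape alone.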
One challenge to this computationalist (or “strong AI”) view is connectionism, the view that the mind/brain has an architecture more like a connectionist computer (also called parallel distributed processing, or PDP; in the wetware literature this is the “neural nets” discussion). In connectionist computing, systems of nodes stimulate one another through weighted connections. There is an input layer whose nodes are activated by operators or sensors, one or more intermediate (“hidden”) layers where patterns from the input are transformed, and an output layer of nodes. The weights on the connections are adjusted, during training, to steer the machine in the right direction. Some of these systems were developed by the military to train sonar systems to recognize underwater mines, for example, and they are now ubiquitous as the face-, handwriting- and voice-recognition programs used in daily life.
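As a rough illustration of that layered picture, here is a toy feedforward pass (not any deployed recognizer; the weights are random rather than trained): activation flows from an input layer through weighted connections to an output, and nothing anywhere functions as a symbol.

```python
import numpy as np

def sigmoid(x):
    # squash each node's summed input into an activation between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # weights: 3 input nodes -> 4 hidden nodes
W2 = rng.normal(size=(4, 1))  # weights: 4 hidden nodes -> 1 output node

def forward(inputs):
    hidden = sigmoid(inputs @ W1)  # activation spreads to the middle layer
    return sigmoid(hidden @ W2)    # and on to the output node

# three "sensor" activations in, one graded activation out
print(forward(np.array([0.2, 0.9, 0.1])))
```

Training (by backpropagation, for example) would adjust W1 and W2 until inputs of the right kind reliably drive the output high; at no point does the explanation mention a symbol being read.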
Connectionist machines are very interesting for purposes of the present discussion. They appear to be self-teaching, and they appear to operate without anything that functions as a symbol. There is still the (human) programmer and there is still nothing that seems like real consciousness, but such a system attached to a set of utilities (so far, the utilities of the programmers) looks to be effective at producing organized behavior and fully explicable in operational terms.
Meanwhile, I’m not even sure that computers have representations in the first place. That is, it’s hard to see anything that functions as a representation for the computer (which is not surprising, since it doesn’t look like the computer has a point of view). What makes computers interesting to cognitive science in the first place is that with them we can tell the whole causal story without appeal to representations: the binary code just symbolizes (to us) the machine state (the status of gates in the microprocessors), and we can sustain the machine-level explanation through the description of the programming languages and the outputs. Those “outputs,” of course, are words and images interpreted by humans (mostly). So even “classical” computers have computational properties but do not have representations. Or perhaps another way to put it is that two senses of “representation” are conflated here: the sense in which a human observes a computational process and explains it by saying, “See, that functions as a representation in that process,” and the sense in which a human claims to interpret a representation. (I will discuss computational properties as “formal” properties in the discussion of the problem of rationality below.)
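A half-adder makes the point in miniature. The causal story below is exhausted by boolean gate operations on bits; that the output “represents” a sum is an interpretation we supply from outside. (A deliberately trivial sketch, of course, not a model of any actual machine.)

```python
def half_adder(a, b):
    total = a ^ b   # XOR gate yields the low bit
    carry = a & b   # AND gate yields the carry bit
    return carry, total

# The machine-level description says only: given bits 1 and 1, the XOR
# gate outputs 0 and the AND gate outputs 1. That (1, 0) is binary "10",
# i.e. the number two, is true only for the humans reading the output.
print(half_adder(1, 1))  # (1, 0)
```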
The computationalist/connectionist discussion is a striking example of how little the larger discussion has changed since the 17th century. It is the rationalist/empiricist, nativist/behaviorist argument rehearsing itself yet again through the development of this technology.
The Problem of Intentionality
In the last chapter I argued that there is no one thing to which the word “mind” refers. I argued further that there are (at least) two metaphysical problems that are still unresolved in our psychological talk; two kinds of putative mental “properties” that each, in its own way, resists naturalization. It may be, though, that spelling out the heterogeneity of mind is progress: much of the dissatisfaction with operationalist theories stems from their manifest failure to give a satisfactory account of consciousness, while any straightforward materialist account of consciousness appears to run afoul of the issues of “multiple realizability” and “chauvinism.” Once we accept that we have two different topics it may turn out that our current theories are not as inadequate as they seemed; they are only more limited in their scope than we had assumed.
If this is right then one who is interested in the problem of intentionality needn’t necessarily be interested in the problem of consciousness or vice versa. What appeared to be a fairly violent doctrinal schism between the operationalists and the phenomenologists is revealed to be a mere changing of the subject. Of course if a naturalistic semantics of intelligence-predicates and a naturalistic semantics of consciousness-predicates are both necessary but neither sufficient for a complete naturalistic semantics of psychological predicates, then both analyses will have to be offered. But each semantics and its defense should be free-standing if the heterogeneity argument of the last chapter is sound.
The problem of intentionality itself decomposes further into two interrelated but distinguishable problems. The first is the problem of mental representation. Symbols of any kind (including isomorphic representations like paintings and photographs and formal representations like spoken languages and computer codes) have, it seems, the property of meaning (which I will usually call the semantic property or, interchangeably, the intentional property). Symbols refer to, are about, things other than themselves (the neologism “aboutness” also expresses this property), while physical things (or things described and explained in physical terms) do not have any such property (the descriptions and explanations include only physical terms). A naturalized semantics of psychological predicates would be free of reference to non-physical properties, but even our current neurophysiology textbooks have information-processing models of nervous system function (and the popular conception of the mind is of something full of images, information and language).
The operationalist theories of mind developed by English-speaking philosophers during the 20th century are largely a response to the problem of representation, although they reach a variety of conclusions: behaviorism is straightforwardly eliminativist about mental content, limiting the possible criteria for the use of psychological predicates to intersubjectively observable things. Computationalism, insofar as it holds that minds are formal rule-governed symbol-manipulating systems, aims at radically minimizing the symbol system (as in binary-code machine language, for example) but remains essentially committed to some sort of symbolic architecture. Functionalism proposes a psychology described purely in functional rather than physical terms, which provides for replacing representations with functionally equivalent, non-representational states, but in its very abstraction functionalism does not commit to eliminating representations (functionalism may be more of a method than a theory). In the first half of this chapter I will draw on the work of some latter-day philosophers, generally influenced by Wittgenstein, to develop a semantics of intentional predicates that not only dispenses with any reference to mental representation (as behaviorism and functionalism do) but provides an account that actually rules out the possibility of mental content.
The other part of the problem of intentionality is the problem of rationality. Rationality is multiply realizable (a closely related term is supervenient). To see what this means consider an example from another area of philosophy, “value theory” (an area that encompasses aesthetics and ethics): Say I have a painting hanging on the wall at home. This painting has a physical description, which lists all and only its physical properties: it is two feet across and four feet tall, weighs seven pounds, is made of wood, canvas and oils, is mostly red etc. Rarely, though, does anyone find these physical properties remarkable qua physical properties. Instead my visitors are likely to remark that the painting is beautiful, finely wrought, significant etc. The metaphysical problem is that these aesthetic properties cannot be analyzed into, reduced to or identified with the painting’s particular set of physical properties (notwithstanding the fact that my visitors will appeal to these physical characteristics, as in “That red tone is lovely,” when elaborating on their aesthetic judgment). The aesthetic properties surely emerge, somehow, from this particular combination of physical properties. There could be no change in the aesthetic properties without some change in the physical properties (this is the standard definition of the “supervenient” relationship). But not all objects with these physical properties are necessarily beautiful, nor do all beautiful things have these physical properties.
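Put formally (this is one standard way of stating the claim, with A the aesthetic properties and B the physical ones): there can be no difference in A-properties without some difference in B-properties.

```latex
\[
A \text{ supervenes on } B \iff
\forall x \,\forall y \,
\bigl( x \text{ and } y \text{ are } B\text{-indiscernible}
\rightarrow x \text{ and } y \text{ are } A\text{-indiscernible} \bigr)
\]
```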
Rationality is a supervenient property. For example a human being, a dolphin, a (theoretically possible) rational artifact and a (probably existing) intelligent extraterrestrial all instantiate (that is, grasp and make use of) the function of transitivity (“If X then Y, if Y then Z, therefore if X then Z”). But these beings are made of various materials organized in various ways. There are no physical properties that fix the extension of the set of rational beings and so this set, like the set of beautiful things, is indefinitely large. Another way of saying the same thing is to say that there are no psychophysical laws regarding rationality, generalizations to the effect that any being with such-and-such logical capacity must have such-and-such physical characteristics or vice versa.
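To make the multiple-realizability point concrete, here is a hedged sketch: the same transitive inference realized in two structurally different ways (both contrived for the example). The two realizations agree on what they compute while sharing almost nothing at the level of mechanism.

```python
# The same transitive inference (from X->Y and Y->Z, conclude X->Z),
# realized twice. Both toy implementations are invented for the example.

def chains_by_rule(links):
    # Realization 1: explicit rule-following over pairs of links
    return {(x, z) for (x, y1) in links for (y2, z) in links if y1 == y2}

def chains_by_matrix(links, items):
    # Realization 2: a boolean adjacency matrix composed with itself
    idx = {item: i for i, item in enumerate(items)}
    m = [[False] * len(items) for _ in items]
    for x, y in links:
        m[idx[x]][idx[y]] = True
    return {(a, c) for a in items for c in items
            if any(m[idx[a]][k] and m[k][idx[c]] for k in range(len(items)))}

links = {("rain", "wet"), ("wet", "slippery")}
print(chains_by_rule(links))                                 # {('rain', 'slippery')}
print(chains_by_matrix(links, ["rain", "wet", "slippery"]))  # {('rain', 'slippery')}
```

No description of the first function’s machinery carries over to the second; what they share is fixed at the level of the inference, not the substrate.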
The problem of mental representation and the problem of rationality can be distinguished as separate metaphysical problems. We would still be confronted with the problem of rationality even if we did not subscribe (that is, if none of us subscribed) to a representational theory of mind. Nonetheless the two sub-problems should be grouped together under the general rubric of the problem of intentionality, because both are problems for the same set of psychological predicates, the intentional predicates: “believes,” “desires,” “hopes,” “fears” etc. Intentional predicates name states that apparently entail mental content, as one believes that X, fears that Y etc. They also apparently entail rationality: my telling you that a person left the room because he was thirsty is only explanatory if we share the background assumption that, if he believes that there is water at the fountain and desires to have water, then, all other things being equal, he will go to the fountain (this is commonly referred to as the rationality assumption).
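The rationality assumption can itself be written down as a schematic prediction rule. What follows is a deliberately crude sketch (the action and outcome names are invented), not a theory of deliberation:

```python
def predict_action(beliefs, desires):
    """Expect the action the agent believes will satisfy a desire.

    beliefs: pairs (action, outcome) the agent takes to hold;
    desires: outcomes the agent wants, in order of priority.
    """
    for desired_outcome in desires:
        for action, outcome in beliefs:
            if outcome == desired_outcome:
                return action
    return None  # the "all other things being equal" clause: no fit, no prediction

print(predict_action({("go_to_fountain", "get water"), ("stay_in_room", "stay quiet")},
                     ["get water"]))  # go_to_fountain
```

Strip this schema away and attributing the belief and the desire no longer explains why he left the room.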
Some philosophers will claim at this point that the necessity of the rationality assumption for intentional explanation blocks naturalization. The argument is that it is the propositions (“I have water to drink,” “There is water at the fountain down the hall”) that bear logical relations to one another. If these propositions are not identical to their various physical tokens then they are non-physical entities (this kind of view is often called “Platonic realism,” that is, realism about non-physical entities). This argument also counts against my claim that the two problems of intentionality can be separated, if it turns out that tokens of propositions are necessary for logical thinking.
A related worry that also apparently ties the two problems of intentionality together is about the causal role of content (“the problem of mental causation”): The man is running because he wants to get away from the tiger that is chasing him. If a physical description of his brain and the processes occurring there does not convey that he is being chased by a tiger, not only does it fail to provide the kind of explanation we want (we want to know the reason he is running), it also appears to fail to describe what is happening “in his own head,” since the perception of an attacking tiger is part of the cause of his action.
I think that I can provide a satisfactory response to the problem of propositions as bearers of logical relations, although the result is somewhat surprising in the context of the overall physicalist project of this book. However the problem of mental representation will be discussed first, because it is important to see that even if we were to reject the representational theory of mind (as I think we should) we would still be confronted with the problem of rationality. The question of rationality takes us a good deal further into general metaphysics.