Sunday, December 26, 2010

An Externalist Account of Intentional Predicates

Consider Hilary Putnam’s “Twin Earth” argument. Imagine, the argument goes, a Twin Earth: one that is molecule for molecule identical to this Earth. Of course you will have a Twin Earth doppelganger, molecule for molecule identical to yourself. On a reductive materialist account, granting that you have physically identical brain states, it follows that you have psychologically identical mental states. For example you will have identical beliefs about water, the colorless, odorless substance that is ubiquitous in your parallel worlds. But imagine that there is just one difference between Earth and Twin Earth: on Earth water is composed of H2O, but on Twin Earth water is composed of XYZ. Now everything intrinsic to you and your doppelganger – both your physical states and your mental contents – is identical, yet your beliefs are not the same. Your beliefs may be correct and his/hers false (he/she may believe, as you do, that water is H2O), but at a minimum they are about different things. Thus the actual meaning of an intentional state cannot be determined either by its physical properties or, startlingly, by its mental properties (its causal properties that can only be explained with reference to its contents).

Another gedanken from Putnam: Imagine an ant is walking in the sand, behaving normally for an ant for the usual ant reasons. It leaves a trail that resembles a portrait of Winston Churchill. Is this a representation of Winston Churchill? No it is not, even though it has the same physical properties as an actual sketch of Winston Churchill: how it came to be is relevant to its status as a representation. Something is not a representation by virtue of resemblance alone: Winston Churchill, after all, is not a representation of a sketch of himself or of an ant trail that looks like him.

Putnam’s famous slogan is “meanings just ain’t in the head.” The meaning of an intentional state is determined not only by physical and mental properties intrinsic to the subject (“in the subject’s head”) but also by facts about the environment and the subject’s relationship with the environment. This view is called “externalism,” and is also referred to as the “wide content” view: the view that intentional predicates refer to relationships between the nominal subjects of intentional predication and their environments.

The representational theory of mind is deeply entrenched and some disambiguation is necessary to avoid outlandish interpretations of externalism and its claims. In this book I am examining the meanings of psychological predicates; in this chapter, intentional predicates. So for me the “meaning” in question is the meaning of words like “believes,” “desires,” “hopes,” “fears” and so on, in the sense of ontological reference: what ontological commitments do we make when we use these words? When Putnam says that “meaning isn’t in the head,” he isn’t talking about the meaning of the word “belief” (my topic), he’s talking about meaning in the sense that the alleged proposition that is the alleged object of the attitude is supposed to be about something: the proposition is the thing that is about something, on the representational realist view, and the proposition is represented “in the head.”

It is a consequence of externalism that beliefs (or any intentional states) aren’t about anything at all. That’s not how they work. They’re not even about the world, let alone internal representations of the world, because what they actually (ontologically speaking) are are relationships between persons and their environments, and relationships aren’t about anything, any more than physical objects or dispositions to behave are about anything. And that is exactly what naturalization requires: that there is no longer any reference to anything that “means” anything, meaning being a non-physical and therefore non-existent “property.” On a functional-role semantics words themselves aren’t about anything, not even proper nouns, so they can’t serve as magical vessels of meaning as they do on a traditional view. Language is a way that persons have of doing things – whole embodied persons situated in particular environments.

Here I am going beyond Putnam (undoubtedly an unwise thing to do!), because Putnam helps himself to mental content even as he argues that mental content alone is not sufficient to determine meaning. But if meaning, understood as “wide content,” turns out to be a description of relationships between persons and environments then there cannot be any mental content. There are no propositions transcendently emanating from Platonic heaven whether or not some person adopts an attitude towards them. There are only specific, concrete instances of states of affairs (I will use “states of affairs” as a more economical way of saying “relationships between persons and their environments;” this is consistent with the more standard use of the phrase as referring to ways the world could be) and goal-directed, token incidents of language-use.

Is Santa Claus a problem? Can’t one be thinking about Santa Claus even though Santa Claus is not part of the environment? No, Santa Claus is not a problem, because Santa Claus is a cultural convention and that counts as part of the environment. In the Santa Claus case one is thinking about that cultural convention, not about an actual mythical character, because mythical characters are not actual. That’s what “mythical” means. As for misrepresentation (as in the case of the five-year-old who believes that Santa Claus is actual), this is something that can only be demonstrated operationally.

What about a person or creature or what have you that exists only in the imagination of one individual? A stranger who appears in a dream, say? Here the right response is to remind ourselves that we are talking about the semantics of intentional predication, not about private experience. In fact the argument is that there can be no public description of private experience; remember Wittgenstein’s beetle-in-the-box. Note that, to the extent that “interpreting representations” is thought of as a process with a phenomenal component (after all, mustn’t there be “something that it’s like” to interpret a representation?), the putative experience of a representation (can one interpret a representation without experiencing it?) cannot be the criterion for proper intersubjective use of the word (see the discussion of phenomenal predicates in Chapter Three).

But surely dreams are evidence of mental representation? Aren’t dreams, in fact, just direct experiences of mental representation? No: although mental processes often involve experiences that seem similar to inspecting representations, remember that there is no explanatory value in literally positing mental representations. They don’t help to explain dreaming any more than they help to explain perceiving, remembering or imagining. In fact they make the model of the mental process considerably more complicated and difficult; a good reason for denying them. Berkeley thought that to clear up the Lockean mess of properties of objects-in-themselves, properties of objects to cause perceptions, and properties of perceptions, either the “mental” or the “external world” had to go. On that point he was right.

A little bit more disambiguation: my intentions here are perhaps deceptively arcane. I am focused on the metaphysics of the mind-body relationship. When I argue that, metaphysically speaking, there are in fact no such things as “mind” or “meaning,” I am arguing that traditional notions of those concepts are currently misleading us in our efforts to understand how the nervous system works. I don’t think any radical change in the way we talk is called for. In fact it is my view that a great deal of our psychological talk is ineliminable, and I think this goes for “mind,” “reference” and “meaning” just as much as for “belief” and “desire” and “beauty” and “justice,” and for the same reasons. If one accepts the present argument against mental representation, it still makes as much sense as it ever did to ask “What are you thinking about?” or “What is that book about?” or “What does that word mean?” The metaphysical question is about the proper semantic analysis of the way that we have always talked; it is not a critique of that talk. As Wittgenstein said, philosophy “leaves everything as it is.” If the present proposal that intentional predicates pick out relationships between embodied persons and their environments is sound then that has always been what we have been doing. Jettisoning realism about mental representations is a substantial matter for cognitive science – I would stress the importance for developing experimental paradigms in brain science – but it’s hard to see how it could have any effect on popular usage of intentional terms.

To summarize, intentional predicates are applied to whole persons in particular environments, not to brains or nervous systems or neural processes or any physical part of persons. Perceiving, imagining, thinking, remembering and so on are the kinds of things that whole persons do. Among these intentional activities is interpreting. Symbols are interpreted by persons, and thus symbols must be located where persons are located: in the world. It follows that language, a symbol system, is a feature of the person’s environment as well: there are no symbols in the head. There is not, ontologically speaking, any such thing as meaning: there are only particular acts of persons negotiating their environments with use of sounds and symbols (and, following the mereological fallacy argument, this will be true of all language use including idle musings, random thoughts etc). The use of the word “meaning” as applied to intentional predication (“What is he thinking about?” “What does she know about gardening?” etc) is partially constituted by facts about the environment (following Putnam). The natural semantic for intentional predicates is that they refer to relationships between individuals and their environments. If the account given here is persuasive there cannot be any such things as “mental representations.”

Monday, December 20, 2010

Propositional Attitudes

When Bertrand Russell coined the phrase “propositional attitude” in his 1921 book The Analysis of Mind, he wasn’t thinking of “proposition” in the sense of a piece of language. He was thinking that what was represented was a situation or what would today most likely be called a “state of affairs,” a way the world could be. However several considerations led subsequent philosophers of mind to take a much more literal view of propositions as linguistic entities and as the objects of the attitudes.

Think of a tiger. Alright: now, how many stripes does your imaginary tiger have? Probably your “mental image” of a tiger turned out not to have a specific number of stripes. But a pictorial representation of a tiger would have to. Linguistic (formal) systems can include relevant information and need not contain irrelevant information, an obvious adaptive advantage over isomorphic (pictorial) representation. Formal representation was more congenial to the operationalists (such as computationalists) who wanted to develop functional models of cognition. Then in the late 1950s the linguist Noam Chomsky, critiquing behaviorism, made the enormously influential proposal that formal syntactical structure was “generative”: grammatical frames like “The ___ is the ___” allowed for multiple inputs and thus indefinitely many (linguistic) representations. Taken to its extreme this argument appears to show that a being must have a formal system for generating propositions in order to be capable of being in an intentional state at all. Finally the argument that it is propositions that have the property of meaning and that it is propositions that bear logical relations to each other made it seem that a linguistic theory of representation made progress on the mind-body problem.
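The generative idea can be put crudely in a few lines of code. This is a toy illustration of my own, not Chomsky’s formalism; the frame and the word lists are invented. The point is only that one finite frame plus a finite lexicon yields combinatorially many distinct sentences.

```python
# Toy sketch (my example, not Chomsky's): a single grammatical frame
# generates many distinct "representations" from a small lexicon.
from itertools import product

FRAME = "The {} is {} the {}"                # hypothetical frame
NOUNS = ["fountain", "capital", "barrel", "tiger"]
RELATIONS = ["down", "inside", "near"]

# Every way of filling the slots yields a distinct sentence.
sentences = [FRAME.format(a, r, b)
             for a, r, b in product(NOUNS, RELATIONS, NOUNS)]

print(len(sentences))   # 4 * 3 * 4 = 48 distinct sentences
print(sentences[0])
```

With a realistically sized lexicon and recursively embeddable frames the number of generable sentences is unbounded, which is the force of “indefinitely many.”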

On my view this is mistaken: the representational model of mind, by definition, locates “mental content” “in the head.” The basic metaphysical problem with the representational model has by now been made clear: “meaning,” what I have been calling the “intentional property” or the “semantic property,” is an irredeemably non-physical “property” that must be washed out of any naturalistic theory of mind. Once one recognizes that intentional predicates are predicated of whole persons – once one sees that positing mental representations necessarily commits the mereological fallacy – the matter is settled. However there is a tight network of arguments and assumptions about intentional states as “propositional attitudes” that will have to be disentangled to the satisfaction of readers who are disposed to defend representations.

The defender of propositional attitudes will start by pointing out that intentional states can only be individuated by virtue of their respective contents. What makes Belief X different from Belief Y is that X is about Paris and Y is about fish. This looks like a block to reduction: to correlate electrochemical activity in the brain, say, with Belief X, we must already be able to specify which belief Belief X is. We don’t have any way of getting from no-content to content (from the non-mental to the mental). This motivates the problem of mental causation: it appears that the content (meaning) of the proposition is what plays the causal role in the production of behavior: when told to proceed to the capital of France he went to Paris because he believed that “Paris is the capital of France.” All the explanation in physical (neurophysiological) terms one could possibly make wouldn’t be explanatory if it didn’t at some point reveal the meaning that is expressed in the proposition, and it doesn’t: “He believes that Paris is the capital of France” is not shorthand for a causal chain of neurophysiological processes.

Donald Davidson famously pointed out a further problem for the development of “psychophysical laws” (as he called them), laws that systematically identified brain processes with particular instances of intentional thought: no one propositional attitude could ever suffice as the discrete cause of a behavior because the causal implication that the propositional attitude has for the acting subject necessarily emerges from the logical relations that that “attitude” has with all of the other intentional states of the subject. Davidson’s phrase for this was “meaning holism,” the view that meaning (in the sense of explanatory psychological predication) is a property of networks of propositional attitudes, not of individual ones. There is not an assortment of individual intentional states in a person’s mind such that one or another might be the proximate cause of behavior; each person has one “intentional state,” the sum of the logically interrelated network of propositional attitudes.

Propositions are the bearers of logical relations with each other. Physical objects and processes, the argument goes, have no logical relations with each other. To believe that the drinking fountain is down the hall is to have the attitude towards the proposition “The drinking fountain is down the hall” that it is true, and to have a desire for water is to have the attitude towards “I have water” that one wants to make that proposition true. The explanatory utility of the intentional predicates – in this case the ability to make an inference from their coincidence that, all other things being equal, the subject will form an intention to walk to the fountain (that is, to make the proposition “I have walked to the fountain” true) – depends on the meaning of the propositions.
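The belief-desire inference pattern just described can be caricatured in code. This is my own schematic sketch, not a serious model of mind; the point is that the prediction is derived purely from a logical relation between propositions (a belief linking an action to an outcome, and a desire for that outcome), which is exactly where the defender of propositional attitudes locates the explanatory work.

```python
# Toy sketch (mine): the belief-desire explanation pattern as an
# inference over propositions. Beliefs pair an action with the
# proposition it would make true; desires are propositions the
# subject wants made true.
def infer_intention(beliefs, desires):
    """Ceteris paribus: if the subject desires P and believes doing A
    would make P true, infer an intention to do A."""
    intentions = []
    for desired in desires:
        for action, outcome in beliefs:
            if outcome == desired:
                intentions.append(action)
    return intentions

beliefs = [("walk to the fountain", "I have water"),
           ("upend the barrel", "I have fish")]
desires = ["I have water"]

print(infer_intention(beliefs, desires))   # ['walk to the fountain']
```

Note that the match is done on the propositions themselves: strip out their content and the inference cannot be drawn, which is the representationalist’s point.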

Of course making such an inference from the logical relationship between the two propositions also requires that we make a rationality assumption: we must assume about the subject that he, too, will appreciate these logical relations. That is part of the metaphysical “problem of rationality.” I cannot pretend that the distinction between the problem of rationality and the problem of representation is entirely clear-cut, but at this point I need only present a semantic for intentional predicates that locates logical relations out in the world rather than in the head; that will suffice to defeat the argument that mental content is necessary to explain the causal role of intentional states. The further metaphysical problem about the supposed lack of any correlation between the (contingent) physical relationships between states and processes in the body and the (necessary) logical relationships between propositions is dealt with in the discussion of the problem of rationality that is the second half of this chapter.

The terminal station for the line of argument that meaning is an indispensable property (and thus that representations are an ineliminable feature) of intentional explanation is Platonic realism about propositions. On this view, their role as individuators of intentional attitudes and as bearers of logical relations demonstrates that propositions are matter-independent, mind-independent “abstract objects,” ineliminable from ontology. Taking concrete sentences as the “tokens” and propositions as the “types,” the Platonic realist argues that propositions resist a standard nominalist treatment: a “proposition” cannot be simply the name of the set of all of the concrete sentences that express it. The Platonic realist appeals to our intuition that an unexpressed proposition is still a proposition. The fact that propositions can be translated into multiple languages is taken as a demonstration that propositions are not identical to their concrete sentence exemplars.

Wittgenstein proposes an alternative, behaviorist account of language. Wittgenstein’s famous dictum is that meaning is use. The “meaning” of a word, on this view, is whatever the user (speaker or writer) of the word accomplishes by the action of using the word. This alternative to traditional theories of meaning is often called “functional-role semantics.” Wittgenstein rejects the Platonic picture of concepts as essences: the property-in-itself, as distinct from any and all of the concrete exemplars of the property. Language use, he argues, is a type of behavior that reflects a “form of life,” in the present case the form of life of human beings. There are no essential meanings (there is no such thing as “meaning” in the traditional sense at all), just patterns of human behavior that can be roughly sorted out on the basis of resemblances and shared histories (these are language “games”). We may gather together statements about “justice” and note that they have similar contexts of use and similar implications for action, just as all of the members of a family can be linked through chains of family resemblance, but that is all. There can be no representation of justice because there is nothing to represent, just as a family of human beings has no “family avatar.” This argument generalizes to all words and their uses, not only those that we think of as naming “concepts.”

If this is right then we are entitled to nominalism about “propositions” after all. A proposition is nothing more than all of the sentence-tokenings of that particular string of symbols. In fact language loses its supposed “interior,” the meaning traditionally supposed to be within or behind the symbol, just as “mind” can now be seen as intelligible patterns of behavior of persons rather than as something “in the head.” Wittgenstein’s vision was to see everything as surface only, both in the case of mind and in the case of language. Psychological description and explanation, understood as an intersubjective discipline limited by the limits of language itself, was necessarily operational.
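The nominalist proposal can be made vivid with a toy sketch of my own (the dates and sentences are invented). On this treatment a “proposition” is nothing over and above a grouping of concrete tokenings of a string; the name of the group does no further metaphysical work.

```python
# Toy sketch (mine): nominalism about propositions. Each tokening is
# a concrete, dated utterance; a "proposition" is just the set of
# tokenings of one particular string of symbols.
from collections import defaultdict

tokenings = [
    ("2010-12-20", "Paris is the capital of France"),
    ("2010-12-21", "Paris is the capital of France"),
    ("2010-12-22", "The fish are in the barrel"),
]

# Group tokenings by their string: nothing abstract is posited,
# only a name for each grouping of concrete utterances.
propositions = defaultdict(list)
for date, sentence in tokenings:
    propositions[sentence].append(date)

print(len(propositions))   # two groupings, hence two "propositions"
print(propositions["Paris is the capital of France"])
```

The Platonist’s objections (unexpressed propositions, translation across languages) are objections to exactly this flat-footed grouping; the Wittgensteinian reply is that resemblance of use, not a shared abstract object, is what licenses the grouping.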

Now I can sketch out the first natural semantic, the one that replaces intentional predicates that attribute mental representations to persons.

Sunday, December 12, 2010

The mereological fallacy

Stomachs don’t eat lunch. Eating lunch is something that a whole, embodied person does. We understand the role that stomachs play in the lunch-eating process; we appreciate that people can’t eat lunch without them. Brains don’t think. They don’t learn, imagine, solve problems, calculate, dream, remember, hallucinate or perceive. To think that they do is to commit the same fallacy as someone who thought that people can eat lunch because they have little people inside them (stomachs) that eat lunch. This is the mereological fallacy: the fallacy of ascribing to a part what can only sensibly be ascribed to the whole (or of confusing the function of the part with the telos, or aim, of the whole, as Aristotle, who once again beat us to the crux of the problem, would say).

Nor is the homunculus a useful explanatory device in either case. When I am asked how we might explain the workings of the mind without recourse to mental representations, the reply is that we fail to explain anything at all about the workings of the mind with them. “Remembering my mother’s face is achieved by inspecting a representation of her face in my mind.” This is explanatorily vacuous. And if reference to representations does nothing to explain dreaming, imagining and remembering, it is particularly egregious when mental content is appealed to for an explanation of perception itself, the original “Cartesian” mistake from which all of the other problems derive. A person is constantly developing and revising an idea of his or her world; you can call it a “picture” if you like (a “worldview”), but that is figurative language. A person does not have a picture inside his or her body. Brains don’t form ideas about the world. That’s the kind of thing people do.

This original Cartesian error continues to infest contemporary cognitive science. When the brain areas in the left hemisphere correlated with understanding speech light up and one says, “This is where speech comprehension is occurring,” the mereological fallacy is alive and well. Speech comprehension is not something that occurs inside the body. Persons comprehend speech, and they do it out in the “external” world (the only world there is). Positing representations that exist inside the body is an instance of the mereological fallacy, and it is so necessarily, by virtue of the communicative element that is part of the definition of “representation,” “symbol” etc. Neither any part of the brain nor the brain or nervous system considered as a whole interprets anything. The key to a natural semantic of intentional predicates is the realization that they are predicated of persons, whole embodied beings functioning in relation to a larger environment.

This realization may also be momentous for brain science. Go to the medical school bookstore, find the neurophysiology textbooks and spend a few minutes perusing them. Within the first minutes you will find references to the movement of information (for example by the spinal column), maps (for example on the surface of the cortex), information processing (for example by the retina and in the visual cortex) and so on. (Actually I suspect that brain scientists are relatively sophisticated in their understanding of the figurative nature of this kind of language compared to workers in other areas of cognitive science; the point is just that representational talk does indeed saturate the professional literature through and through.) But if brain function does not involve representations then we don’t know what brains actually do, and furthermore the representational paradigm is in the way of finding out: the whole project needs to be reconceived. If there is any possibility that this is true at all then these arguments need to be elaborated as far as they can be.

Taking the argument from the mereological fallacy seriously also draws our attention to the nature of persons. It follows from what has been said that the definition of “person” will be operational. Operational definitions have an inevitably circular character: a person is any being that takes intentional predicates. One might object that we routinely make intentional predications of, say, cars (“My car doesn’t like the cold”), but as Daniel Dennett famously pointed out this objection doesn’t go through when we know that there is a “machine-language” explanation of the object’s behavior: I may not know enough about batteries, starters and so forth to explain my car’s failure to start in the cold, but someone else does, and that’s all I need to know to know that my “intentional” explanation is strictly figurative. But then don’t persons also have machine-language explanations?

No: my car won’t start because the battery is frozen. The mechanic does not commit any fallacy when he says, “Your battery’s the problem.” The part is not confused with the whole. It’s really just the battery. Now suppose that you are driving down the freeway searching for the right exit. You remember that there are some fast-food restaurants there, and you have a feeling that one always thinks that they have gone too far in these situations, so you press on. However you manage to do this, it is no explanation to say that you have done it because your brain remembered the fast-food restaurants, and has beliefs about the phenomenology of being lost on the freeway, and decided to keep going and so forth. That’s like saying that you had lunch because your stomach had lunch.

In fact there is not a machine-language explanation of personhood. Kant, writing in the late 1700s, is fastidious about referring to “all rational beings”; he never says “human beings.” He understands that when we are discussing the property of personhood we are discussing (what I would call) a supervenient functional property (Kant would call personhood “transcendental”), not a contingent physical property. Unfortunately Kant is programmatically intent on limiting the scope of materialism in the first place and thus fails to develop a non-reductive materialism. But he understood that the mental cannot be one of the ingredients in the recipe for the mental.

Sunday, December 5, 2010

The spectrum of materialisms

By the 1950s a burgeoning physicalist ideology led philosophers to go beyond the methodological scientism of behaviorism and try to develop an explicitly materialist theory of mind. (I prefer the term “physicalist” to “materialist,” but in this part of the literature the term “materialist” is almost always used so I will follow popular usage.) This movement had everything to do with the intense flowering of technology in this period. For example, electrodes fine enough to penetrate neural axons without destroying them allowed for the measurement and tracking of electrochemical events in live brains. This immediately led to demonstrations of correlations between specific areas of the brain and specific mental abilities and processes. It seemed to be common sense that the materialist program would essentially consist of identifying mental states with physical states (of the brain): “identity theory.”

This is reductive materialism, the view that the descriptive and explanatory language of psychology can be reduced (or translated, or analyzed) into the language of neurophysiology (or maybe just physiology: the argument here does not depend on anyone holding that the brain is the only part of the body that instantiates mental states, although many have held that position. I will continue to use the word “brain” for the sake of exposition). The identity theorists couldn’t say that brain states caused mental states or somehow underlay mental states because that would still distinguish the mental from the physical. The theory had to say that what mental states really were were physical states. Reductive materialism/identity theory is common sense (albeit incorrect) materialism, and stands at the center of the materialist spectrum, with a wing on either side (I have heard philosophers refer to the two wings as the “right wing” and the “left wing,” but I neither understand that categorization nor see how it is useful. One can make arguments to the effect that either wing is the “right” or the “left” one).

The essential metaphysical problem for reductive materialism is not, in retrospect, hard to see: intentional states are multiply realizable, supervenient on their physical exemplars. (A crucial point for the larger argument of this book is that this is a problem for intentional states specifically; the following arguments do not go through for consciousness.) Since the extension of the set of potential subjects of intentional predicates is not fixable with any physical specifications, reductive materialism is “chauvinistic,” as it means to identify a given intentional state (the belief that there are fish in the barrel, say) with some specific human brain state.

Functionalism is a response to this metaphysical problem. Functionalism stresses the type/token distinction: while token-to-token identity is possible, type-to-type identity is not. That is, every token (every actual instance) of an intentional state is instantiated by some specific physical state (assuming physicalism, which technically speaking functionalism doesn’t have to do). Functionalism plus physicalism is non-reductive materialism, one of the wings of the materialist spectrum. Functionalism abstracts away from the token physical instantiations by replacing physical descriptions with functional descriptions. (Aristotle, trying to block the reductive materialism of Democritus, located this block at the level of biological description rather than psychological description, and his ideas continue to be of the utmost importance for philosophy of mind to this day.) A mature functionalist psychology, free of references to the human body, would amount to a generic set of performance specifications for an intelligent being; in this way functionalism (that is cognitive psychology, computer science, logic, robotics and other functional-descriptive pursuits) provides a “top-down” model for backwards-engineering the human nervous system itself, tunneling towards a link with the “bottom-up” (or “wetware”) researches of neurophysiology, evolutionary biology, physical anthropology etc.
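The type/token point can be sketched in a few lines. This is my own toy illustration, not anything from the functionalist literature; the names are invented. The functional “type” is specified as a role, and each “token” realization can be a physically different kind of thing altogether, which is why no type-to-type identity with a brain state is available.

```python
# Toy sketch (mine): multiple realizability. The functional type is
# a role specification; the realizers are physically heterogeneous.
from typing import Protocol

class FishBeliever(Protocol):
    """Functional spec: any state playing this causal role counts as
    believing that the fish are in the barrel."""
    def fish_in_barrel(self) -> bool: ...

class HumanBrainState:
    def fish_in_barrel(self) -> bool:
        return True          # token realized in neurons

class RobotRegisterState:
    def fish_in_barrel(self) -> bool:
        return True          # token realized in silicon

def will_try_to_upend_barrel(subject: FishBeliever) -> bool:
    # The behavioral prediction depends only on the functional role,
    # never on which physical realizer is doing the work.
    return subject.fish_in_barrel()

print(will_try_to_upend_barrel(HumanBrainState()))     # True
print(will_try_to_upend_barrel(RobotRegisterState()))  # True
```

The prediction function never inspects the realizer, which is the “abstracting away” the paragraph above describes; a reductive identity theory would have to mention neurons, and would thereby exclude the second realizer by fiat.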

Although functionalism is of great use as a heuristic it is not clear that non-reductive materialism, considered as a theory of mind, succeeds in addressing the problem of mental representation, let alone in resolving it. On the non-reductive materialist theory a given mental state, for example the belief that the fish are in the barrel, is defined as any physical state X that plays the appropriate causal role in the production of behavior, as in “Flipper is trying to upend the barrel because Flipper desires fish and X.” This formula usefully allows for the possibility that the relevant function might be achieved without the use of representations, but it doesn’t rule out the use (the existence) of representations. In failing to resolve the problem of the semantic property (or, for that matter, the problem of rationality) in favor of a physicalist semantic, functionalism is something less than a full-blown “theory of mind.”

However, functionalism, or I should say the recognition of the problem of multiple realizability that motivates functionalism, does express the central problem for the other wing of the materialist spectrum. On the other side of reductive materialism from non-reductive materialism is eliminative materialism. Eliminative materialism emphasizes the possibility that a mature naturalized psychology need not be expected to provide a physical semantic of intentional states. The eliminativist argues that it is possible that the intentional vocabulary might instead be replaced altogether with a new, physical vocabulary. After all, while lightning has been inter-theoretically reduced to electrical discharge, Zeus’s thunderbolts and the heavenly spheres are not identified with anything in our contemporary science. The history of science provides many examples of both reduction and elimination. The research program of cognitive science cannot just assume that the categories of traditional intentional psychology (“folk psychology”) carve the psychological world at its joints. Thus eliminativists propose the “Theory Theory,” the idea that the intentional vocabulary amounts to a particular theory about the mind, and that it is an old vocabulary that might be eliminated rather than reduced.

My uncle Ed, a devotee of corny jokes, likes to tell the one about the tourist who pulls over to ask the local how to get to Hoboken (all of Ed’s jokes are set in his beloved New Jersey). Thinking it over, the local finally says, “You can’t get there from here.” Eliminativism about the intentional vocabulary has a you-can’t-get-there-from-here problem. To say that the intentional vocabulary is subject to elimination is to say that we might talk another way. But as things stand, it can only be said of the eliminativist that they desire to show that we need not necessarily speak of desires, that they believe that “beliefs” are part of an eliminable vocabulary, and so on. For a time I thought that this merely indicated that eliminativism, like functionalism, was something less than a fully realized theory of mind, but the problem is more serious than that and we can see why by considering once again the problem of multiple realizability.

In Plato’s Republic, Socrates asks the young men to define justice. They try to explain the property by giving examples of just and unjust actions and of situations where justice does or does not obtain. Socrates rejects this method: examples of justice, he argues, can never be the definition of justice. Plato thinks that supervenient properties are transcendental properties. They do not emerge, somehow, from the contingent physical world (like Aristotle, Plato is opposed to reductive materialism). Rather the physical world takes on intelligible form through participation, somehow, with the transcendental (I will return to Plato’s metaphysics in the discussion of the problem of rationality below). The supervenient nature of these properties demonstrates, to Plato’s mind, that they do not come to be and pass away along with their various, impermanent, physical instantiations. Plato was the first philosopher to recognize that intentional predicates supervene on multiple physical things; ultimately his argument is that souls are immortal because properties are immortal.

“Or again, if he (Anaxagoras) tried to account in the same way for my conversing with you, adducing causes such as sound and air and hearing and a thousand others, and never troubled to mention the real reasons, which are that since Athens has thought it better to condemn me, therefore I for my part have thought it better to sit here…these sinews and bones would have been in the neighborhood of Megara or Boeotia long ago” (Phaedo 98d).

Wittgenstein rejected Plato’s search for transcendent essences, but not the ineliminable nature of the intentional predicates. While Wittgenstein thinks that individual, concrete instances of uses of a word (that is, the set of actual tokens of the word) are all there is to the “meaning” of the word (“meaning” is simply use), he identifies psychological predicates with a form of life: “to imagine a language means to imagine a form of life.” Like Aristotle, Wittgenstein identifies psyche with life itself, not with the “mind” (towards which he has a Humean skepticism).

In sum, what the multiple realizability (the supervenient nature) of intentional predicates demonstrates is that they cannot be replaced with some other way of talking. We can no more dispense with “belief” or “desire” than we can with “beauty” or “justice.” These words simply do not refer to any finite, specifiable set of physical characteristics of any finite, specifiable set of physical things. At a minimum this strongly suggests that the intentional vocabulary is ineliminable. (Again, none of this holds for phenomenal predicates; they require a completely different treatment, which they will get in Chapter Three.) It follows from this that intentional predicates do not refer to any “internal” states at all, which is the key to developing a natural semantic for them.

First, though, let’s finish the discussion of eliminative materialism. There are two types of eliminativism. The first is the kind I have been discussing, the kind usually associated with the name: eliminativism about intentional predicates. But we have seen that physical analysis of nervous systems has no greater prospect of eliminating intentional predicates than physical analysis of works of art does of eliminating aesthetic predicates. What physicalism does both promise and require is the elimination of any reference to clearly non-physical properties (supervenient properties are not “clearly non-physical”; what their metaphysical status is continues to be the question that we are asking).

No, the clearly non-physical property in which intentional predication allegedly involves us has been clear all along: the semantic property. The only eliminativism worthy of that mouthful of a name is content eliminativism. As Jerry Fodor has written, “I suppose that sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t; intentionality simply doesn’t go that deep.” Representation is the only game that we know is not in town (although some further discussion of Fodor, one of the most important contemporary writers on this topic and a champion of the representational theory, will be necessary below). How ironic, then, that some of the philosophers most closely associated with “eliminative materialism” are in fact very much wedded to the representational paradigm when mental representation is the one and only thing that physicalism has to eliminate in order to be physicalism at all.

Sunday, November 28, 2010

The modern history of representation

So internalized is the representational view that one can forget that it didn’t have to be this way. The history of psychology is, like all histories, full of contingencies and precipitous forks in the road. In the study of the history of Western philosophy we call the 17th and 18th centuries the “Early Modern” period, and the contemporary idea that we live in our heads, experiencing only a mental representation of the world, dates from this period. It was an incredibly fertile period for European philosophy: if we take, as most do, Descartes to be the first canonical Early Modern philosopher and Kant to be the last, the whole period is a scant 144 years (from the publication of The Discourse on Method in 1637 to the publication of The Critique of Pure Reason in 1781).

The adjective “Cartesian” literally means that an argument or position reflects the ideas of Descartes, but it has become through usage a more general term that alludes to representational theories of mind, particularly those theories that entail that we must worry about the relationship between the external world and a perceiving subject’s representation of the world – theories that “explain” perception as the formation of representations. This is not entirely fair to Descartes, who wrote in his Dioptrics that it would be a mistake to take the inverted image observable on the retina as evidence that there were pictures in the mind, “as if there were yet other eyes in our brain.”

Even if the real Descartes was not someone who today we would call a Cartesian, he can certainly be held responsible in large part for the conspicuous lack of naturalism about psychology in modern philosophy: he was a metaphysical dualist, he thought that humans’ rational capacity comes not from nature but from God (notoriously he made this argument after arguing that he could prove God’s existence through the exercise of rationality), and he was a human exceptionalist who took language as evidence that humans are essentially different from the rest of the natural world. But the real “Cartesian” in the sense of the true ancestor of modern representational theory is Kant.

Kant’s explicit project was to block the naturalization of psychology. He was alarmed by what he saw as the atheistic, amoralist implications of Hume’s empiricism (implications emphasized by Hume himself). Hume’s whole oeuvre can be read as a sustained attack on the very idea of rationality: there are no “rational” proofs of anything, no “rational” reason for believing in anything. Beliefs are the product of “habituation,” the conditioning effect of regularities of experience. Thus there was no basis, on Hume’s view, for asserting the existence of God, of human freedom, or even of the human mind if by that was meant something over and above the contents (the “impressions”) of thought processes, which were the products of experience. Kant seems to have been intuitively certain that these radical conclusions were false, although he was criticized (by Nietzsche for example) for a programmatic development of foreordained conclusions.

Hume’s psychology was inadequate. Like Locke before him he thought that mental content could be naturalized if it was explained as the result of a physical process of perception: interaction with the environment was the physical cause of the impression, a physical effect. This strategy led the empiricists to emphasize a rejection of innate content, which they regarded as a bit of bad rationalist metaphysics. The problem was compounded by a failure to distinguish between innate content and innate cognitive ability. To some extent this failure reflected a desire to strip psychology down to the simplest perception/learning theory possible in the interest of scientific method, coupled with the lack of Darwinian ideas that could provide naturalistic explanations of innate traits (I will address the skeptical, “phenomenalist” reading of Hume, which I think is incorrect, in Chapter Three).

Kant saw this weakness and was inspired to develop the argument of the Critique of Pure Reason. Hume claimed that all knowledge was the result of experience. Kant’s reply was to ask, “What is necessary in order for experience to be possible?” The greatness of Kant is in his effort to reverse-engineer the mind. He is best read today as a cognitive scientist. However, people forget how radical Kant’s conclusions were, and how influential they have continued to be, one way or another, to virtually all philosophers and psychologists since the late 18th century. From the persuasive argument that the mind must somehow sort and organize the perceptual input (that’s the part of psychology that the empiricists’ ideology led them to neglect), Kant goes on to argue that space, time, cause and effect relations and the multiplicity of objects are all part of the “sensible” frame that the mind imposes on our experience of the world. The world of our experience is the phenomenal world, and it is that world that is the subject of natural science; the world-in-itself is the noumenal world (and quite the bizarre, Parmenidean world it is!).

Two points are important here. First, Kant’s aim was to protect human psychology (and religion and ethics) from a godless, amoral, reductive natural science and in that he succeeded to an alarming extent. The world of natural science on the Kantian view is the world as it is conceived by the rational mind, and as such the rational mind itself cannot be contained in it. Second, Kant’s biggest contribution of all is easy to miss precisely because it is so basic to his whole line of argument: the phenomenal world is a representation, made possible by the framing structure of rational conception, just as the drawing on the Etch-a-Sketch depends on the plastic case, the internal mechanism and the knobs of the toy.

The defender of Kant will argue that the Kantian phenomenal world is not a representation at all: it is the world presented to us in a certain way. It is also only fair to point out that Kant, unlike his modern descendants, shared with Plato the view that all rational minds were identical to the extent that they were rational. Kant would not have been amused by 20th century philosophers’ pictures of a world where each language, culture and individual were straying off, like bits of some expanding universe, into their private “conceptual schemes,” ne’er the twain to meet. Nonetheless Kant needs mental representation (and any conceptual schema is representational), because he needs to protect freedom, rationality, God and ethics. Thus a deep skepticism is intentionally built into Kant’s system (as it is not in Descartes’). While Kant is right in a great many things and any student of philosophy or psychology must read and understand him, on these two points his influence is ultimately pernicious.

I dilate on the Kantian history of the representational theory because once we see that the issues that confront us in philosophy of mind continue to be essentially metaphysical we also see that they are very old issues, and ones that connect up with many other perennial philosophical problems. Too many people in contemporary philosophy of mind and cognitive science fail to appreciate this and the discussion is very much the poorer for that. Furthermore it’s important to see that things didn’t have to be this way. The idea that we are stuck in our heads with our “representation” of the world forever mediating between us and “reality” is actually a very strange idea, but it has been so deeply internalized by so many that we can fail to appreciate how strange it is. This is something to bear in mind as we think about how modern physicalist philosophy of mind has struggled with the problem of mental representation.

Sunday, November 14, 2010

The Problem of Mental Representation

People tend to be of two minds (pun intended) on the issue of mental content. On the one hand no one can dispute that the way we talk about the mind is largely figurative. The mind is racing and wandering, it has things on it and in it, it is sometimes full and sometimes empty, it is open and narrow and dirty and right. We are used to talking this way, it is useful to talk this way (I don’t think there is anything wrong with our psychological talk), and everybody pretty much understands that this is a discourse full of “figures of speech.” The philosophically-inclined see well enough that “mind” is an abstract concept of some sort. On the other hand we have deeply internalized some of this figurative language, so deeply that one of the most central, perennial problems of epistemology is the alleged problem about the relation of our “inner” perceptions of the world to the “real world” out there, outside of our heads. Many people think that we are stuck inside our heads: a blatant conflation of the literal with the figurative.

Why is this? For one thing when we talk about the mental we must use the language that we have, and this is a language evolved for talking about the physical, “external” world of three-dimensional objects in three-dimensional space. The room has an inside and an outside, and there are things (concrete things) inside it (those chairs and tables that philosophers are always talking about). “Beliefs” and “sensations” are words that take the same noun-role as “chairs” and “tables,” and thus the grammar of the language is constantly pushing us to conceive of these mental terms as referring to some variety of concrete things. This is the sense in which Wittgenstein uses the word “grammar”: to indicate the way that language contains metaphysical suggestions that can lead to confusion. The metaphysical grammar of language is the grammar of three-dimensional objects in three-dimensional space; objects, moreover, that interact with each other according to regularities of cause and effect.

A basic confusion about the mind is that it is a kind of inner space filled with things and (non-physical) processes. It is important to see the close relationship between this pseudo-spatial conception of the mind and the problem of mental representation. Physical things and processes don’t mean anything (or, physical descriptions and explanations of the things and processes in the world don’t refer to the semantic property, only to physical properties). The concept of a symbol is essentially relational: symbols need to be interpreted. For interpretation to happen there must be an interpreter. Pictures, books and computer screens need to be looked at by someone – someone with a mind. Thus the representational model has a “homunculus” problem: in order for the symbol to work it must be read by someone, as traffic lights and recipes only “work” when actual people respond to them with appropriate actions. Another way of putting the problem is the “regress” objection: if the theory is that minds work using representations, then the homunculus’s mind must work that way as well, but in that case the homunculus’s mind must contain another homunculus, and so on.

Some cognitive scientists have tried to overcome this objection by suggesting that a larger neural system of cognition can be modeled as responding to information from neural subsystems without succumbing to the homunculus fallacy, but this strategy can’t work if a “representational” theory of mind is one that posits representations as necessary for thought. A theory of mind that succeeds in naturalizing psychology will be one that shows how the “mental” emerges from the non-mental. Any theory that includes anything mental in the first place accomplishes nothing. The concept of a representation is a mental concept by definition: the verb “to represent” presumes the existence of an audience. Representation, like language, cannot be a necessary precondition for thought for the simple enough reason that thought is a necessary precondition for both representation and language (a being without thoughts would have precious little to talk about!). This is not a chicken-and-the-egg question.

There is an important discussion here with the computationalists, who think that the mind/brain is a kind of computer. If it is the representations that bear logical relations to one another (the computationalist argues), and rationality consists in understanding and respecting those relations, then rationality requires a representational (typically thought of as some sort of linguistic) architecture. If computation is formal rule-governed symbol manipulation then symbols are necessary for computation/cognition. Jerry Fodor, for example, hopes to bridge mind (intentional explanation) and body (physical explanation) by way of syntax, the formal organization of language. The idea is that all of the causal work that would normally be attributed to the content of the representation (say, the desire for water) can be explained instead by appeal to “formal” (syntactic, algorithmic) features of the representation (there is some more discussion of Fodor below).
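The computationalist picture can be made concrete with a toy sketch. Everything in the fragment below – the tuple encoding of conditionals, the example atoms – is my own illustration, not Fodor’s; the point is only that the modus ponens rule fires purely on the shape of the symbols. The machine never consults what “thirsty” means, yet the derivation mirrors the rational inference.

```python
# A minimal sketch of "formal rule-governed symbol manipulation" (illustrative
# encoding, not any particular theorist's): atoms are strings, conditionals are
# tuples ("if", antecedent, consequent), and inference is driven by form alone.

def modus_ponens(facts):
    """Derive new atoms from conditionals by matching symbol shapes only."""
    derived = set(f for f in facts if not isinstance(f, tuple))
    changed = True
    while changed:
        changed = False
        for f in facts:
            # Fire the rule whenever a conditional's antecedent has been derived;
            # the contents of the symbols play no role in the mechanics.
            if isinstance(f, tuple) and f[0] == "if" and f[1] in derived and f[2] not in derived:
                derived.add(f[2])
                changed = True
    return derived

facts = {"thirsty",
         ("if", "thirsty", "seek-water"),
         ("if", "seek-water", "go-to-fountain")}
print(sorted(modus_ponens(facts)))  # → ['go-to-fountain', 'seek-water', 'thirsty']
```

The “causal work” here is done entirely by syntax: swap in meaningless atoms like “p” and “q” and the machine behaves identically, which is precisely the bridge between intentional and physical explanation that the computationalist hopes for.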

One challenge to this computationalist (or “strong AI”) view is connectionism, the view that the mind/brain has an architecture more like a connectionist computer (also called parallel distributed processing, PDP; in the wetware literature this is the “neural nets” discussion). In connectionist computing, systems of nodes stimulate each other with electrical connections. There is an input layer where nodes are activated by operators or sensors, programming layers where patterns from the input layer can be used to refine the output, and the output layer of nodes. These connections can be “weighted” by programmers to steer the machine in the right direction. Some of these systems were developed by the military to train sonar systems to recognize underwater mines, for example, but they are now ubiquitous as the face-, handwriting- and voice-recognition programs used in daily life.
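A single trainable node is enough to illustrate the contrast with symbol manipulation. The sketch below applies the classic perceptron learning rule to the OR function (the data set and all names are mine, chosen for illustration): the node’s connection weights are adjusted from examples, and at no point does the system store anything that looks like a symbolic rule.

```python
# A minimal connectionist sketch: one perceptron "taught" the OR function.
# The trained behavior lives entirely in the connection weights; there is no
# stored rule or symbol for the system itself to read.

def step(x):
    return 1 if x > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # one weight per input connection
    b = 0.0         # bias term
    for _ in range(epochs):
        for inputs, target in samples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            err = target - out
            # "Weighting" the connections: nudge each weight toward the
            # behavior the trainer wants, one example at a time.
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)
predict = lambda x: step(sum(wi * xi for wi, xi in zip(w, x)) + b)
print([predict(x) for x, _ in or_samples])  # → [0, 1, 1, 1]
```

Note that a human supplied the training targets, which anticipates the point below: the programmer’s utilities are still doing the interpretive work.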

Connectionist machines are very interesting for purposes of the present discussion. They appear to be self-teaching, and they appear to function without anything that functions as a symbol. There is still the (human) programmer and there is still nothing that seems like real consciousness, but such a system attached to a set of utilities (so far, the utilities of the programmers) looks to be effective at producing organized behavior and fully explicable in operational terms.

Meanwhile, I’m not even sure that computers have representations in the first place. That is, it’s hard to see anything that functions as a representation for the computer (which is not surprising since it doesn’t look like the computer has a point of view). What makes computers interesting to cognitive science in the first place is that with them we can tell the whole causal story without appeal to representations: the binary code just symbolizes (to us) the machine state (the status of gates in the microprocessors), and we can sustain the machine-level explanation through the description of the programming languages and the outputs. Those “outputs,” of course, are words and images interpreted by humans (mostly). So even “classical” computers have computational properties but do not have representations. Or perhaps another way to put it is that two senses of “representation” are conflated here: the sense when a human observes a computational process and explains it by saying, “See, that functions as a representation in that process,” and the sense when a human claims to interpret a representation. (I will discuss computational properties as “formal” properties in the discussion of the problem of rationality below.)

The computationalist/connectionist discussion is a striking example of how little the larger discussion has changed since the 17th century. It is the rationalist/empiricist, nativist/behaviorist argument rehearsing itself yet again through the development of this technology.

Sunday, November 7, 2010

The Problem of Intentionality

In the last chapter I argued that there is no one thing to which the word “mind” refers. I argued further that there are (at least) two metaphysical problems that are still unresolved in our psychological talk; two kinds of putative mental “properties” that each, in their respective ways, resists naturalization. It may be, though, that spelling out the heterogeneity of mind is progress: much of the dissatisfaction with operationalist theories stems from their manifest failure to give a satisfactory account of consciousness, while any straightforward materialist account of consciousness appears to run afoul of the issues of “multiple realizability” and “chauvinism.” Once we accept that we have two different topics it may turn out that our current theories are not as inadequate as they seemed; they are only more limited in their scope than we had assumed.

If this is right then one who is interested in the problem of intentionality needn’t necessarily be interested in the problem of consciousness or vice versa. What appeared to be a fairly violent doctrinal schism between the operationalists and the phenomenologists is revealed to be a mere changing of the subject. Of course if a naturalistic semantic of intelligence-predicates and a naturalistic semantic of consciousness-predicates are both necessary but neither sufficient for a complete naturalistic semantic of psychological predicates, then analyses of both semantics will have to be offered. But each semantic and its defense should be free-standing if the heterogeneity argument of the last chapter is sound.

The problem of intentionality itself decomposes further into two interrelated but distinguishable problems. The first is the problem of mental representation. Symbols of any kind (including isomorphic representations like paintings and photographs and formal representations like spoken languages and computer codes) have, it seems, the property of meaning (that I will usually call the semantic property or, interchangeably, the intentional property). Symbols refer to, are about, things other than themselves (the neologism “aboutness” also expresses this property), while physical things (or things described and explained in physical terms) do not have any such property (the descriptions and explanations include only physical terms). A naturalized semantic of psychological predicates would be free of reference to non-physical properties, but even our current neurophysiology textbooks have information-processing models of nervous system function (and the popular conception of the mind is of something full of images, information and language).

The operationalist theories of mind developed by English-speaking philosophers during the 20th century are largely a response to the problem of representation, although there are a variety of conclusions: behaviorism is straightforwardly eliminativist about mental content, limiting the possible criteria for use of psychological predicates to intersubjectively observable things. Computationalism, insofar as it holds that minds are formal rule-governed symbol-manipulating systems, aims at radically minimizing the symbol system (as in binary-code machine language for example) but remains essentially committed to some sort of symbolic architecture. Functionalism proposes a psychology that is described purely in functional terms rather than physical terms, which provides for replacing representations with functionally equivalent, non-representational states, but in its very abstraction functionalism does not commit to eliminating representations (functionalism may be more of a method than a theory). In the first half of this chapter I will draw on the work of some latter-day philosophers, generally influenced by Wittgenstein, to develop a semantic of intentional predicates that not only dispenses with any references to mental representation (as behaviorism and functionalism do) but provides an account that actually rules out the possibility of mental content.

The other part of the problem of intentionality is the problem of rationality. Rationality is multiply realizable (a synonymous term is supervenient). To see what this means consider an example from another area of philosophy, “value theory” (an area that encompasses aesthetics and ethics): Say I have a painting hanging on the wall at home. This painting has a physical description, which lists all and only its physical properties: it is two feet across and four feet tall, weighs seven pounds, is made of wood, canvas and oils, is mostly red etc. Rarely, though, does anyone find these physical properties remarkable qua physical properties. Instead my visitors are likely to remark that the painting is beautiful, finely wrought, significant etc. The metaphysical problem is that these aesthetic properties cannot be analyzed into, reduced to or identified with the painting’s particular set of physical properties (notwithstanding the fact that my visitors will appeal to these physical characteristics, as in “That red tone is lovely,” when elaborating on their aesthetic judgment). The aesthetic properties surely emerge, somehow, from this particular combination of physical properties. There could be no change of the physical properties without some change in the aesthetic properties (this is the standard definition of the “supervenient” relationship). But not all objects with these physical properties are necessarily beautiful, nor do all beautiful things have these physical properties.

Rationality is a supervenient property. For example a human being, a dolphin, a (theoretically possible) rational artifact and a (probably existing) intelligent extraterrestrial all instantiate (that is, grasp and make use of) the function of transitivity (“If X then Y, if Y then Z, therefore if X then Z”). But these beings are made of various materials organized in various ways. There are no physical properties that fix the extension of the set of rational beings and so this set, like the set of beautiful things, is indefinitely large. Another way of saying the same thing is to say that there are no psychophysical laws regarding rationality, generalizations to the effect that any being with such-and-such logical capacity must have such-and-such physical characteristics or vice versa.
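The point can be put formally. The inference pattern in question – transitivity of the conditional, also known as hypothetical syllogism – can be stated and proved once, with no reference whatsoever to the material constitution of whatever grasps it. A sketch in Lean (the theorem name is my own choice):

```lean
-- Transitivity of implication: "if X then Y, if Y then Z, therefore if X then Z."
-- The proof mentions only the propositions and the two conditionals, never the
-- physical stuff of any being that instantiates the inference.
theorem hypothetical_syllogism (X Y Z : Prop) (h1 : X → Y) (h2 : Y → Z) : X → Z :=
  fun hx => h2 (h1 hx)
```

That the statement and proof are silent about realizers is one way of seeing why no physical properties fix the extension of the set of rational beings.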

The problem of mental representation and the problem of rationality can be distinguished as separate metaphysical problems. We would still be confronted with the problem of rationality even if we did not subscribe (that is, if none of us subscribed) to a representational theory of mind. Nonetheless the two sub-problems should be grouped together under the general rubric of the problem of intentionality, because both are problems for the same set of psychological predicates, the intentional predicates: “believes,” “desires,” “hopes,” “fears” etc. Intentional predicates name states that apparently entail mental content, as one believes that X, fears that Y etc., and that also apparently entail rationality: telling you that a person left the room because he was thirsty is only explanatory if we share the background assumption that if he believes there is water at the fountain and desires water then, all other things being equal, he will go to the fountain (this is commonly referred to as the rationality assumption).

Some philosophers will claim at this point that the necessity of the rationality assumption for intentional explanation blocks naturalization. The argument is that it is the propositions (“I have water to drink,” “There is water at the fountain down the hall”) that bear logical relations to one another. If these propositions are not identical to their various physical tokens then they are non-physical entities (this kind of view is often called “Platonic realism,” that is realism about non-physical entities). This argument also counts against my claim that the two problems of intentionality can be separated if it turns out that tokens of propositions are necessary for logical thinking.

A related worry that also apparently ties the two problems of intentionality together is about the causal role of content (“the problem of mental causation”): The man is running because he wants to get away from the tiger that is chasing him. If a physical description of his brain and the processes occurring there does not convey that he is being chased by a tiger, not only does it fail to provide the kind of explanation we want (we want to know the reason he is running), it also appears to fail to describe what is happening “in his own head,” since the perception of an attacking tiger is part of the cause of his action.

I think that I can provide a satisfactory response to the problem of propositions as bearers of logical relations, although the result is somewhat surprising in the context of the overall physicalist project of this book. However the problem of mental representation will be discussed first, because it is important to see that even if we were to reject the representational theory of mind (as I think we should) we would still be confronted with the problem of rationality. The question of rationality takes us a good deal further into general metaphysics.

Sunday, October 24, 2010

The Heterogeneity of Mind

In 1949 Gilbert Ryle published The Concept of Mind, one of the most important books of philosophy of mind of the last century and probably the best manifesto of philosophical behaviorism. Although today few would endorse Ryle’s strictly behaviorist semantics of psychological predicates the book continues to be persuasive as a sustained attack on what Ryle calls “Cartesian” theories of mind. Specifically Ryle challenges the ancient intuition that the word “mind” refers to some one, unanalyzable thing. He does this more thoroughly (and in a grander style) than anything I can do here, but he wrote at a time when the practice of metaphysics was out of favor in the English-language philosophical world. Today we enjoy the benefits both of the “language” philosophy done by the early 20th century empiricists and of the revival of metaphysics of the past several decades, motivated to some extent by the emphasis on philosophy of mind.

Imagine, Ryle asks, that a visitor has asked to be shown the university. One walks the visitor through campus: “There is the Student Center, and there is College Hall, and those young people sitting around the fountain over there are students, and there is old Professor Whiskers, you can set your watch by his walks across campus, and say hello to my friend Imelda here, she’s our new dean, now come I’ll show you the library,” and so on. At the end of the day the visitor is asked what he thought of the university. “But,” he protests, “You didn’t show me the university. We only saw buildings, people, books and things like that.” Ryle argues that a similar “category mistake” is made when we posit, behind or above or in addition to specific, observable behaviors, a “ghost in the machine,” a “mind.” He further argues, echoing Hume, that there is no “inner mental space” where mental events occur. His very title, “the concept of mind,” telegraphs his view that “mind” is a heterogeneous concept.

A heterogeneous concept is one that turns out, under analysis, to consist of multiple, distinguishable things. Ryle points out that the grammatical behavior of nouns is such that we can be led to think that there is something that exists when there is nothing (Dickens’s Mr. Pickwick, for example), but that this is practically speaking the same thing as thinking that only one thing exists when in fact the concept involves many things (Dickens, one of his novels, the tradition of fiction; football players, uniforms, equipment). All I mean by "analysis" (a word I am not using in any technical sense) is thinking about the referents of the term (semantics and metaphysics often come to the same thing). Examples of heterogeneous concepts from outside of philosophy of mind are value terms like "ethics" or "beauty," or for that matter very many abstract nouns such as (opening the dictionary randomly) "reservoir." Wittgenstein famously explained the heterogeneous nature of the concept of “game.”

Heterogeneous words are common. (Really, I don’t like to use the word "concept," although it is hard to avoid; it comes with a treacherous load of academic baggage. I am thinking about the uses of the word; the nature of the “concept” is, after all, what is at stake.) We can understand the continuity of meaning between "That man's reservoir of good will" and "The city's reservoir of water" (the first use started as a metaphor on the second), but if we are thinking about what the word refers to the two uses are different enough that it makes most sense to say "'Reservoir' is a heterogeneous word," meaning that it is a word that can refer to multiple, distinguishable things.

If we stay alert to the fact that individual nouns, and particularly abstract nouns, routinely turn out to refer to distinguishable things we can sometimes clear the smoke away a bit from philosophical arguments. For example ethical theorists (perhaps not the best ethical theorists, but quite a few ethical theorists) might see themselves as involved in some sort of partisan contest: are the “rights theorists” correct (or better or what have you), or are the “consequentialists” the ones who are giving us the best account of things? Or maybe virtue theory is preferable to both? Certainly philosophers working on ethical theory are frequently identified as “rights theorists” or as “consequentialists”: “I’m a consequentialist” is taken to mean not only that one endorses consequentialism but also that one declines to endorse the other types of ethical theory on offer.

But wait: actual people are "ethical" on a formal, logical sort of level (respecting others' rights through applying the logic of universality) and "ethical" on a situational, emotional sort of level (minimizing felt harm through the capacity for empathy), and they appreciate "good" people whom they judge to be salutary examples of a well-realized person (a “gentleman of Athens”). In fact real ethical people (that is, people when they're actually trying to act ethically rather than merely trying to do ethical theory) use Kantian-style "golden rule" reasoning and Millian outcomes-oriented strategies and make Aristotelean evaluations of themselves and others all at the same time. "Ethics" turns out to be a heterogeneous concept: the intentions of rational beings, the qualitative experiences of conscious beings and the health or pathology of living beings are all different things, such that there turn out to be not so much differences of opinion among "ethical theorists" as there are changings of the subject. Confusion (and sound and fury) is generated by a presumption that ethical thinking must be one kind of thinking and so there must be one “theory” that gives an account of it. The misleading grammar in this case is the use of a singular abstract noun, “ethics,” which creates the strong impression that there is only one topic when in fact there are several that come under that rubric.

The alarmed ethical theorist might speak up at this point: “Too fast.” When David Hume says “Reason is the slave of the passions,” he is making the substantial claim that logical operations are secondary and merely instrumental and that qualitative experience is the primary explanans of “ethics.” When Kant argues that all and only rational beings constitute a “kingdom of ends” he is making a substantial claim that the physical universe portrayed by science (the “phenomenal world”) is valueless qua physical, and that transcendental logical necessity is that explanans. These look to be mutually exclusive claims, and neither is compatible with Aristotle’s view that fulfilling the telos of a living human being is ultimately the aim of “ethical” behavior.

And mutually exclusive they are. But the claim that experience is the only thing we know, or the claim that there are no values in the physical world studied by science and that therefore they must come from somewhere else, are metaphysical and epistemological claims. All philosophy is about metaphysics and epistemology, as unfashionable as it may be to say so these days. And Hume (about whom I will have a good deal more to say in Chapter Three) points out the curious fact that no amount of discussion of physical experience produces any account of programmatic duty, while Kant is moved by his sense of the amorality of the physical world to make the radical claim that the phenomenal world is not, could not be, all that there is. The ultimate difference between Hume and Kant is a difference about the nature of the human mind; like those of all the best philosophers, their views on both ethics and psychology are systematically motivated by more central positions on epistemology and metaphysics. So if there is a persuasive argument that mind is a heterogeneous concept that argument will extend to the claim that ethics is a heterogeneous concept.

The deeply-internalized intuition that there is some one thing that is the “mind” reflects the plain fact that there is one thing that is the body. For each person the body is singular (at least in our experience!), and once the idea emerged that the mind existed separately from the body (or, at least, that the mind was metaphysically distinct from the body) it was natural to think that there was a one-to-one correspondence between bodies and minds (or “souls”). But the burden of proof is surely on those who would maintain that psychological predicates refer to some one, unanalyzable thing. The metaphysical dualist points to the difficulty we have in providing a naturalistic semantics for psychological terms as a justification for accepting dualism, but we have already seen that the intentional terms and the phenomenal terms resist naturalization in different ways: we might eventually be forced to accept a dualist account of the intentional mind but not of the phenomenal mind, or vice versa, so even a convincing argument for dualism wouldn’t entail that psychological predicates refer to something homogeneous.

As for phenomenal arguments about the unity of perception, apperception, consciousness or what have you, “unity” is exactly what one would expect if one held that in the final analysis psychological predicates referred to embodied beings in physical environments. Kant, one of the greatest and richest philosophers in this field, has to work hard on his account of the unity of mind because he does endorse just the distinction between the rational mind and the conscious (that is physical-world-experiencing) mind that I am stressing here, he doesn’t think that the rational mind can be naturalized and he does think (he fears) that the conscious mind can be. (Strictly speaking Kant’s famous distinction between the “noumenal” and the “phenomenal” worlds is epistemological – the world of experience is that part of the world-in-itself that our minds can feature in a representation – but if rationality is assigned to the noumenal and sensory experience is assigned to the phenomenal then the distinction is equivalent to the one I am making here.) If there were persuasive natural semantics available for both types of psychological predicate (contra Kant who thinks there can be none for intentional predicates) then the “unity of mind” would have been shown to be simply the unity of body: to claim that mind is unanalyzable prima facie is to beg this question.

There is one more objection that cannot be avoided, this one from familiar arguments in the area of personal identity. A defining argument in the area of personal identity is that between advocates of physical continuity and advocates of psychological continuity. At least since the time of Locke the majority view has been that psychological theories of personal identity are more persuasive than physical theories. Imagine (the story goes) that one’s mind has been switched with another (physical) person’s: mind A in body B and mind B in body A. Where (one asks the students) are you now? Most people have the intuition that they go where their mind goes, that is, that they are their mind as opposed to their body if forced to make the choice. It is significant that it does seem possible to conceive of one’s mind separated from one’s body. Isn’t that a problem for any physicalist theory of mind? I think it is, and I will take up the issue of what it is actually possible to conceive, and what that possibility might show, in Chapter Three in the discussion of the “absent qualia” arguments, the possibility of “zombies” etc.

But what is at issue in this section is not the mind/body problem itself but the ground-preparing question of whether there are two problems rather than one. Consider the “memory theory” owed to Locke himself. On this view shared memories are the psychological link that establishes the continuity of self across the passage of time (the old general remembers the brave officer’s battle, the brave officer remembers the young boy stealing the apple and so forth). But if the operationalist is right that memory is a representational system that gains, edits and stores information, that functional capacity is still not sufficient to constitute selfhood: two beings with the same database are not thereby the same person. And if the phenomenologist is right that no amount of functional description will ever capture the quality of conscious experience then there can be no purely functional account of memory itself, let alone of personal identity.
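The point that identity of stored content does not add up to identity of person can be put very compactly. A minimal sketch (the "memory systems" and their contents are of course invented for illustration): two databases can be equal in content while remaining two distinct things.

```python
# Two "memory systems" with identical databases, after Locke's
# general/officer/boy example. The contents are invented placeholders.
brave_officer = {"memories": ["stole the apple", "fought the battle"]}
old_general = {"memories": ["stole the apple", "fought the battle"]}

# Identity of representational content: the databases are equal.
same_content = brave_officer == old_general

# But content-equality does not make them one individual: they are
# two distinct objects, and would remain so however much we copied
# between them.
same_individual = brave_officer is old_general
```

Here `same_content` is true and `same_individual` is false, which is just the operationalist's predicament in miniature: duplicating the database duplicates the functional memory but not the person.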

If, on the other hand, we have a phenomenal account of memory continuity – that would have to be something like “having memories with the identical qualities” – then we get the problem in reverse, since we cannot establish the causal role of consciousness (which is just another way of putting the phenomenologist’s point that we cannot provide a functional account of consciousness). So a phenomenal account of memory (whatever that might be) would also not be sufficient if used to try to establish personal identity. Identity of representational content and identity of qualitative experience are both necessary, but neither is sufficient, for personal identity. Since the reason that neither account of memory is sufficient is that each leaves the other one out that establishes that they are two different things.

To summarize, my claim is that there are two metaphysical problems for the naturalization of psychology. My method is to look at the metaphysical commitments – the semantics – of the vocabulary of psychological predication. This vocabulary divides into two sets of words. First there is the intentional vocabulary. This consists of words like “belief,” “desire,” “hope,” “fear” and so on. Use of these words appears to commit us to the existence of rationality and mental representation; I will use the word intelligence to refer to the intentional mind in toto. The other set is the phenomenal vocabulary. This consists of words like “sensation,” “pain,” “taste,” “texture” and so on. Use of these words appears to commit us to the existence of consciousness. Operationalist theories are theories about intelligence; phenomenal theories (which are rather thin on the ground, for reasons I will discuss in Chapter Three) are theories about consciousness.

Once one sees that there are two mind/body problems, not one, it is possible to address each problem in turn. Chapter Two breaks down the problem of intentionality further, developing the distinction between the problem of mental representation and the problem of rationality, and offers two respective arguments to naturalize the semantics of intentional predicates. Chapter Three offers arguments to the effect that the problem of phenomenology is a pseudoproblem and then explains how phenomenal predicates can be naturalized as well. The arguments in the two chapters are different responses to different metaphysical problems, but taken together they may work towards a naturalistic semantic for psychological predicates. In the more speculative Chapter Four an account of the nature of the relationship between intelligence and consciousness is proposed that reflects the conclusions of the earlier chapters.

Sunday, October 17, 2010

Consciousness: the other horn of the dilemma in philosophy of mind

I take Turing’s thought experiment to be entirely persuasive, with the radical and happy outcome that, among other things, it reveals the old epistemological chestnut “the problem of other minds” to be a pseudoproblem (Wittgenstein emphasizes this). There is another famous gedanken-style argument in the philosophy of mind that I find equally persuasive, owing to John Searle: the Chinese Room Argument. I found both the Turing Test and the Chinese Room Argument to be rather fast and baffling at first, and then I went through a period of doubt and resistance, but I cannot find any argument that shows either of them to be fallacious or misapplied (and many, many have tried). I now feel certain that they are both correct. The only problem is that they are mutually contradictory.

Imagine, Searle asks, a person in a room. The room has a slot where people outside the room can enter printed notes and another slot where he can put out notes in response. This person cannot read or speak Chinese. He has two things: a large cache of Chinese characters (maybe he has a Chinese-character typewriter), and a set of instructions. The instructions are purely formal: for each Chinese character or set of characters that comes in to the room, there is specified a character or set of characters to be put out. Chinese-speakers write notes and put them into the room: “What is the capital of France?” say, or “What is your favorite food? Mine is chocolate.” or “I plan to vote for Obama, but my brother disagrees.” The person in the Chinese Room examines the characters, finds them in the instruction manual, and prints out the responding characters that are specified there. The instructions are such that the Chinese-speakers are satisfied that they are conversing with an intelligent being, one that knows something about geography and any number of other topics and can converse about food, politics, relatives and so on.
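The purely formal character of the instructions is worth making vivid. A minimal sketch, with invented rule-book entries: the "instruction manual" is nothing but a lookup table from incoming character strings to outgoing character strings, and nothing in the program represents what any string means.

```python
# A toy sketch of Searle's Chinese Room. The rule book pairs input
# symbol strings with output symbol strings; the entries below are
# invented placeholders. The mapping is purely formal: no parsing,
# no semantics, no understanding anywhere in the system.
RULE_BOOK = {
    "法国的首都是哪里？": "巴黎。",            # "What is the capital of France?" -> "Paris."
    "你最喜欢的食物是什么？": "我喜欢巧克力。",  # "What is your favorite food?" -> "I like chocolate."
}

def chinese_room(note: str) -> str:
    """Find the incoming characters in the manual and emit the
    response specified there, exactly as the person in the room does."""
    return RULE_BOOK.get(note, "请再说一遍。")  # default: "Please say that again."
```

To the Chinese-speakers outside, the outputs are competent conversation; inside, there is only string-matching. That asymmetry is the whole of Searle's point.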

According to Turing, the Chinese-speakers (and everyone else outside the room) would have to conclude that the Chinese Room was intelligent. In fact the Chinese Room just is intelligent (no inference is necessary) since, on the operationalist view, “intelligence” consists of nothing more nor less than this kind of intelligent behavior; there is no question of being wrong here. On the contrary, Searle argues that the Chinese Room knows nothing. Neither the person in the Room nor the Room as a whole has any idea what the topic is, or even that there is a topic: not even that the characters mean anything at all. The Chinese Room is, according to Searle, a formal rule-governed symbol-manipulating device and nothing more, and as such it knows nothing at all, and nothing that knows nothing can be considered “intelligent.” A thing lacking all awareness is not an intelligent thing.

Searle’s specific target is computationalism, the view that (human) cognition is a form of computation, in other words that intelligent humans are formal rule-governed symbol-manipulating systems. He doesn’t think that an intelligent artifact is impossible, because he’s a materialist: he accepts that an artifact with the same relevant causal properties as a human body would have the same kind of intelligence. It’s just that a computer is not that artifact. A computer can have a database as full of symbolic representations (words, pictures) about Paris as you like, but it is only the human user who can grasp what the symbols represent. And what is that? Cheese shops full of hard parmesan and soft camembert, well-dressed people whizzing by on motor scooters, cigarette butts stuck in the metal grid floors of the Metro: a specific place full of sounds, smells, textures, tastes and scenes.

The taste of the wine, the smell of the cigarettes, the feeling that the well-dressed people don’t admire your ensemble: these are conscious experiences. Humans have them, computers do not. Only beings who have conscious experiences (who are, that is, conscious) can know what a symbol stands for, because “knowing” consists of an appreciation of the quality of the relevant experiences. A human doesn’t even have to have been to Paris to get some feel for the place; they can read about it on their computer screen! No amount of increase in the computational power of a mere formal rule-governed symbol-manipulating device will be sufficient for understanding absent this capacity for qualitative experience. This capacity is consciousness.

But how is it that consciousness is a metaphysical problem? Here is another famous gedanken-style argument, this one owed to Frank Jackson, which makes the metaphysical nature of the problem clear. Imagine Mary, a color-blind color-vision specialist. Mary is an expert on the science of color perception. This involves a great deal of scientific expertise: Mary knows about the physics of light, for example that red light and blue light differ in wavelength; she knows about the light-absorbent and light-reflective properties of surfaces; she understands the way the rods and cones at the back of the retina respond to light and accordingly stimulate the optic nerve; she knows about the visual cortex and how the cells are arranged and connected there. Let’s say Mary is the world’s foremost color-vision specialist. Let’s even idealize Mary a little bit: let’s say that she is in possession of the complete and correct physical description and explanation of color vision, from the physics of light to the neurophysiology of perception. She knows all there is to know, and she’s got it all right.

Mary is color-blind. She has never seen a blue or red surface, only blacks, grays and whites. That is, she doesn’t know what colors look like. Sadly, she does not have the capacity for the relevant qualitative experience (I’ve always suspected that Mary has over-compensated for her disability in the pursuit of her chosen career). If this is right, then a complete and correct physical description and explanation of experience is lacking some information: what it is like to see colors (to use a phrase made famous by yet another exponent of the problem, Thomas Nagel). Now we have another putative mental “property,” and like the semantic property it appears to be unanalyzable into physical properties. There is even a noun, quale (singular of qualia), that denotes these qualitative feelings: the quale of this bite of chocolate I’m taking is this particular taste-sensation that constitutes my being conscious of the chocolate in my mouth. Conscious experience consists of qualia and qualia are not analyzable into, identifiable as, or reducible to physical properties.

Thus psychology cannot be naturalized. There is something called phenomenal description (the description of the quality of experience) that necessarily is always distinct from physical description. The study of experience qua experience is called phenomenology, but I will call the metaphysical problem, following the usage in contemporary philosophy of mind, “the problem of consciousness.” This is the subject of Chapter Three. There is a close connection between this problem as it is framed by contemporary philosophy of mind and the much older philosophical problem of the possibility of a radical difference between our experience of the world and the world as it actually is. In modern philosophy it is more common to put this as an epistemological problem (for example in the literature of skepticism). Both the English-language phenomenalists and the Continental phenomenologists of the early 20th century wanted to put metaphysics behind them, but I will maintain that progress here can only be made in the context of an explicitly metaphysical discussion. Nor would my conclusions be congenial to philosophers of that era: I will argue that the phenomenalists were in the grip of a disastrous misinterpretation of Hume and that phenomenology is impossible.

Like most people I tend to be drawn towards symmetry. Alas, Chapters Two and Three do not have symmetrical arguments. Whereas I break the problem of intentionality down into two constituent problems, the problem of representation and the problem of rationality, and offer positive theories to handle both, I will argue that the problem of consciousness is in fact a pseudoproblem and thus not amenable to (or in need of) any “theory” at all. Nonetheless even if one is persuaded, as I am, by the argument that the problem of consciousness is a pseudoproblem it turns out that there still remains something to say about metaphysics and consciousness and that discussion forms the second part of Chapter Three.

Philosophy of mind finds itself, at the beginning of the 21st century, to be at something of an impasse. For much of the 20th century operationalists had an agenda stable enough and productive enough that they were able to basically ignore the challenge of the phenomenologists, although the rejection of behaviorism as a popular psychology, after a long battle from Aldous Huxley’s iconic Brave New World through B. F. Skinner’s incendiary Beyond Freedom and Dignity, made the problem clear enough. (A crucial exception was Wittgenstein, but I will save that discussion for Chapter Three.) Gradually the dam broke and by the end of the 1980s thanks to Searle, Jackson, Nagel and others the post-“Analytic,” English-language philosophy of mind community acknowledged the problem of qualia as a central problem, and today one of the most thriving branches of the field, quite at home with the scientific neighbors in the area of “cognitive studies,” is “consciousness studies.” I will call those who take the problem seriously the “phenomenologists” although no doubt some will think that term comes with too much baggage; I ask the reader’s indulgence for the sake of exposition.

These new phenomenologists quickly set about demonstrating the inadequacy of functionalism and operationalist approaches in general as comprehensive theories of mind. For any qualitative experience (any quale) that appears to have a causal role in the production of behavior, the argument goes, one can conceive of a being with the functionally equivalent behavior but not the quale (a number of these “absent qualia” arguments, while mostly to the same point, are important enough to get their own discussion in Chapter Three). This might seem to be more of a problem for the advocates of phenomenology than it is for the advocates of operationalism but the opposite is true: if a functionally complete description and explanation of a person lacks any description or explanation of consciousness then functionalism is in the same position as Jackson’s Mary gedanken appears to put physicalism in general: it is not a complete theory of mind. In the literature this is often tagged as the “zombie” problem: the zombie is the allegedly conceivable functionally-complete but consciousness-lacking person.

The phenomenologists, for their part, have often accepted that the problem of consciousness does indeed thwart the naturalization of psychology, just as their older Continental namesakes did (although with considerably less enthusiasm). For example there is a well-developed line that a “property” dualism is inevitable, a kind of epistemological dualism that does not commit one to actual metaphysical dualism. I don’t think so: I think that metaphysical physicalism entails epistemological physicalism, on the grounds that that is the only possible significance of such a metaphysical assertion. There is a group that calls itself the “mysterians,” who argue that we just have to concede that there is no accounting for the relationship between the physical and the phenomenal. And one of the most noted writers on the topic in recent years, David Chalmers, had considerable success with his suggestion that metaphysical dualism is the right theory after all (admittedly the suggestion is made in a Berkeleyan spirit: we should just concede metaphysical dualism and move on). An exception to these various counsels of despair is Searle, and that is another discussion elaborated in Chapter Three. But with exceptions the phenomenologists find themselves with an apparent refutation of operationalist theories but without a coherent theory of their own.

The book you are reading is titled The Mind/Body Problems; the aim of the title is to draw your attention to the plural. The next section is, I think, straightforward, but it is one of the most important sections of the book.

Sunday, October 10, 2010

The first horn of the dilemma in contemporary philosophy of mind

We are put onto the horns of our current dilemma by good arguments, not bad ones. The first line of argument at the heart of contemporary philosophy of mind is exemplified by Alan Turing’s work and his “Turing test,” although perhaps the most important elaboration of the line is that found in the writings of Ludwig Wittgenstein, and the whole approach has its roots in the empiricism of David Hume. Hume argued that we were on firm ground when we could specify experiences that grounded our descriptions of and theories about the world. Hume identified “metaphysics” with the traditional, pre-empiricist philosophy of the “Schoolmen,” as he called them, and he is a typically modern philosopher in that he imagined that he had done away with a great deal of traditional philosophy altogether; at least, that was his aim. He understood that this radical empiricism had radical implications for psychology: he denied that there was anything that could be called the “mind” other than the bundle of perceptions and thoughts introspection revealed, and questioned whether anything that could be called the “self” (other than the perceiving and acting body) could be said to exist, for the same reasons. The “mind” and the “self” were for Hume too close in nature to the “soul,” a putative non-physical entity of the sort that the Enlightenment empiricist wanted to eliminate along with angels and ghosts.

The early 20th century heirs to Hume were the behaviorists. Too often today behaviorism is regarded solely as a failed movement in the history of 20th century psychology, but it is important to appreciate that behaviorism was an attempt, and a very powerful, respectable and still-interesting attempt, to naturalize psychology. It is also important to see that the motivation for developing behaviorism for the empiricist-minded philosophers and psychologists of the time was essentially metaphysical. The ghostly mental entities, figuratively located “in the head,” that were the nominal referents of psychological descriptions and explanations (“beliefs,” “desires,” “attitudes,” etc.) had to be washed out of the ultimate, natural semantics. Behaviorism proposed to naturalize psychology in a simple way: stick to a strict empiricist methodology. If the methodology of science was adhered to, ipso facto psychology would be a science. For present purposes “behaviorism” can be defined as the view that psychological predicates (“He believes that Boston is north of here,” “She is hungry”) refer in fact to observable dispositions to behave: behaviorism is a good example of “theory of mind” as semantics of psychological language.

Behaviorism is a full-blown theory of mind (a general semantics for the psychological vocabulary) that eliminates any reference to anything “in” the mind. On one interpretation this is simply a methodological prohibition on psychologists who aspire to being “scientific” from referring to these “inner” (that is, unobservable) mental states and processes. This version is variously called “soft,” “methodological,” “psychological” or (my coinage) “agnostic” behaviorism. A more radical interpretation is that the inner is an illusion, a historic misconception. This more radical version, the leading avatar of which is Wittgenstein, is variously called “hard,” “metaphysical,” “philosophical” or “atheistic” behaviorism. I don’t want to get sidetracked here by the complicated story about behaviorism’s varieties and the varieties of problems and objections behaviorism encountered. Just now what we need is to grasp and appreciate what was powerfully persuasive (and enduring) in the empiricist line of theory of which behaviorism is an example.

Alan Turing, thinking about computation and computing machines, took a behaviorist approach to the word “intelligence.” He famously proposed the “Turing test”: when an intelligent, sane and sober (that is, a somewhat idealized) person, interacting with a machine, can no longer see any difference between the outputs of said machine and the outputs of an intelligent (etc.) person, at that point we will have to concede that the machine is (actually, literally) intelligent as well. Machine intelligence will have been achieved. “Outputs”: the Turing test is usually conceived as a situation where there are a number of terminals, some connected to people, some to machines. Human interlocutors don’t know which are which. Questions are asked, comments are made, and the terminals respond; that is, there is linguistic communication (there is actually an annual event where this situation is set up and programmers enter their machines in competition). Turing himself never saw a personal computer, but he was conceiving of the test in roughly this way.

However, “outputs” could be linguistic, or behavioral (imagine a robot accomplishing physical tasks that were put to it), or perhaps something else (imagine an animated or robotic face that made appropriate expressions in response to people’s actions and statements). Nor does the candidate intelligent thing need to be an artifact, let alone a computer. I am following Turing in sticking to the deliberately vaguer word “machine” (although it’s true that Turing theorized that intelligence, wherever it was found, was some species of computation). Imagine extraterrestrials that have come out of their spaceship (maybe we don’t know if they’re organisms or artifacts), or some previously unknown primate encountered in the Himalayas, say. The point is that in the case of anything at all, the only possible criteria for predicating “intelligence” of the thing are necessarily observation-based. But after all, any kind of predication, psychological or otherwise, is going to depend for its validity on some kind of observation or another (“The aliens are blue,” “The yeti is tall”), and psychological predicates are no different.
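The operational point, that the judge has access only to outputs and never to what produces them, can be put in a few lines of code. A minimal sketch, with invented responder functions and an invented reply: the value returned to the judge carries no trace of its source.

```python
import random

# Two responders behind the terminals. The judge never sees which is
# which; all that reaches the judge is the reply string. The replies
# here are invented placeholders for functionally equivalent outputs.
def human_responder(prompt: str) -> str:
    return "I'd say Boston, but I'm not sure."

def machine_responder(prompt: str) -> str:
    return "I'd say Boston, but I'm not sure."

def imitation_game(prompt: str) -> str:
    """Shuffle the terminals and return only the output, as in the
    Turing test: nothing about the source survives in the reply."""
    responder = random.choice([human_responder, machine_responder])
    return responder(prompt)
```

Since the outputs are indistinguishable, no observation available to the judge could ground a different predication of “intelligence” for one responder than for the other, which is Turing's operationalist conclusion.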

Wittgenstein gives perhaps the most persuasive version of this argument in what is usually called his “Private Language Argument.” Wittgenstein holds that language is necessarily intersubjective. (In fact he thinks that it is not possible for a person to impose rules on themselves, so ultimately he thinks that a private language is impossible, but we don’t need to excavate all of the subtleties of the Private Language Argument to see the present point about the criterion of meaningfulness, which is fairly standard empiricist stuff.) If I say to you, “Go out to the car and get the blue bag,” this imperative works because you and I have a shared sense of the word “blue.” Without this shared, public sense communication is impossible, as when people are speaking a language that one can’t understand. Psychological words, just like any other kind of words, will have to function in this intersubjective way: there will have to be some sort of intersubjective standards or other for determining if the words are being used correctly (the two of us have to be following some shared set of rules of use). Wittgenstein emphasizes the point that, to the extent that psychological predicates are meaningful at all, they cannot be referring to anything “inner,” known only to the subject of predication. And for all of the problems and failures of the original behaviorist movement, it is hard to see anything wrong with this central point.

The term of art for any theory of mind that says that psychological words must conform to publicly, intersubjectively established standards and procedures of use in order to make sense is operationalist. Behaviorism is a kind of operationalist theory, and so is functionalism, to which I now turn, so I will use the word “operationalist” when I want to refer to these kinds of theories of mind in general. Operationalist theories appear to handle some critical problems in the philosophy of mind, and constitute the first horn of our dilemma.

Functionalism can be defined as the view that psychological predicates refer to anything that plays the appropriate causal role. That’s a bit cryptic, so I will unpack it with some history. Remember that according to Turing there is no difference between a human and a machine qua intelligent being once the machine’s intelligent performance is indistinguishable from the human’s. Acting intelligent, on an operationalist view, is just being intelligent, just as sounding like music is just being music. “Being intelligent” breaks down into many (perhaps indefinitely many) constituent abilities. For an easy example take learning and memory. Part of being intelligent is being able to learn that there are people in the house, say, and to remember that there are people in the house. Both an intelligent human and an intelligent machine will be able to do this. But the human will do it using human sensory organs and a human nervous system, while the machine will have physically different, but functionally equivalent, hardware.

This is the problem of the multiple realizability of the mental. It is one of the deepest metaphysical issues in the philosophy of mind. Around the middle of the 20th century philosophers of mind concluded that a literal reductive materialism, for example the identification of a specific memory with some specific physical state in a human brain, or of remembering itself with some specific physical process in human brains, committed a fallacy often referred to in the literature as “chauvinism.” These philosophers weren’t the first to see this: Plato and Aristotle, for example, not only saw this problem but developed some of the best philosophical analyses of the issue that we have to this day. I want to stress that once we accept any kind of operationalist theory, the problem of multiple realizability is undeniable. Humans, dolphins (among other animals), hypothetical intelligent artifacts and probably-existing intelligent extraterrestrials will all take common psychological predicates (“X believes that there are fish in the barrel,” say, or “X can add and subtract”). In fact the extension of the set of beings who will take psychological predicates is indefinitely large and does not appear to be fixed by any physical laws.

Functionalism, like behaviorism, is motivated by essentially metaphysical concerns, in the case of functionalism by the problem of the multiple realizability of intelligence. Functionalism abstracts away from hardware and develops a purer, more formal psychology: any intelligent being, whatever they may be made of, whatever makes them tick, will have (by definition) the ability to learn, remember, recognize patterns, deduce, induce, add, subtract and so forth. Although the more enthusiastic advertisements for functionalism like to point out (rightly enough, I suppose) that functionalism, in its crystalline abstraction, is even compatible with metaphysical dualism, functionalism is best understood as a kind of non-reductive materialism. That is, while the general type “intelligent beings” cannot be identified with any general type of physical things, each token intelligent being will be some physical thing or another.

This extends to specific mental states and processes as well, of course: the human, the dolphin, the Martian and the android all believe that the fish are in the barrel, they all desire to get to the fish, and they all understand that it follows that they need to get to the barrel. Each one accomplishes this cognition with its physical body somehow, but they all have different physical bodies. There is token-to-token identity (that’s the “materialist” part), but there is no type-to-type identity (that’s the “non-reductive” part). It is no coincidence that functionalism was the most influential theory of mind of the late 20th century, the age of computer science. The designer (the psychologist) sends the specifications down to the engineers (the computer scientists and the roboticists): we need an artifact with the capacity for learning, memory, pattern recognition and so on. The engineers are free to use any materials, devices and technology at their disposal to devise such an artifact.
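The designer/engineer analogy can be made concrete in code. What follows is a minimal illustrative sketch of my own (the names `Rememberer`, `SetMemory`, and `LogMemory` are hypothetical, not drawn from the philosophical literature): one functional specification, two physically different realizations of it, and a predicate that is true of any token that plays the role, whatever its “hardware.”

```python
from abc import ABC, abstractmethod

# The functional specification (the "designer's" spec): anything that can
# learn and recall a fact counts, regardless of how it is built.
class Rememberer(ABC):
    @abstractmethod
    def learn(self, fact: str) -> None: ...
    @abstractmethod
    def recalls(self, fact: str) -> bool: ...

# One realization: storage in a set.
class SetMemory(Rememberer):
    def __init__(self):
        self._facts = set()
    def learn(self, fact):
        self._facts.add(fact)
    def recalls(self, fact):
        return fact in self._facts

# A physically different realization: an append-only log that is scanned.
class LogMemory(Rememberer):
    def __init__(self):
        self._log = []
    def learn(self, fact):
        self._log.append(fact)
    def recalls(self, fact):
        return any(entry == fact for entry in self._log)

# The psychological predicate applies to any token playing the role.
def remembers_people_in_house(agent: Rememberer) -> bool:
    agent.learn("there are people in the house")
    return agent.recalls("there are people in the house")
```

Both tokens satisfy the predicate even though they share no implementation type: token-to-token identity (each token is some physical/computational thing) without type-to-type identity (no one physical type is picked out by the predicate).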

This realization that functional descriptions do not analyze down to physical descriptions (a realization at the center of Aristotle’s writings) is a great advance in philosophy of mind. It changes the whole discussion of the metaphysics of intelligence and rationality in a decisive way. In Chapter Two I will argue that operationalist theories in general can indeed provide an intuitively satisfying naturalistic semantics for predications of cognition, intelligence and thinking. To close this introductory discussion of the first horn of the dilemma I will quickly sketch the way operationalist theories can also be deployed to address another core metaphysical problem, the problem of mental representation and mental content. Then I will be able to define one of the most important terms in this book and one of the most difficult terms in philosophy of mind: “intentionality,” the subject of Chapter Two.

“Representational” theories of mind hold that it is literally true that cognitive states and processes include representations. To some this may seem self-evident: isn’t remembering one’s mother, for example, a matter of inspecting an image of her “in the mind’s eye”? Isn’t imagining a tiger similarly a matter of composing one’s own, private image of a tiger? There are reasons for thinking that mental representations must be formal, like linguistic representations, rather than isomorphic, like pictorial representations: How many stripes does your imaginary tiger have? Formal representations, like novels, have the flexibility to include only relevant information (“The Russian guerrillas rode down on the French encampment at dawn”), while isomorphic representations, like movies, must include a great deal of information that is irrelevant (How many Russians, through what kind of trees, on horses of what color?). While there are those who argue for isomorphic representation, most representational theorists believe that mental representations must be formal rule-governed sets of symbols, like sentences of language. The appeal of such a model for those who want to approach cognition as a kind of computation is obvious.

Some of these issues between species of representational theory will be developed in Chapter Two, but for introductory purposes four more quick points will suffice: First, why mental representation/content poses a metaphysical problem; second, how we can define the often ill-defined word “intentionality”; third, which psychological words are taken by representational theorists to advert to mental content; and finally, how operationalist theories might be successful in addressing the metaphysical problem of representation.

The metaphysical problem is that symbols per se seem to have a “property,” the property of meaning, which does not appear to be analyzable as a physical property. This issue is addressed in philosophy of language, but language and other symbol-systems are conventional (albeit the products of long evolutionary processes); the location of the ur-problem is in philosophy of mind. Consider the chair in which you sit: it does not mean anything. Of course you can assign some arbitrary significance to it if you wish, or infer things from its nature, disposition and so forth (“All of the world is text”), but that doesn’t affect the point: physical objects in and of themselves don’t mean anything or refer to other things the way symbols do. Now consider your own, physical body: it doesn’t mean anything any more than any other physical object does. Nor do its parts: your hand or, more to the point, your brain, or any parts of or processes occurring in your brain. Your brain is just neural tissue humming and buzzing and doing its electrochemical thing, and the only properties included in our descriptions and explanations of its workings are physical properties. But when we predicate of a person mental states such as “He believes that Paris is the capital of France,” or “She hopes that Margaret is at the party tonight,” these mental states appear to have the property of referring to, of being about, something else: France, or Margaret or what have you. It looks, that is, like the mental state has a property that the physical state utterly lacks.

I can now offer a definition of “intentionality.” In this book, intentionality refers to two deeply intertwined but, I will argue, separable metaphysical problems: 1) the problem of the non-physical property of meaning that is implicit in any representational theory of mind (I will call this “the intentional property” or sometimes “the semantic property”), and 2) the problem of rationality, that is, the apparent lack of any physical parameters that could fix the extension of the set of beings that take predicates of rationality (or intelligence). The intentional vocabulary consists of words like “belief,” “desire,” “hope,” “fear,” “expectation,” “suspicion,” the word “intention” in its ordinary use, and so on. Psychological predication using these words is often called “intentional psychology” or “belief/desire psychology” or sometimes (usually pejoratively) “folk psychology.” The intentional vocabulary consists of all and only those words that appear to entail mental representation; they characteristically take what the literature calls “that-clauses,” as in {A belief that “Paris is the capital of France”}, or {A hope that “Margaret will be at the party tonight”}.

On a widespread representationalist view these are propositional attitudes, in the respective examples the belief that the proposition “Paris is the capital of France” is true and the hope that the proposition “Margaret will be at the party tonight” is true. It is commonly suggested that, since these intentional states are individuated by the content of the propositions towards which they are attitudes, propositions must be represented somehow in the mind. Such a view commits one to the existence of the non-physical “property” of meaning. This is not (or at least not entirely!) an abstruse argument amongst philosophers: any model of the nervous system as an information-processing device makes this commitment, and the most cursory perusal of standard neuroanatomy textbooks is enough to see that they are saturated with this kind of language.

On my view naturalizing psychology requires that putatively non-physical “properties” be washed out of the final analysis in favor of solely physical properties (the only kind there are). That is, I think that representational theories of mind are false. To use the term of art in theory of mind, I am an eliminativist about mental representation and content. Mental representation will be the main topic of the first part of Chapter Two, which in many ways is the heart of the book. To conclude this introductory section I will briefly sketch how operationalist theories of mind might open the way toward an acceptably naturalistic semantics of the intentional vocabulary.

Behaviorism is also a kind of eliminativist theory: behaviorism eliminates (from the semantic analysis of the psychological vocabulary) anything unobservable at all, including private “inner” mental states and processes. Functionalism, behaviorism’s more sophisticated progeny, acknowledges that states and processes “in the head” (that phrase may be taken either literally or figuratively here) play causal roles in the production of behavior (“The thought of X reminded him of Y and he started to worry that Z…”), but still manages to rid the analysis of psychological predication of reference to mental states (to intentional states, in the present case). It does so by describing cognition functionally rather than physically. Take any sentence that includes an intentional phrase, say: “At the sight of his mother’s photo he remembered the crullers she used to bake, and this memory motivated him to go to the grocery and buy sugar, butter and unbleached flour.” The representationalist is, it would seem, committed to the view that a representation of the crullers is playing a causal role here. But a functional description of the cognitive process can substitute a generic functional-role marker thus: “At the sight of his mother’s photo he X’d, and this X motivated him…etc.” Now “X” can stand for anything that plays the appropriate functional role, and obviously this no longer commits us to the existence of representations or of anything else with non-physical properties.
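The substitution described above can be rendered schematically. The sketch below is my own hypothetical illustration (none of these names come from the text): the intentional sentence mentions a specific content (the memory of the crullers), while the functional description quantifies over any state X whatsoever that is triggered by the stimulus and motivates the behavior.

```python
def cognitive_episode(stimulus, x_state, action):
    """Causal chain: stimulus -> X -> behavior, where x_state may be
    anything at all that plays the mediating role; no representation
    of the content is presupposed by this description."""
    if x_state(stimulus):   # X is caused by the stimulus...
        return action()     # ...and in turn motivates the action
    return None

# Two quite different candidates for the role X; the functional
# description is entirely neutral between them.
x_exact_match = lambda s: s == "mother's photo"
x_substring = lambda s: "photo" in s

# The downstream behavior that the X-state motivates.
buy_ingredients = lambda: ["sugar", "butter", "unbleached flour"]
```

Calling `cognitive_episode("mother's photo", x_exact_match, buy_ingredients)` or swapping in `x_substring` yields the same shopping trip: the episode is individuated by its causal role, not by what X is or what (if anything) it represents.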

As I said, the two problems of intentionality (the problem of rationality and the problem of mental content) are separable. In Chapter Two I will first develop a naturalistic semantics for intentional predication, one that is eliminativist about mental content. Then I will offer a second argument about the problem of rationality that relocates the metaphysical problem outside of philosophy of mind. Both of these arguments acknowledge the validity of the operationalist maxim exemplified by the Turing Test: outside of some formal, intersubjective standards for identifying intelligence through public observation there can be no justifiable reasons for predicating intelligence of a being or for refusing to do so.