Consider Hilary Putnam’s “Twin Earth” argument. Imagine, the argument goes, a Twin Earth: one that is molecule for molecule identical to this Earth. Of course you will have a Twin Earth doppelganger, molecule for molecule identical to yourself. On a reductive materialist account, granting that you have physically identical brain states, it follows that you have psychologically identical mental states. For example, you will have identical beliefs about water, the colorless, odorless substance that is ubiquitous in your parallel worlds. But imagine that there is just one difference between Earth and Twin Earth: on Earth water is composed of H2O, but on Twin Earth water is composed of XYZ. Now everything about you and your doppelganger – both your physical states and your mental contents – is identical, yet your beliefs are not the same. Your beliefs may be correct and your doppelganger’s false (he or she may believe, as you do, that water is H2O), but at a minimum they are about different things. Thus the actual meaning of an intentional state cannot be determined either by its physical properties or, startlingly, by its mental properties (the causal properties that can only be explained with reference to its contents).
Another gedankenexperiment from Putnam: Imagine an ant walking in the sand, behaving normally for an ant, for the usual ant reasons. It leaves a trail that resembles a portrait of Winston Churchill. Is this a representation of Winston Churchill? No, it is not, even though it has the same physical properties as an actual sketch of Winston Churchill: how it came to be is relevant to its status as a representation. Something is not a representation by virtue of resemblance alone: Winston Churchill, after all, is not a representation of a sketch of himself, or of an ant trail that looks like him.
Putnam’s famous slogan is that “meanings just ain’t in the head.” The meaning of an intentional state is determined not only by physical and mental properties intrinsic to the subject (“in the subject’s head”) but also by facts about the environment and the subject’s relationship with the environment. This view is called “externalism,” and is also referred to as the “wide content” view: the view that intentional predicates refer to relationships between the nominal subject of intentional predication and their environment.
The representational theory of mind is deeply entrenched, and some disambiguation is necessary to avoid outlandish interpretations of externalism and its claims. In this book I am examining the meanings of psychological predicates; in this chapter, intentional predicates. So for me the “meaning” in question is the meaning of words like “believes,” “desires,” “hopes,” “fears” and so on, in the sense of ontological reference: what ontological commitments do we make when we use these words? When Putnam says that “meaning isn’t in the head,” he isn’t talking about the meaning of the word “belief” (my topic); he’s talking about meaning in the sense that the proposition alleged to be the object of the attitude is supposed to be about something: on the representational realist view, the proposition is the thing that is about something, and the proposition is represented “in the head.”
It is a consequence of externalism that beliefs (or any intentional states) aren’t about anything at all. That’s not how they work. They’re not even about the world, let alone internal representations of the world, because what they actually are, ontologically speaking, are relationships between persons and their environments, and relationships aren’t about anything, any more than physical objects or dispositions to behave are about anything. And that is exactly what naturalization requires: that there is no longer any reference to anything that “means” anything, meaning being a non-physical and therefore non-existent “property.” On a functional-role semantics words themselves aren’t about anything, not even proper nouns, so they can’t serve as magical vessels of meaning as they do on a traditional view. Language is a way that persons – whole embodied persons situated in particular environments – have of doing things.
Here I am going beyond Putnam (undoubtedly an unwise thing to do!), because Putnam helps himself to mental content even as he argues that mental content alone is not sufficient to determine meaning. But if meaning, understood as “wide content,” turns out to be a description of relationships between persons and environments, then there cannot be any mental content. There are no propositions transcendently emanating from Platonic heaven whether or not some person adopts an attitude towards them. There are only specific, concrete instances of states of affairs (I will use “states of affairs” as a more economical way of saying “relationships between persons and their environments”; this is consistent with the more standard use of the phrase as referring to ways the world could be) and goal-directed, token incidents of language use.
Is Santa Claus a problem? Can’t one be thinking about Santa Claus even though Santa Claus is not part of the environment? No, Santa Claus is not a problem, because Santa Claus is a cultural convention, and that counts as part of the environment. In the Santa Claus case one is thinking about a mythical character, not an actual one, because mythical characters are not actual. That’s what “mythical” means. As for misrepresentation (as in the case of the five-year-old who believes that Santa Claus is actual), this is something that can only be demonstrated operationally.
What about a person or creature or what have you that exists only in the imagination of one individual? A stranger who appears in a dream, say? Here the right response is to remind ourselves that we are talking about the semantics of intentional predication, not about private experience. In fact the argument is that there can be no public description of private experience; remember Wittgenstein’s beetle-in-the-box. Note that, to the extent that “interpreting representations” is thought of as a process with a phenomenal component (after all, mustn’t there be “something that it’s like” to interpret a representation?), the putative experience of a representation (can one interpret a representation without experiencing it?) cannot be the criterion for proper intersubjective use of the word (see the discussion of phenomenal predicates in Chapter Three).
But surely dreams are evidence of mental representation? Aren’t dreams, in fact, just direct experiences of mental representation? No: although mental processes often involve experiences that seem similar to inspecting representations, remember that there is no explanatory value in literally positing mental representations. They don’t help to explain dreaming any more than they help to explain perceiving, remembering or imagining. In fact they make the model of the mental process considerably more complicated and difficult, which is a good reason for denying them. Berkeley thought that to clear up the Lockean mess of properties of objects-in-themselves, powers of objects to cause perceptions, and properties of perceptions, either the “mental” or the “external world” had to go. On that point he was right.
A little bit more disambiguation: my intentions here are perhaps deceptively arcane. I am focused on the metaphysics of the mind-body relationship. When I argue that, metaphysically speaking, there are in fact no such things as “mind” or “meaning,” I am arguing that traditional notions of those concepts are currently misleading us in our efforts to understand how the nervous system works. I don’t think any radical change in the way we talk is called for. In fact it is my view that a great deal of our psychological talk is ineliminable, and I think this goes for “mind,” “reference” and “meaning” just as much as for “belief” and “desire” and “beauty” and “justice,” and for the same reasons. If one accepts the present argument against mental representation, it still makes as much sense as it ever did to ask “What are you thinking about?” or “What is that book about?” or “What does that word mean?” The metaphysical question is about the proper semantic analysis of the way that we have always talked; that is nothing like a critique. As Wittgenstein said, philosophy “leaves everything as it is.” If the present proposal that intentional predicates pick out relationships between embodied persons and their environments is sound, then that has always been what we have been doing. Jettisoning realism about mental representations is a substantial matter for cognitive science – I would stress the importance for developing experimental paradigms in brain science – but it’s hard to see how it could have any effect on popular usage of intentional terms.
To summarize, intentional predicates are applied to whole persons in particular environments, not to brains or nervous systems or neural processes or any physical part of persons. Perceiving, imagining, thinking, remembering and so on are the kinds of things that whole persons do. Among these intentional activities is interpreting. Symbols are interpreted by persons, and thus symbols must be located where persons are located: in the world. It follows that language, a symbol system, is a feature of the person’s environment as well: there are no symbols in the head. There is not, ontologically speaking, any such thing as meaning: there are only particular acts of persons negotiating their environments with the use of sounds and symbols (and, following the mereological fallacy argument, this will be true of all language use, including idle musings, random thoughts, etc.). The use of the word “meaning” as applied to intentional predication (“What is he thinking about?” “What does she know about gardening?” etc.) is partially constituted by facts about the environment (as Putnam argued). The natural semantics for intentional predicates is that they refer to relationships between individuals and their environments. If the account given here is persuasive, there cannot be any such things as “mental representations.”