Think of a tiger. Alright: now, how many stripes does your
imaginary tiger have? Probably your
“mental image” of a tiger turned out not to have a specific number of stripes. Representational theories of mind hold that
it is literally true that cognitive states and processes include
representations as constituent parts. To
some this may seem self-evident: isn’t remembering one’s mother, for example, a
matter of inspecting an image of her “in the mind’s eye”? Isn’t imagining a tiger similarly a matter of
composing one’s own, private image of a tiger?
But there are good reasons for thinking that mental representations must
be formal, like linguistic
representations, rather than isomorphic,
like pictorial representations. Formal
representations, like novels, have the flexibility to include only relevant
information (“The Russian guerrillas rode down on the French encampment at
dawn”), while isomorphic representations, like movies, must include a great
deal of information that is irrelevant (How many Russians, through what kind of
trees, on horses of what color?). While
there are those who argue for isomorphic representation, most cognitive
scientists believe that mental representations must be rule-governed sets of
symbols, like grammatical sentences of language. “Cognitivism,” while a broad term, can fairly
be defined as the view that cognition is an internal (neural) kind of
information processing using symbols: that is the cognitivist’s model of brain
function.
Noam Chomsky is perhaps the
most influential founder of cognitivism, which at the time (the late 1950s and early 1960s) was conceived as a challenge to the then-dominant behaviorism. Cognitivists like Chomsky argued (as
Descartes had 300 years earlier) that language distinguished humans from
non-human animals in such a way that, unlike non-linguistic animals, humans
could not be modeled as stimulus-response learning mechanisms as (very roughly)
the behaviorists maintained. Chomsky,
critiquing B. F. Skinner’s account of “verbal behavior,” made the enormously
influential proposal that formal syntactical structure was “generative”:
grammatical frameworks like “The ___ is the ___” allowed for multiple inputs and thus
for the generation of indefinitely many (linguistic) representations. Thus human speech was not “tropic”: the
tropic calls of non-linguistic animals were determined by natural history and
environmental stimuli, but human speech was, through its generative nature,
liberated from these natural determinants.
According to Chomsky, this generative grammar was the cognitive
foundation that enabled humans to have mental lives of a qualitatively unique
sort.
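To make the generative idea concrete, here is a minimal sketch in Python. It is my own toy illustration, not Chomsky’s formalism: a handful of rewrite rules, applied recursively over a small vocabulary, suffice to produce indefinitely many distinct sentences.

```python
# A toy context-free grammar, applied recursively. A minimal sketch of
# "generativity": a few rules plus a small vocabulary yield indefinitely
# many distinct sentences. (Illustrative only; not Chomsky's own formalism.)
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],   # second option is recursive (via PP)
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["tiger"], ["river"], ["hunter"]],
    "V":  [["watches"], ["follows"]],
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol into a list of words by applying rules recursively."""
    if symbol not in GRAMMAR:            # terminal word: stop here
        return [symbol]
    options = GRAMMAR[symbol]
    if depth >= max_depth:               # cap recursion so expansion terminates
        options = [options[0]]
    words = []
    for part in random.choice(options):
        words.extend(generate(part, depth + 1, max_depth))
    return words

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the tiger watches the hunter near the river"
```

The point of the sketch is only that the rules, not a fixed repertoire of responses, do the work: nothing in the grammar pairs a stimulus with an output, yet the space of possible sentences is unbounded.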
The early, canonical Chomsky under
discussion here held that while the behavior of non-linguistic animals could
indeed be explained solely through appeal to behaviorist learning models and
biological determinants, the behavior of humans could not. (In recent years Chomsky has modified this
view somewhat, but mostly by acknowledging that some non-linguistic animals
have conscious experiences. Consciousness will be discussed in Chapter
Three. I am not aware of Chomsky
anywhere repudiating his original position that non-linguistic animals cannot
be said to have intentional states.) Taken to its extreme, this argument appears to
show that it is necessary for a being to have an innate (“genetic”) syntactical
system for generating propositions in order to be in an intentional state at
all (this is what was at stake in the famous primate sign-language research,
which was initiated by behaviorists seeking to debunk Chomsky by showing that
grammar could be learned by non-linguistic primates).
When Bertrand Russell coined the
phrase “propositional attitude” in his 1921 book The Analysis of Mind, he wasn’t thinking of “proposition” in the
sense of a piece of language. He was
thinking that what was represented was a situation or what would today most
likely be called a “state of affairs,” a way the world (or some part of the
world) could be. The latter-day
cognitivists took a much more literal view of propositional attitudes as
linguistic entities. Their claim was
that the syntactical structure of language was, in fact, the basis of
representation in the first place. On
the cognitivist conception of intentional states (beliefs, desires, etc.) all
intentional states can be described as attitudes towards propositions. By “attitudes” here one means attitudes
towards the truth values of
propositions: To believe that the drinking fountain is down the hall is to have
the attitude towards the proposition “The drinking fountain is down the hall”
that it is true, and to have a desire for water is to have the attitude towards
the proposition “I will drink water soon” that one intends to make it true,
hopes it to be true and so on. Propositional
attitudes are understood as attitudes towards content that can be expressed in
“that”-clauses: one has a belief that
“The cookies are in the jar,” a hope that
“There is milk in the fridge,” a fear that
“Mom will say no to eating cookies,” and so on.
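The schema just described can be pictured as a simple data structure: an intentional state is an attitude type paired with a propositional content, its “that”-clause. The Python sketch below is only my own illustration of that schema, using the examples above; the names and types are not part of any standard formalism.

```python
# The propositional-attitude schema as a data structure: an intentional
# state is an attitude type paired with a propositional content (its
# "that"-clause). Names here are illustrative only.
from dataclasses import dataclass
from enum import Enum, auto

class Attitude(Enum):
    BELIEF = auto()   # holds the proposition to be true
    DESIRE = auto()   # wants to make the proposition true
    HOPE = auto()
    FEAR = auto()

@dataclass(frozen=True)
class PropositionalAttitude:
    attitude: Attitude
    content: str      # the proposition toward which the attitude is held

# Intentional states are individuated by attitude *and* content:
states = [
    PropositionalAttitude(Attitude.BELIEF, "the cookies are in the jar"),
    PropositionalAttitude(Attitude.HOPE,   "there is milk in the fridge"),
    PropositionalAttitude(Attitude.FEAR,   "Mom will say no to eating cookies"),
]
for s in states:
    print(f"{s.attitude.name.lower()} that {s.content}")
```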
The defender of propositional attitudes
argues that intentional states can only be individuated by virtue of their
respective contents. What makes Belief X
different from Belief Y is that X is about Paris, say, and Y is about something else. This looks like a block to reduction: to
correlate electrochemical activity in the nervous system with Belief X, we must
already be able to specify which belief Belief X is (for example by
asking an awake subject what they are thinking while their brain is being
simultaneously scanned). We don’t have
any way of getting from no-content to content (from the non-mental to the
mental). This motivates the contemporary
version of the problem of mental causation: it appears that the content (the meaning) of the proposition is what plays the causal role in the production of behavior. When told to proceed to the capital of France, he went to Paris because he believed that “Paris is the capital of France.” An explanation in purely physical (neurophysiological) terms, however exhaustive, would only be explanatory if it at some point revealed the meaning expressed in the proposition, and it doesn’t: “He believes that Paris is the capital of France” is not shorthand for a causal chain of neurophysiological processes. This is an argument for the ineliminable role
of the semantic property: “intentional realism” is the view that mental
representations are an ineliminable posit of cognitive science.
Donald Davidson famously pointed out a
further problem for the development of “psychophysical laws” (as he called
them), laws that systematically identified brain processes with particular
contents of intentional thought: no one
propositional attitude could ever suffice as the discrete cause of a behavior
because the causal implication that the propositional attitude has for the
acting subject necessarily emerges from the logical relations that that
“attitude” has with all of the other
intentional states of the subject (this is the point where the problem of
representation and the problem of rationality connect). Davidson’s phrase for this was “meaning
holism,” the view that meaning (inasmuch as meaning can be posited as something
playing a causal role in the production of behavior) is a property of networks of propositional attitudes, not
of individual ones. There is not an
assortment of individual intentional states that constitute a person’s mental
content such that one or another might be identifiable as the sole cause of behavior (although one
might be the proximate cause): each
person has an intentional economy, if you will, and our immediate reasons for
acting are the running product of the unfolding logical relationships among
this network of propositional attitudes.
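One crude way to picture the holist point (my own toy illustration, not Davidson’s formalism): whether a given attitude issues in action depends on its logical relations to the rest of the agent’s attitudes, so removing any one member of the relevant cluster changes what follows from all the others.

```python
# A toy illustration of meaning holism: whether an attitude leads to action
# depends on its relations to the rest of the agent's attitudes, not on that
# attitude taken in isolation. (Illustrative only.)
def action_from(attitudes):
    """Derive an action only if the whole relevant cluster of attitudes is present."""
    needed = {
        "believes: Paris is the capital of France",
        "believes: I was told to go to the capital of France",
        "desires: to do what I was told",
    }
    return "go to Paris" if needed <= attitudes else None

network = {
    "believes: Paris is the capital of France",
    "believes: I was told to go to the capital of France",
    "desires: to do what I was told",
    "believes: trains run to Paris",
}
print(action_from(network))                                       # -> go to Paris
print(action_from(network - {"desires: to do what I was told"}))  # -> None
```

No single line of the network is “the” cause of the trip to Paris; what does the causal work is the cluster standing in the right logical relations.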
Finally, getting down to ontological
bedrock, some philosophers argue that since physical entities and processes
don’t have semantic properties (the properties of meaning something, of having
truth-values and of having logical relations with each other) then there must
be some other sort of entities that do.
They nominate propositions, considered now as matter-independent,
mind-independent entities. This dualist
suggestion is usually called “Platonic realism” about propositions. Propositions, according to this view, are not
identical to their corresponding sets of physical tokens (sentences). The idea is that, as with proofs of
mathematics, the entire set of physical tokens of a given proposition (written,
spoken or otherwise) could fail to exist without affecting the existence of the
transcendental, non-physical entity that is the proposition itself. These philosophers invite us to consider all
of the propositions that have never been uttered or written, or even thought:
aren’t they in some sense nonetheless “there,” just as there are certainly many
still undiscovered proofs of mathematics?
On this view propositions, like math proofs, are eternal, unchanging,
mind- and matter-independent non-physical entities that possess the
non-physical (semantic) properties that the propositional attitude model of
intentional states (or for that matter any representational theory of mind)
entails.
Those of us with an interest in
metaphysics are willing to at least consider this sort of suggestion on its own
terms. Hard-headed experimental
cognitive scientists, on the other hand, perhaps already impatient with the
very idea of metaphysical problems as such, may feel that now this discussion
has gone too far: “We’re not Platonic dualists, for heaven’s sake!” But it’s not obvious that representational cognitivists
can really distance themselves from this kind of anti-naturalism (patent
dualism in fact). The representational
theorist of mind needs representations precisely because representations, and
only representations, are the kinds of things that could possess semantic
properties. “No computation without
representation,” as Jerry Fodor put it with his characteristic impishness.
The representational cognitivists,
then, appear to be at least tacitly committed to the position that intentional
(semantic) properties are real and non-physical, and this position entails
further ontological commitments. Of
course anyone is free to embrace some kind of dualist ontology if that is what
they are convinced the world is like. But
I, for one, will opt for some kind of monist ontology if that is one of the
plausible options. I would sooner try to eliminate non-physical properties from the theory of mind than grant the existence of the non-physical entities that would have to exist to bear them, if it turns out that we can do without those properties in the first place. And while one may pursue the satisfactions of philosophy for their own sake, the fact is that these ontological
difficulties need to be worked out if psychology (or at least cognitive
science) is ever to be naturalized.
In any event, one can summarize the
standard representational cognitivist view as holding that representation is necessary
for intentionality (it’s the representations that mean something, and this semantic content plays an ineliminable
causal role), that syntactical structure (formal rules of composition) is
necessary to generate representations and that, therefore, the central project
of cognitive science is the investigation of this syntactical structure. In the late 20th century this
cognitivist paradigm supported a great flowering of work in theoretical
psychology (much as the behaviorist paradigm had supported a great flowering of
experimental psychology in the earlier decades of that century).