Sunday, June 19, 2011

A Short Introduction to the Philosophy of Artificial Intelligence

This is a rough draft. I participated in an interdisciplinary class, and I'm thinking of submitting this, maybe to "Teaching Philosophy."

I. The historical background

AI is not only a rich source of new technology produced by interdisciplinary syntheses. In its theoretical component it is also an extension and elaboration of some of the central, canonical debates about “intelligence,” “mind” and “rationality” that have defined philosophy and psychology for hundreds of years. Specifically we find ourselves participating in the conversation that dates back to the “Early Modern” period of philosophy, roughly the 17th and 18th centuries, between the so-called “Rationalists” (Descartes, Spinoza, Leibniz) and the so-called “Empiricists” (Locke, Berkeley, Hume). The Rationalists, impressed by humans’ apparently unique ability to formalize mathematics and logic, held that the human mind was endowed with innate abilities and knowledge, and that these abilities could not be understood using the methods of natural science (views anticipated by Plato). The Empiricists of the Enlightenment, eager to develop a naturalistic account that integrated humans into nature, proposed a simplified psychology that essentially saw the mind as a learning machine, and concentrated on perceptual psychology and learning theory. (Nowadays historians of philosophy tend to see the Rationalist/Empiricist distinction as a bit overstated, since in retrospect both camps were discussing the same set of issues from many of the same premises.)

An important product of this Early Modern discussion, introduced by Descartes in the first half of the 1600s (Descartes 1637) but crystallized by Kant at the end of the 1700s (Kant 1781/1787), was the representational theory of mind. According to this view the mind works by constructing a representation of the world; Kant developed the idea of a “conceptual framework” such that our “picture” of the world was as much a product of our own innate mental structure as it was of our perceptual experiences. Thus the issue of mental representation is an essential issue in the elaboration of the nativist/learning-theorist divide as it plays out across the 19th and 20th centuries. For example, the behaviorists of the early 20th century are neither more nor less than Humean empiricists: they applied “operationalist” ideas from the philosophy of science to try to develop a psychology cleansed of any reference to unobservable, “internal” mental “states,” including representations (mental content). On the other side, the phenomenologists of the same period advanced the thoroughly Kantian argument that the study of the structure of experience would always necessarily stand apart from physical science. (Here we can stop and notice an even deeper root: the medieval question of the duality of the body and the soul.) In the middle of the 20th century the “nature/nurture” debate, as this same set of issues was then called, was of central importance in debates about the social sciences in general, and a central battleground of the “culture wars” of the 1960s and 1970s. The nativist/learning-theory divide also shaped the 20th century ethological literature about the mental lives of non-human animals.

II. Computation and representation

The issue of representation is central to contemporary debates about models of computation. In fact the theory of computation is yet another version of the same argument that runs through the theory of the social sciences and the theory of ethology. In 1936 Alan Turing introduced his “Turing machine,” a thought experiment showing that a simple machine could, in principle, carry out any algorithm of mathematics and logic. This was a seminal moment not only in the development of computers but also in the course of artificial intelligence research. For the next fifty years many in the cognitive science community, and the public at large, saw “artificial intelligence” as simply synonymous with computer science. Two crucial points here: first, to understand what is happening in artificial intelligence research today it is necessary to understand the computationalist era, because what we are currently living through is a departure from that era. Second, computationalism, as conceived by Turing and others, required representation: classical computation is rule-governed symbol manipulation.

At this point we can consider some basic premises of linguistics. The classical computationalist view reached its apotheosis in 1975 with the publication of Jerry Fodor’s The Language of Thought. Noam Chomsky had launched what seemed for a time a devastating attack on behaviorism with his critique of B. F. Skinner’s 1957 book Verbal Behavior and with his subsequent Aspects of the Theory of Syntax (1965). Chomsky argued that a syntactical structure (a grammar, or set of rules for constructing sentences and statements) was generative (it could generate novel linguistic representations and therefore novel thoughts), and was thus necessary for higher-order thought (this argument led to the sign-language research with chimpanzees of the 1960s-80s). This was, as Chomsky himself stressed, Cartesianism in a new bottle.

Fodor applied these ideas to cognitive science in general. Any representational theory of mind requires a symbolic architecture, that is, a material instantiation of the symbols: the pixels on the computer screen, the ink marks on the page, the sound-compression waves produced by vibrating vocal cords, the chalk marks on the board. If the nervous system is a symbol-manipulating system then there must be a material instantiation of the symbols as part of the physical structure of the system. Fodor proposed that syntactical structure (the program, if you will, of the brain) could account for the causal role of seemingly semantic mental content. This arch-computationalist view took it as axiomatic that the mind/brain necessarily involved representations.

III. Computers and the brain

Computers are our own creations, so their workings are not mysterious to us. The same thing cannot be said of the brain. Each age draws on the current technology as a metaphor/theory about how the brain works: the 17th century physicalist Thomas Hobbes, for example, drew heavily on hydraulics in his discussion of the mind. He speculated that memory might be a kind of vibration, as in a spring, that lost coherence as other vibrations passed through. In our time it is commonplace to speculate that the brain is a kind of computer and that a computer is a kind of a brain. However there are two very different approaches to developing this idea.

Classical computation is based on codes (programming languages) that contain explicit instructions for the transformation of states of the machine. The actual “machine language” is binary code (this is the meaning of “digital”). The symbolic architecture in a traditional computer is located in the “chip”: a series of gates, each of which either allows an electrical impulse to pass through or blocks it. Thus the “1s” and “0s” of digital codes stand for actual physical states of the machine. If the human brain is also a system that functions by instantiating representations, then the goal of cognitive science is to uncover the machine language of the brain: to make the connection between the psychological description of the subject and the actual physical state of the nervous system.
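To make “rule-governed symbol manipulation” a bit more concrete, here is a minimal sketch in Python (purely illustrative, with no claim about how any actual chip is wired): binary states are transformed by explicit gate rules, and the other logical operations are composed out of a single NAND gate.

```python
# Illustrative only: "1"s and "0"s manipulated by explicit rules, which is all
# that classical, symbol-manipulating computation requires.

def nand(a: int, b: int) -> int:
    """A single gate: blocks the signal only when both inputs are on."""
    return 0 if (a == 1 and b == 1) else 1

# Every other logical operation can be composed from NAND gates.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", "AND:", and_(a, b), "OR:", or_(a, b))
```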

The brain does, in fact, possess physical features that lend themselves to a theory of symbolic architecture similar to that found in digital computers. The brain is a massive assemblage of individual neurons that interact with each other through the flow of electrical impulses (“cascades”). The impulses do not pass arbitrarily, of course; the brain shows immense organizational complexity. But essentially one neuron or group of neurons will, upon being “lit up” by a cascade of electricity, either send the event onward to the downstream neurons or fail to do so, and this can be seen as the “1/0” analog. What’s more, between neurons there is a space, the synaptic cleft, which contains a soup of neurotransmitters that buffer the electrical connection (they can be more or less conductive). So instead of an “on/off” potential, like a light switch, there is a graded potential, like a volume control. This vastly increases the number of possible physical states of which the brain is capable. All of this constitutes a non-arbitrary reason for thinking that the brain may indeed function like a traditional computer: the synaptic pattern could be the symbolic architecture of the brain just as the disposition of the gates in the chips is the symbolic architecture of the computer.
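A toy contrast, again in Python and making no claim about real neurophysiology (the weights and threshold below are invented for illustration), between an on/off connection and a graded, “volume control” connection:

```python
# Toy contrast between an on/off gate and a graded ("volume control") connection.
# Weights and threshold are arbitrary stand-ins, not measurements of anything.

def binary_gate(inputs):
    """Light-switch style: fires only if every upstream unit fired."""
    return 1 if all(i == 1 for i in inputs) else 0

def graded_unit(inputs, weights, threshold=1.0):
    """Volume-control style: each connection contributes more or less,
    and the unit fires when the weighted sum clears a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

if __name__ == "__main__":
    upstream = [1, 0, 1]
    print(binary_gate(upstream))                   # 0: not every input is on
    print(graded_unit(upstream, [0.9, 0.4, 0.3]))  # 1: weighted sum 1.2 >= 1.0
```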

However a new generation of computer models now challenges classical computation and its axiom that representation is necessary for computation. In this generation of research, computers are modeled on brains, while at the same time these machines are contributing new insights into how brains themselves work. The movement is sometimes referred to as “parallel distributed processing” and as the “neural net model,” but it has come to be popularly known as “connectionism.”

Classical computation has some limiting and apparently intractable problems. As anyone who has worked with computers knows, they are insufferably single-minded. This is natural, as they can only do what they are told to do by their programmers: “garbage in, garbage out.” One of the central problems for traditional computers is the “frame problem.” Consider any homonym, for example “bank.” An ordinary human has no trouble during conversation distinguishing between the two senses in sentences like “I was lying on the bank of the river” versus “I made a withdrawal from my bank.” Traditional computers are strictly limited in terms of contextualizing. This is because computers don’t actually know anything. They are devices for manipulating symbols and nothing more.

What’s more, traditional computers can’t learn anything new; they know only what they are told. Now, remember the Rationalist/Empiricist debate. The Rationalists thought that there was an innate conceptual structure, incarnate in language, of essentially mathematical and logical principles, and that this structure (the mind, or soul) was the source and basis of rational behavior. The Empiricists argued that a naturalistic psychology required nothing more than an ability to learn from experience on the basis of trial and error, and they were skeptical of non-physical states and entities. Connectionist computer models are empiricist approaches to computing in the same way that behaviorism is an empiricist approach to psychology. Connectionist machines do indeed show some primitive ability to learn on their own; they function (ideally) with no recourse to internal codes or representations; and they are solidly based on basic principles of evolutionary biology.

Connectionist machines function, as brains do, by forming patterns of activation. An input layer of nodes is electrically stimulated; this layer in turn stimulates some number of “hidden,” internal layers, which ultimately stimulate the output layer. Activation potentials can be weighted in various ways, but the basic mechanism is the number of active nodal connections, which can reach a threshold for downstream activation:

(Insert figure of simple connectionism: input layer, hidden layer, output layer)
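Until the figure is in place, here is a minimal Python sketch of the same layered structure (the weights and thresholds are arbitrary stand-ins, not taken from any real system): an input pattern activates a hidden layer, which activates the output layer.

```python
# Minimal feed-forward pass: input layer -> hidden layer -> output layer.
# Weights and thresholds are invented for illustration.

def layer(inputs, weights, threshold=0.5):
    """Each downstream node fires (1) if its weighted input sum clears the threshold."""
    outputs = []
    for node_weights in weights:          # one weight vector per downstream node
        total = sum(i * w for i, w in zip(inputs, node_weights))
        outputs.append(1 if total >= threshold else 0)
    return outputs

if __name__ == "__main__":
    input_layer = [1, 0, 1]                           # pattern of activation
    hidden = layer(input_layer, [[0.6, 0.1, 0.2],     # two hidden nodes
                                 [0.1, 0.9, 0.1]])
    output = layer(hidden, [[0.7, 0.4]])              # one output node
    print(hidden, output)                             # [1, 0] [1]
```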


This technology underlies the handwriting-, voice- and facial-recognition functions that are now commonplace (an early application was sonar-based submarine and missile recognition). The learning is achieved through trial and error: a trainer adjusts the activation potentials to increase correct outputs and to extinguish incorrect ones. This process does not require any internal symbolic content.
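Here is a minimal sketch of that trainer’s role, using a perceptron-style update rule (my own choice of rule for illustration; the essay does not specify one): connection weights are nudged upward when the unit should have fired and didn’t, and downward in the opposite case, with no symbols or rules stored anywhere in the system.

```python
# Trial-and-error training of a single unit (perceptron-style rule, illustrative):
# weights are nudged toward correct outputs and away from incorrect ones.

def fire(inputs, weights, threshold=0.5):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def train(examples, n_inputs, rate=0.1, epochs=20):
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - fire(inputs, weights)   # +1, 0 or -1
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
    return weights

if __name__ == "__main__":
    # Teach the unit a simple pattern: fire only when both inputs are on.
    examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w = train(examples, n_inputs=2)
    print([fire(x, w) for x, _ in examples])         # expect [0, 0, 0, 1]
```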

Here it is useful to note that Darwin’s model of evolution as outcomes-based selection over random variation is very much a product of empiricism. In fact Darwin was reading the Scottish Enlightenment economist Adam Smith’s 1776 Wealth of Nations, with its account of larger economic structures formed from the bottom up through iterated exchanges between autonomous, self-interested individuals, when he was developing his account of natural selection (Darwin 1859). An important distinction between the Rationalist program and the Empiricist one is that Rationalists tend to see complex systems as organized from the top down, whereas Empiricists see complexity as emerging from the bottom up. The distinction between classical computation and connectionist computing mirrors this distinction.

However the field of AI is moving in even more radical directions. Although modern cognitive scientists will obviously disavow Cartesian dualism about the mind and the body, in a sense the Cartesian model has often been simply transposed into a brain-body distinction. On a common view it is the brain that is (now) the “cognitive theater,” the seat of representations, the CPU where thinking takes place: the same role Descartes assigned to the res cogitans (Hacker). This view underlies the assumption that AI research is simply an extension of computer science. That collective assumption is now collapsing.

IV. Robotics

On a representational model, “beliefs” and other mental states are instantiated in the form of mental content: language, images and so forth “in the head.” As I said, this is recognizably a continuation of a kind of Cartesian dualism. Indeed representational models are essentially dualistic if representations are taken to have semantic properties that are not analyzable as physical properties (this is one of a number of philosophical issues that I went into in some depth in the class). An alternative view is that psychological predicates are predicated not of brains but of whole persons.

Stomachs don’t eat lunch. People eat lunch. True enough, one needs a stomach to eat one’s lunch, but it doesn’t explain how a person eats lunch to say, “Their stomach eats lunch for them.” Brains don’t think. They don’t imagine, dream, solve problems or recognize patterns. People do those things, just as people believe, desire, hope, fear, etc. In fact, committing this mereological fallacy (the fallacy of confusing the part with the whole) has obstructed our ability to learn what it is that brains actually do. We have been sidetracked by the misconception that brains are little people in our heads.

“Embodied cognition” is the name given to a recent movement in cognitive science that rejects representational models of thought. The idea is that “thinking” is an activity that is distributed over the whole body. This movement has been in a particularly fertile dialectical relationship with robotics. (Not surprisingly this community has developed some excellent internet resources where students can see footage of robots in action.) It is clear enough that the future of AI lies as much with the field of robotics as with the field of computer science. What is important in an interdisciplinary context is to see the underlying, and quite old, philosophical considerations that make that clear. This also presents an opportunity to discuss the history and philosophy of science.

9 comments:

  1. An excellent introduction for those not familiar with artificial intelligence!

    Recent developments suggest quantum computation may play a role in intelligence and indeed consciousness. While very skeptical, Penrose et al. have explored this idea to some credit. This might be worth mentioning. Otherwise, a very informative read.

  2. I prefer Artificial Intelligence over the vast majority of DNA’s Artless Superficial Intelligence.

  3. Hi, Mitchell Monaghan here,

    To distinguish mind--thought etc-- from body is just as dualistic as to distinguish thought, mind, from the brain.
    This is the problem: to create two entities such as mind and body, and define them as different and separate from each other and yet as inseparable, since they are defined by reference to each other---is to make reconciliation impossible.
    Either the mind will be reduced to brain or the brain to mind in order to escape the ambiguity caused by the mutual exclusivity.
    Either one or the other must be eliminated because where one begins and the other ends is impossible to determine.
    It is impossible because the two entities are defined as mutually exclusive.
    And behind this is the notion that reality is one thing --one entity--and so one set of statements only must apply and only one point of view.
    It is fundamentally a position of realism---that the correct words and the actual thing, are the same --and yet words are not the same as the thing itself---and this is just another aspect of the mind body split.
    It is this confusion caused by words, mind versus world or body
    --or thought versus thing--that will have us all chasing our tails for millennia.
    It all comes of holding that there may be only one set of statements that may be valid, which comes from holding that there is a thing and the word for the thing and making no distinction between the two; holding that the actual thing and the word are the same in some way.
    In other words--a kind of blind realism.
    It also comes from not acknowledging that premises are premises--are in some sense arbitrary presumptions that must qualify any conclusion.
    Mind and body and their split is such a premise and presumption.
    Why not just say---"this is what we have concluded upon these premises" ? Because it is held that there can only be one reality
    and so one set of words for that reality--since the correct words and the reality must be the same thing.
    But people differ, and if it is acknowledged that our descriptions make up our reality at least in part--then it seems one can just as easily conclude that there are many realities--not just one.
    I am with Feyerabend on this, that there is nothing wrong with multiple perspectives. There are already such differences between different sciences, and this will, I am convinced, become more common within sciences in future.
    The attitude that considers it intolerable that Quantum Mechanics and Relativity lie unintegrated will, I think, diminish.

  4. The transcendentally real, animist venture weaved its way through the aeons, to this modernity, where only the names of its proselytizers - AI, robotics, computation, brain-modelling - have changed.

    Those not swayed by this ancient movement may notice that there is something oddly unproductive or impotent about it. Its presentations appear as pantomime, an empty show. Does a sequence of starts and stops really amount to a "computation"? Is the voodoo doll any different from the robot? At least the voodoo doll was supposed to have some innate power. And does the brain really "work"? For whom or what? Again, is information "out there"? As opposed to what?

    There is a type of person who has always regarded these technical endeavours with vague suspicion. They see the supporters of these endeavours baptise their tools with a quasi-mystical force that sets them apart from the tool user. Yet, at the end of the day, the baptism is empty, and nothing more mysterious or substantive materializes than the common man's efforts to get things done, to put away his tools, and dine at table.

  5. What always got me about the notion that we represent the world---the "thing in itself" being hidden from us---is that whatever we come up with is a representation
    and not the world itself. But that means that we can never confirm whether the "thing in itself" exists or not, since we never get past the representation; and further---we can't even confirm that there is or is not such a thing as a "thing in itself"--since all we ever have is a representation and never the thing itself. Thus, the "thing in itself" is purely an assumption, a premise.
    To put it another way: any representation cannot be the world, and consequently the very scheme and notion---of a world that is represented but can't be seen directly---is itself a representation and so can't be the world in itself. So, what are we talking about? Seems a self-cancelling notion to me.
    Far as I can see, the whole thing is a muddle.
    And what kind of foundation is that to build upon?
    Seems to me a what-you-see-is-what-you-get approach might be better; that is, what is seen or sensed or perceived is what there is, subject to subsequent modification or discard.
    That seems to me a better description of what we humans actually do. If we hear a sound like a duck--we assume it is a duck, subject to later (a little later or much later) alteration or even discard.
    Let the robot identify it as a duck too--subject to later change.
    Better to assume, I think, that the world is not hidden from us but is patent, blatantly patent, since, in my view, all we have for the world is what shows up to us-----what else is there? And any subsequent modification of what shows up to us---is just more stuff that shows up!
    Seems to me that a robot intellect needs as simple an approach as possible--so some sort of Kantian scheme, I think, does not fit the bill.

    ---Charles Myro

  6. @Charles, Yes I agree completely. The right view, I think, is Hume's, which is the one you outline: it makes no sense to talk about the "real" world. This helps to understand what Berkeley is doing: exactly the same thing.

  7. Really interesting read, I will most definitely be coming back to this blog regularly.

  8. I'm sorry, please clarify. You undoubtedly know that stomachs DO eat lunch, as do the cells of the stomach and the mouth, throat and whole body.

    So, is the label of dualism a straw-man?

    The parts participate in the mysterious 'telos' of the whole, as your body thinking, of which the brain is a part as recognized through symbology.

    All that seems necessary for symbology to obtain meaning is ambiguity of embodiment. (Perhaps between the part and the whole.)

    Very interesting analysis of how artificial intelligence and the brain correlate in some respects.
    What seems sound about the function of the brain, how it responds, develops logical analysis, deeper mathematical computation and philosophical abstraction, is the approach of the empiricists.
    Sir! I am committed as of now to be a regular visitor of your insightful blog.
