Sunday, October 24, 2010

The Heterogeneity of Mind

In 1949 Gilbert Ryle published The Concept of Mind, one of the most important books of philosophy of mind of the last century and probably the best manifesto of philosophical behaviorism. Although today few would endorse Ryle’s strictly behaviorist semantics of psychological predicates, the book continues to be persuasive as a sustained attack on what Ryle calls “Cartesian” theories of mind. Specifically, Ryle challenges the ancient intuition that the word “mind” refers to some one, unanalyzable thing. He does this more thoroughly (and in a grander style) than anything I can do here, but he wrote at a time when the practice of metaphysics was out of favor in the English-language philosophical world. Today we enjoy both the benefits of the “language” philosophy done by the early 20th century empiricists and the benefits of the revival of metaphysics over the past several decades, a revival motivated to some extent by the emphasis on philosophy of mind.

Imagine, Ryle asks, that a visitor has asked to be shown the university. One walks the visitor through campus: “There is the Student Center, and there is College Hall, and those young people sitting around the fountain over there are students, and there is old Professor Whiskers, you can set your watch by his walks across campus, and say hello to my friend Imelda here, she’s our new dean, now come I’ll show you the library,” and so on. At the end of the day the visitor is asked what he thought of the university. “But,” he protests, “you didn’t show me the university. We only saw buildings, people, books and things like that.” Ryle argues that a similar “category mistake” is made when we posit, behind or above or in addition to specific, observable behaviors, a “ghost in the machine,” a “mind.” He further argues, echoing Hume, that there is no “inner mental space” where mental events occur. His very title, “the concept of mind,” telegraphs his view that “mind” is a heterogeneous concept.

A heterogeneous concept is one that turns out, under analysis, to consist of multiple, distinguishable things. Ryle points out that the grammatical behavior of nouns can lead us to think that something exists when there is nothing (Dickens's Mr. Pickwick, for example), and that this is, practically speaking, much the same mistake as thinking that only one thing exists when in fact the concept involves many things (Dickens, one of his novels, the tradition of fiction; football players, uniforms, equipment). All I mean by "analysis," a term I am not using in any technical way, is thinking about the referents of the term (semantics and metaphysics often come to the same thing). Examples of heterogeneous concepts from outside of philosophy of mind are value terms like "ethics" or "beauty," or for that matter a great many abstract nouns, such as (opening the dictionary randomly) "reservoir." Wittgenstein famously explored the heterogeneous nature of the concept of “game.”

Heterogeneous words are common (really, I don’t like to use the word "concept," although it is hard to avoid; it comes with a treacherous load of academic baggage. I'm thinking about the uses of the word; the nature of the “concept” is what is at stake, after all). We can understand the continuity of meaning between "That man's reservoir of good will" and "The city's reservoir of water" (the first use began as a metaphor on the second), but if we are thinking about what the word refers to, the two uses are different enough that it makes most sense to say "'Reservoir' is a heterogeneous word," meaning that it is a word that can refer to multiple, distinguishable things.

If we stay alert to the fact that individual nouns, and particularly abstract nouns, routinely turn out to refer to distinguishable things, we can sometimes clear the smoke away a bit from philosophical arguments. For example, ethical theorists (perhaps not the best ethical theorists, but quite a few ethical theorists) might see themselves as involved in some sort of partisan contest: are the “rights theorists” correct (or better or what have you), or are the “consequentialists” the ones who are giving us the best account of things? Or maybe virtue theory is preferable to both? Certainly philosophers working on ethical theory are frequently identified as “rights theorists” or as “consequentialists”: “I’m a consequentialist” is taken to mean not only that one endorses consequentialism but also that one declines to endorse the other types of ethical theory on offer.

But wait: actual people are "ethical" on a formal, logical sort of level (respecting others' rights through applying the logic of universality) and "ethical" on a situational, emotional sort of level (minimizing felt harm through the capacity for empathy) and they appreciate "good" people who they estimate to be salutary examples of a well-realized person (a “gentleman of Athens”). In fact real ethical people (that is, people when they're actually trying to act ethically rather than merely trying to do ethical theory) use Kantian-style "golden rule" reasoning and Millian outcomes-oriented strategies and they make Aristotelean evaluations of themselves and others all at the same time. “Ethics" turns out to be a heterogeneous concept: the intentions of rational beings, the qualitative experiences of conscious beings and the health or pathology of living beings are all different things, such that there turn out to be not so much differences of opinion among "ethical theorists" as there are changings of the subject. Confusion (and sound and fury) is generated by a presumption that ethical thinking must be one kind of thinking and so there must be one “theory” that gives an account of it. The misleading grammar in this case is the use of a singular abstract noun “ethics,” which creates the strong impression that there is only one topic when in fact there are several that come under that rubric.

The alarmed ethical theorist might speak up at this point: “Too fast.” When David Hume says “Reason is the slave of the passions,” he is making the substantial claim that logical operations are secondary and merely instrumental and that qualitative experience is the primary explanans of “ethics.” When Kant argues that all and only rational beings constitute a “kingdom of ends” he is making a substantial claim that the physical universe portrayed by science (the “phenomenal world”) is valueless qua physical, and that transcendental logical necessity is that explanans. These look to be mutually exclusive claims, and neither is compatible with Aristotle’s view that fulfilling the telos of a living human being is ultimately the aim of “ethical” behavior.

And mutually exclusive they are. But the claim that experience is the only thing we know and the claim that there are no values in the physical world studied by science, and that therefore they must come from somewhere else, are metaphysical and epistemological claims. All philosophy is about metaphysics and epistemology, as unfashionable as it may be to say so these days. And Hume (about whom I will have a good deal more to say in Chapter Three) points out the curious fact that no amount of discussion of physical experience produces any account of programmatic duty, while Kant is moved by his sense of the amorality of the physical world to make the radical claim that the phenomenal world is not, could not be, all that there is. The penultimate difference between Hume and Kant is a difference about the nature of the human mind; as with all of the best philosophers, their views on both ethics and psychology are systematically motivated by more central positions on epistemology and metaphysics. So if there is a persuasive argument that mind is a heterogeneous concept that argument will extend to the claim that ethics is a heterogeneous concept.


The deeply internalized intuition that there is some one thing that is the “mind” reflects the plain fact that there is one thing that is the body. For each person the body is singular (at least in our experience!), and once the idea emerged that the mind existed separately from the body (or, at least, that the mind was metaphysically distinct from the body) it was natural to think that there was a one-to-one correspondence between bodies and minds (or “souls”). But the burden of proof is surely on those who would maintain that psychological predicates refer to some one, unanalyzable thing. The metaphysical dualist points to the difficulty we have in providing a naturalistic semantics for psychological terms as a justification for accepting dualism, but we have already seen that the intentional terms and the phenomenal terms resist naturalization in different ways: we might eventually be forced to accept a dualist account of the intentional mind but not of the phenomenal mind, or vice versa, so even a convincing argument for dualism wouldn’t entail that psychological predicates refer to something homogeneous.

As for phenomenal arguments about the unity of perception, apperception, consciousness or what have you, “unity” is exactly what one would expect if one held that in the final analysis psychological predicates referred to embodied beings in physical environments. Kant, one of the greatest and richest philosophers in this field, has to work hard on his account of the unity of mind because he does endorse just the distinction between the rational mind and the conscious (that is, physical-world-experiencing) mind that I am stressing here: he doesn’t think that the rational mind can be naturalized, and he does think (he fears) that the conscious mind can be. (Strictly speaking Kant’s famous distinction between the “noumenal” and the “phenomenal” worlds is epistemological – the world of experience is that part of the world-in-itself that our minds are able to represent – but if rationality is assigned to the noumenal and sensory experience is assigned to the phenomenal then the distinction is equivalent to the one I am making here.) If there were a persuasive natural semantics available for both types of psychological predicate (contra Kant, who thinks there can be none for intentional predicates) then the “unity of mind” would have been shown to be simply the unity of body: to claim that mind is unanalyzable prima facie is to beg this question.

There is one more objection that cannot be avoided, this one from familiar arguments about personal identity. A defining debate in that area is the one between advocates of physical continuity and advocates of psychological continuity. At least since the time of Locke the majority view has been that psychological theories of personal identity are more persuasive than physical theories. Imagine (the story goes) that one’s mind has been switched with another (physical) person’s: mind A in body B and mind B in body A. Where (one asks the students) are you now? Most people have the intuition that they go where their mind goes, that is, that they are their mind as opposed to their body if forced to make the choice. It is significant that it does seem possible to conceive of one’s mind separated from one’s body. Isn’t that a problem for any physicalist theory of mind? I think it is, and I will take up the issue of what it is actually possible to conceive, and what that possibility might show, in Chapter Three in the discussion of the “absent qualia” arguments, the possibility of “zombies” etc.

But what is at issue in this section is not the mind/body problem itself but the ground-preparing question of whether there are two problems rather than one. Consider the “memory theory” owed to Locke himself. On this view shared memories are the psychological link that establishes the continuity of self across the passage of time (the old general remembers the brave officer’s battle, the brave officer remembers the young boy stealing the apple and so forth). But if the operationalist holds that memory is a representational system that gains, edits and stores information, this functional ability is not sufficient to constitute selfhood: two beings with the same database are not thereby the same person. And if the phenomenologist is right that no amount of functional description will ever capture the quality of conscious experience then there can be no purely functional account of memory itself, let alone of personal identity.

If, on the other hand, we have a phenomenal account of memory continuity – that would have to be something like “having memories with the identical qualities” – then we get the problem in reverse, since we cannot establish the causal role of consciousness (which is just another way of putting the phenomenologist’s point that we cannot provide a functional account of consciousness). So a phenomenal account of memory (whatever that might be) would also not be sufficient if used to try to establish personal identity. Identity of representational content and identity of qualitative experience are both necessary, but neither is sufficient, for personal identity. Since the reason that neither account of memory is sufficient is that each leaves the other out, this establishes that they are two different things.

To summarize, my claim is that there are two metaphysical problems for the naturalization of psychology. My method is to look at the metaphysical commitments – the semantics – of the vocabulary of psychological predication. This vocabulary divides into two sets of words. First there is the intentional vocabulary. This consists of words like “belief,” “desire,” “hope,” “fear” and so on. Use of these words appears to commit us to the existence of rationality and mental representation; I will use the word intelligence to refer to the intentional mind in toto. The other set is the phenomenal vocabulary. This consists of words like “sensation,” “pain,” “taste,” “texture” and so on. Use of these words appears to commit us to the existence of consciousness. Operationalist theories are theories about intelligence; phenomenal theories (which are rather thin on the ground, for reasons I will discuss in Chapter Three) are theories about consciousness.

Once one sees that there are two mind/body problems, not one, it is possible to address each problem in turn. Chapter Two breaks down the problem of intentionality further, developing the distinction between the problem of mental representation and the problem of rationality, and offers two respective arguments to naturalize the semantics of intentional predicates. Chapter Three offers arguments to the effect that the problem of phenomenology is a pseudoproblem and then explains how phenomenal predicates can be naturalized as well. The arguments in the two chapters are different responses to different metaphysical problems, but taken together they may work towards a naturalistic semantics for psychological predicates. In the more speculative Chapter Four an account of the nature of the relationship between intelligence and consciousness is proposed that reflects the conclusions of the earlier chapters.

Sunday, October 17, 2010

Consciousness: the other horn of the dilemma in philosophy of mind

I take Turing’s thought experiment to be entirely persuasive, with the radical and happy outcome that, among other things, it reveals the old epistemological chestnut “the problem of other minds” to be a pseudoproblem (Wittgenstein emphasizes this). There is another famous gedanken-style argument in the philosophy of mind that I find equally persuasive, owing to John Searle: the Chinese Room Argument. I found both the Turing Test and the Chinese Room Argument to be rather fast and baffling at first, and then I went through a period of doubt and resistance, but I cannot find any argument that shows either of them to be fallacious or misapplied (and many, many have tried). I now feel certain that they are both correct. The only problem is that they are mutually contradictory.

Imagine, Searle asks, a person in a room. The room has a slot where people outside the room can enter printed notes and another slot where he can put out notes in response. This person cannot read or speak Chinese. He has two things: a large cache of Chinese characters (maybe he has a Chinese-character typewriter), and a set of instructions. The instructions are purely formal: for each Chinese character or set of characters that comes in to the room, there is specified a character or set of characters to be put out. Chinese-speakers write notes and put them into the room: “What is the capital of France?” say, or “What is your favorite food? Mine is chocolate.” or “I plan to vote for Obama, but my brother disagrees.” The person in the Chinese Room examines the characters, finds them in the instruction manual, and prints out the responding characters that are specified there. The instructions are such that the Chinese-speakers are satisfied that they are conversing with an intelligent being, one that knows something about geography and any number of other topics and can converse about food, politics, relatives and so on.
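
To make the sheer formality of the Room vivid, here is a minimal sketch in Python (my own toy illustration, not anything in Searle's paper), assuming a hypothetical two-entry rulebook; a real rulebook would have to be enormously larger and more context-sensitive to sustain a conversation. The point it illustrates is just that matching incoming characters against instructions and emitting whatever the instructions specify involves no understanding of Chinese at any step.

```python
# A toy "Chinese Room": a purely formal lookup from input characters to
# output characters. The rulebook entries here are hypothetical placeholders.

RULEBOOK = {
    "法国的首都是什么？": "法国的首都是巴黎。",    # "What is the capital of France?" -> "Paris."
    "你最喜欢的食物是什么？": "我最喜欢巧克力。",  # "What is your favorite food?" -> "Chocolate."
}

def chinese_room(note: str) -> str:
    """Find the incoming characters in the instructions and return the
    characters the instructions specify. Nothing in this procedure knows
    what the symbols mean, or even that they mean anything at all."""
    return RULEBOOK.get(note, "对不起，请再说一遍。")  # stock "sorry, please repeat" reply

if __name__ == "__main__":
    print(chinese_room("法国的首都是什么？"))
```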

According to Turing, the Chinese-speakers (and everyone else outside the room) would have to conclude that the Chinese Room was intelligent. In fact the Chinese Room just is intelligent (no inference is necessary) since, on the operationalist view, “intelligence” consists of nothing more nor less than this kind of intelligent behavior; there is no question of being wrong here. On the contrary, Searle argues that the Chinese Room knows nothing. Neither the person in the Room nor the Room as a whole has any idea what the topic is, or even that there is a topic: not even that the characters mean anything at all. The Chinese Room is, according to Searle, a formal rule-governed symbol-manipulating device and nothing more, and as such it knows nothing at all, and nothing that knows nothing can be considered “intelligent.” A thing lacking all awareness is not an intelligent thing.

Searle’s specific target is computationalism, the view that (human) cognition is a form of computation, in other words that intelligent humans are formal rule-governed symbol-manipulating systems. He doesn’t think that an intelligent artifact is impossible, because he’s a materialist: he accepts that an artifact with the same relevant causal properties as a human body would have the same kind of intelligence. It’s just that a computer is not that artifact. A computer can have a database as full of symbolic representations (words, pictures) about Paris as you like, but it is only the human user who can grasp what the symbols represent. And what is that? Cheese shops full of hard parmesan and soft camembert, well-dressed people whizzing by on motor scooters, cigarette butts stuck in the metal grid floors of the Metro: a specific place full of sounds, smells, textures, tastes and scenes.

The taste of the wine, the smell of the cigarettes, the feeling that the well-dressed people don’t admire your ensemble: these are conscious experiences. Humans have them, computers do not. Only beings who have conscious experiences (who are, that is, conscious) can know what a symbol stands for, because “knowing” consists of an appreciation of the quality of the relevant experiences. A human doesn’t even have to have been to Paris to get some feel for the place; they can read about it on their computer screen! No amount of increase in the computational power of a mere formal rule-governed symbol-manipulating device will be sufficient for understanding absent this capacity for qualitative experience. This capacity is consciousness.

But how is it that consciousness is a metaphysical problem? Here is another famous gedanken-style argument, this one owed to Frank Jackson, which makes the metaphysical nature of the problem clear. Imagine Mary, a color-blind color-vision specialist. Mary is an expert on the science of color perception. This involves a great deal of scientific expertise: Mary knows about the physics of light, for example about how red light has a longer wavelength and blue light a shorter one; she knows about the light-absorbent and light-reflective properties of surfaces; she understands the way the rods and cones at the back of the retina respond to the wavelength and intensity of incoming light and accordingly stimulate the optic nerve; she knows about the visual cortex and how the cells are arranged and connected there. Let’s say Mary is the world’s foremost color-vision specialist. Let’s even idealize Mary a little bit: let’s say that she is in possession of the complete and correct physical description and explanation of color vision, from the physics of light to the neurophysiology of perception. She knows all there is to know, and she’s got it all right.

Mary is color-blind. She has never seen a blue or red surface, only blacks, grays and whites. That is, she doesn’t know what colors look like. Sadly, she does not have the capacity for the relevant qualitative experience (I’ve always suspected that Mary has over-compensated for her disability in the pursuit of her chosen career). If this is right, then a complete and correct physical description and explanation of experience is lacking some information: what it is like to see colors (to use a phrase made famous by yet another exponent of the problem, Thomas Nagel). Now we have another putative mental “property,” and like the semantic property it appears to be unanalyzable into physical properties. There is even a noun, quale (singular of qualia), that denotes these qualitative feelings: the quale of this bite of chocolate I’m taking is this particular taste-sensation that constitutes my being conscious of the chocolate in my mouth. Conscious experience consists of qualia and qualia are not analyzable into, identifiable as, or reducible to physical properties.

Thus psychology cannot be naturalized. There is something called phenomenal description (the description of the quality of experience) that necessarily is always distinct from physical description. The study of experience qua experience is called phenomenology, but I will call the metaphysical problem, following the usage in contemporary philosophy of mind, “the problem of consciousness.” This is the subject of Chapter Three. There is a close connection between this problem as it is framed by contemporary philosophy of mind and the much older philosophical problem of the possibility of a radical difference between our experience of the world and the world as it actually is. In modern philosophy it is more common to put this as an epistemological problem (for example in the literature of skepticism). Both the English-language phenomenalists and the Continental phenomenologists of the early 20th century wanted to put metaphysics behind them, but I will maintain that progress here can only be made in the context of an explicitly metaphysical discussion. Nor would my conclusions be congenial to philosophers of that era: I will argue that the phenomenalists were in the grip of a disastrous misinterpretation of Hume and that phenomenology is impossible.

Like most people I tend to be drawn towards symmetry. Alas, Chapters Two and Three do not have symmetrical arguments. Whereas I break the problem of intentionality down into two constituent problems, the problem of representation and the problem of rationality, and offer positive theories to handle both, I will argue that the problem of consciousness is in fact a pseudoproblem and thus not amenable to (or in need of) any “theory” at all. Nonetheless even if one is persuaded, as I am, by the argument that the problem of consciousness is a pseudoproblem it turns out that there still remains something to say about metaphysics and consciousness and that discussion forms the second part of Chapter Three.

Philosophy of mind finds itself, at the beginning of the 21st century, to be at something of an impasse. For much of the 20th century operationalists had an agenda stable enough and productive enough that they were able to basically ignore the challenge of the phenomenologists, although the rejection of behaviorism as a popular psychology, after a long battle from Aldous Huxley’s iconic Brave New World through B. F. Skinner’s incendiary Beyond Freedom and Dignity, made the problem clear enough. (A crucial exception was Wittgenstein, but I will save that discussion for Chapter Three.) Gradually the dam broke and by the end of the 1980s thanks to Searle, Jackson, Nagel and others the post-“Analytic,” English-language philosophy of mind community acknowledged the problem of qualia as a central problem, and today one of the most thriving branches of the field, quite at home with the scientific neighbors in the area of “cognitive studies,” is “consciousness studies.” I will call those who take the problem seriously the “phenomenologists” although no doubt some will think that term comes with too much baggage; I ask the reader’s indulgence for the sake of exposition.

These new phenomenologists quickly set about demonstrating the inadequacy of functionalism and operationalist approaches in general as comprehensive theories of mind. For any qualitative experience (any quale) that appears to have a causal role in the production of behavior, the argument goes, one can conceive of a being with the functionally equivalent behavior but not the quale (a number of these “absent qualia” arguments, while mostly to the same point, are important enough to get their own discussion in Chapter Three). This might seem to be more of a problem for the advocates of phenomenology than it is for the advocates of operationalism but the opposite is true: if a functionally complete description and explanation of a person lacks any description or explanation of consciousness then functionalism is in the same position as Jackson’s Mary gedanken appears to put physicalism in general: it is not a complete theory of mind. In the literature this is often tagged as the “zombie” problem: the zombie is the allegedly conceivable functionally-complete but consciousness-lacking person.

The phenomenologists, for their part, have often accepted that the problem of consciousness does indeed thwart the naturalization of psychology, just as their older Continental namesakes did (although with considerably less enthusiasm). For example there is a well-developed line that a “property” dualism is inevitable, a kind of epistemological dualism that does not commit one to actual metaphysical dualism. I don’t think so: I think that metaphysical physicalism entails epistemological physicalism, on the grounds that that is the only possible significance of such a metaphysical assertion. There is a group that calls itself the “mysterians,” who argue that we just have to concede that there is no accounting for the relationship between the physical and the phenomenal. And one of the most noted writers on the topic in recent years, David Chalmers, had considerable success with his suggestion that metaphysical dualism is the right theory after all (admittedly the suggestion is made in a Berkeleyan spirit: we should just concede metaphysical dualism and move on). An exception to these various counsels of despair is Searle, and that is another discussion elaborated in Chapter Three. But such exceptions aside, the phenomenologists find themselves with an apparent refutation of operationalist theories but without a coherent theory of their own.

The book you are reading is titled The Mind/Body Problems; the aim of the title is to draw your attention to the plural. The next section is, I think, straightforward, but it is one of the most important sections of the book.

Sunday, October 10, 2010

The first horn of the dilemma in contemporary philosophy of mind

We are put onto the horns of our current dilemma by good arguments, not bad ones. The first line of argument at the heart of contemporary philosophy of mind is exemplified by Alan Turing’s work and his “Turing test,” although perhaps the most important elaboration of the line is that found in the writings of Ludwig Wittgenstein, and the whole approach has its roots in the empiricism of David Hume. Hume argued that we were on firm ground when we could specify experiences that grounded our descriptions of and theories about the world. Hume identified “metaphysics” with the traditional, pre-empiricist philosophy of the “Schoolmen,” as he called them, and he is a typically modern philosopher in that he imagined that he had done away with a great deal of traditional philosophy altogether; at least, that was his aim. He understood that this radical empiricism had radical implications for psychology: he denied that there was anything that could be called the “mind” other than the bundle of perceptions and thoughts introspection revealed, and questioned whether anything that could be called the “self” (other than the perceiving and acting body) could be said to exist, for the same reasons. The “mind” and the “self” were for Hume too close in nature to the “soul,” a putative non-physical entity of the sort that the Enlightenment empiricist wanted to eliminate along with angels and ghosts.

The early 20th century heirs to Hume were the behaviorists. Too often today behaviorism is regarded solely as a failed movement in the history of 20th century psychology, but it is important to appreciate that behaviorism was an attempt, and a very powerful, respectable and still-interesting attempt, to naturalize psychology. It is also important to see that the motivation for developing behaviorism for the empiricist-minded philosophers and psychologists of the time was essentially metaphysical. The ghostly mental entities, figuratively located “in the head,” that were the nominal referents of psychological descriptions and explanations (“beliefs,” “desires,” “attitudes,” etc.) had to be washed out of the ultimate, natural semantics. Behaviorism proposed to naturalize psychology in a simple way: stick to a strict empiricist methodology. If the methodology of science was adhered to, ipso facto psychology would be a science. For present purposes “behaviorism” can be defined as the view that psychological predicates (“He believes that Boston is north of here,” “She is hungry”) refer in fact to observable dispositions to behave: behaviorism is a good example of “theory of mind” as semantics of psychological language.

Behaviorism is a full-blown theory of mind (a general semantics for the psychological vocabulary) that eliminates any reference to anything “in” the mind. On one interpretation this is simply a methodological prohibition on psychologists who aspire to being “scientific” from referring to these “inner” (that is, unobservable) mental states and processes. This version is variously called “soft,” “methodological,” “psychological” or (my coinage) “agnostic” behaviorism. A more radical interpretation is that the inner is an illusion, a historic misconception. This more radical version, the leading avatar of which is Wittgenstein, is variously called “hard,” “metaphysical,” “philosophical” or “atheistic” behaviorism. I don’t want to get sidetracked here by the complicated story about behaviorism’s varieties and the varieties of problems and objections behaviorism encountered. Just now what we need is to grasp and appreciate what was powerfully persuasive (and enduring) in the empiricist line of theory of which behaviorism is an example.

Alan Turing, thinking about computation and computing machines, took a behaviorist approach to the word “intelligence.” He famously proposed the “Turing test”: when an intelligent, sane and sober (that is, a somewhat idealized) person, interacting with a machine, can no longer see any difference between the outputs of said machine and the outputs of an intelligent (etc.) person, at that point we will have to concede that the machine is (actually, literally) intelligent as well. Machine intelligence will have been achieved. “Outputs”: the Turing test is usually conceived as a situation where there are a number of terminals, some connected to people, some to machines. Human interlocutors don’t know which are which. Questions are asked, comments are made, and the terminals respond; that is, there is linguistic communication (there is actually an annual event where this situation is set up and programmers enter their machines in competition). Turing himself never saw a personal computer, but he was conceiving of the test in roughly this way.

However, “outputs” could be linguistic, or behavioral (imagine a robot accomplishing physical tasks that were put to it), or perhaps something else (imagine an animated or robotic face that made appropriate expressions in response to people’s actions and statements). Nor does the candidate intelligent thing need to be an artifact, let alone a computer. I am following Turing in sticking to the deliberately vaguer word “machine” (although it’s true that Turing theorized that intelligence, wherever it was found, was some species of computation). Imagine extraterrestrials that have come out of their spaceship (maybe we don’t know if they’re organisms or artifacts), or some previously unknown primate encountered in the Himalayas, say. The point is that in the case of anything at all, the only possible criteria for predicating “intelligence” of the thing are necessarily observation-based. But after all, any kind of predication, psychological or otherwise, is going to depend for its validity on some kind of observation or another (“The aliens are blue,” “The yeti is tall”), and psychological predicates are no different.

Wittgenstein gives perhaps the most persuasive version of this argument in what is usually called his “Private Language Argument.” Wittgenstein holds that language is necessarily intersubjective. (In fact he thinks that it is not possible for a person to impose rules on themselves, so ultimately he thinks that a private language is impossible, but we don’t need to excavate all of the subtleties of the Private Language Argument to see the present point about the criterion of meaningfulness, which is fairly standard empiricist stuff.) If I say to you, “Go out to the car and get the blue bag,” this imperative works because you and I have a shared sense of the word “blue.” Without this shared, public sense communication is impossible, as when people are speaking a language that one can’t understand. Psychological words, just like any other kind of words, will have to function in this intersubjective way: there will have to be some sort of intersubjective standards or other for determining if the words are being used correctly (the two of us have to be following some shared set of rules of use). Wittgenstein emphasizes the point that, to the extent that psychological predicates are meaningful at all, they cannot be referring to anything “inner,” known only to the subject of predication. And for all of the problems and failures of the original behaviorist movement, it is hard to see anything wrong with this central point.

The term of art for any theory of mind that says that psychological words necessarily have to conform to publicly, intersubjectively established standards and procedures of use in order to make sense is operationalist. Behaviorism is a kind of operationalist theory, and so is functionalism, to which I now turn, so I will use the word “operationalist” when I want to refer to these kinds of theories of mind in general. Operationalist theories appear to handle some critical problems in the philosophy of mind, and constitute the first horn of our dilemma.

Functionalism can be defined as the view that psychological predicates refer to anything that plays the appropriate causal role. That’s a bit gnostic so I will unpack it with some history. Remember that according to Turing there is no difference between a human and a machine qua intelligent being once the machine’s intelligent performance is indistinguishable from the human’s. Acting intelligent, on an operationalist view, is just being intelligent, just as sounding like music is just being music. “Being intelligent” breaks down into many (perhaps indefinitely many) constituent abilities. For an easy example take learning and memory. Part of being intelligent is being able to learn that there are people in the house, say, and to remember that there are people in the house. Both an intelligent human and an intelligent machine will be able to do this. But the human will do it using human sensory organs and a human nervous system, while the machine will have physically different, but functionally equivalent, hardware.

This is the problem of the multiple realizability of the mental. It is one of the deepest metaphysical issues in the philosophy of mind. Around the middle of the 20th century philosophers of mind concluded that a literal reductive materialism, for example the identification of a specific memory with some specific physical state in a human brain, or of remembering itself with some specific physical process in human brains, committed a fallacy often referred to in the literature as “chauvinism.” These philosophers weren’t the first to see this: Plato and Aristotle, for example, not only saw this problem but developed some of the best philosophical analyses of the issue that we have to this day. I want to stress that once we accept any kind of operationalist theory, the problem of multiple realizability is undeniable. Humans, dolphins (among other animals), hypothetical intelligent artifacts and probably-existing intelligent extraterrestrials will all take common psychological predicates (“X believes that there are fish in the barrel,” say, or “X can add and subtract”). In fact the extension of the set of beings who will take psychological predicates is indefinitely large and does not appear to be fixed by any physical laws.

Functionalism, like behaviorism, is motivated by essentially metaphysical concerns, in the case of functionalism by the problem of the multiple realizability of intelligence. Functionalism abstracts away from hardware and develops a purer, more formal psychology: any intelligent being, whatever they may be made of, whatever makes them tick, will have (by definition) the ability to learn, remember, recognize patterns, deduce, induce, add, subtract and so forth. Although the more enthusiastic advertisements for functionalism like to point out (rightly enough, I suppose) that functionalism, in its crystalline abstraction, is even compatible with metaphysical dualism, functionalism is best understood as a kind of non-reductive materialism. That is, while the general type “intelligent beings” cannot be identified with any general type of physical things, each token intelligent being will be some physical thing or another.

This extends to specific mental states and processes as well, of course: the human, the dolphin, the Martian and the android all believe that the fish are in the barrel, they all desire to get to the fish, and they all understand that it follows that they need to get to the barrel. Each one accomplishes this cognition with its physical body somehow, but they all have different physical bodies. There is token-to-token identity (that’s the “materialist” part), but there is no type-to-type identity (that’s the “non-reductive” part). It is not coincidental that functionalism has been the most influential theory of mind in the late 20th century, the age of computer science. The designer (the psychologist) sends the specifications down to the engineers (the computer scientist and the roboticist): we need an artifact with the capacity for learning, memory, pattern recognition and so on. The engineers are free to use any materials, devices and technology at their disposal to devise such an artifact.
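
For readers who find a programming analogy helpful, here is a loose sketch of the token/type point, entirely my own illustration with hypothetical class names rather than anything from the functionalist literature: the abstract specification names no hardware, while each realizer is some concrete thing or another.

```python
# Multiple realizability, very loosely: one functional specification,
# two physically different (pretend) realizations. Token-to-token identity
# (each instance is some concrete object), no type-to-type identity
# (the specification mentions no particular physical kind).

from abc import ABC, abstractmethod

class Rememberer(ABC):
    """The functional spec: anything that can learn and later recall a fact
    counts, whatever it is made of."""
    @abstractmethod
    def learn(self, fact: str) -> None: ...
    @abstractmethod
    def recall(self) -> set: ...

class HumanLikeRememberer(Rememberer):
    # stands in for a realization in neural tissue
    def __init__(self):
        self._traces = set()
    def learn(self, fact):
        self._traces.add(fact)
    def recall(self):
        return self._traces

class MachineRememberer(Rememberer):
    # stands in for a realization in silicon and disk storage
    def __init__(self):
        self._log = []
    def learn(self, fact):
        self._log.append(fact)
    def recall(self):
        return set(self._log)

for agent in (HumanLikeRememberer(), MachineRememberer()):
    agent.learn("there are people in the house")
    assert "there are people in the house" in agent.recall()
```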

This realization that functional descriptions do not analyze down to physical descriptions (a realization at the center of Aristotle’s writings) is a great advance in philosophy of mind. It changes the whole discussion of the metaphysics of intelligence and rationality in a decisive way. In Chapter Two I will argue that operationalist theories in general can indeed provide an intuitively satisfying naturalistic semantics for predications of cognition, intelligence and thinking. To close this introductory discussion of the first horn of the dilemma I will quickly sketch the way operationalist theories also can be deployed to address another core metaphysical problem, the problem of mental representation and mental content. Then I will be able to define one of the most important terms in this book and one of the most difficult terms in philosophy of mind: “intentionality,” the subject of Chapter Two.

“Representational” theories of mind hold that it is literally true that cognitive states and processes include representations. To some this may seem self-evident: isn’t remembering one’s mother, for example, a matter of inspecting an image of her “in the mind’s eye”? Isn’t imagining a tiger similarly a matter of composing one’s own, private image of a tiger? There are reasons for thinking that mental representations must be formal, like linguistic representations, rather than isomorphic, like pictorial representations: How many stripes does your imaginary tiger have? Formal representations, like novels, have the flexibility to include only relevant information (“The Russian guerillas rode down on the French encampment at dawn”), while isomorphic representations, like movies, must include a great deal of information that is irrelevant (How many Russians, through what kind of trees, on horses of what color?). While there are those who argue for isomorphic representation, most representational theorists believe that mental representations must be formal rule-governed sets of symbols, like sentences of language. The appeal of such a model for those who want to approach cognition as a kind of computation is obvious.
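
To make the contrast concrete, here is a toy example, my own illustration with made-up values rather than a claim about how minds actually store anything: a formal, sentence-like record can simply leave the stripe count open, while a picture-like array has to assign something to every pixel, relevant or not.

```python
# Formal vs. isomorphic representation, as a data-structure contrast.

# Formal / sentence-like: only the relevant information is encoded;
# the number of stripes is left unspecified.
formal_tiger = {"kind": "tiger", "striped": True}

# Isomorphic / picture-like: every pixel must take some value, so
# incidental detail (stripe count, angle, background) is forced on us.
WIDTH, HEIGHT = 64, 64
isomorphic_tiger = [[(255, 140, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

print("stripe count specified?", "stripe_count" in formal_tiger)   # False
print("values that had to be filled in:", WIDTH * HEIGHT)          # 4096
```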

Some of these issues between species of representational theory will be developed in Chapter Two, but for introductory purposes four more quick points will suffice: First, why mental representation/content poses a metaphysical problem; second, how we can define the often ill-defined word “intentionality”; third, which psychological words are taken by representational theorists to advert to mental content; and finally, how operationalist theories might be successful in addressing the metaphysical problem of representation.

The metaphysical problem is that symbols per se seem to have a “property,” the property of meaning, which does not appear to be analyzable as a physical property. This issue is addressed in philosophy of language, but language and other symbol-systems are conventional (albeit the products of long evolutionary processes); the location of the ur-problem is in philosophy of mind. Consider the chair in which you sit: it does not mean anything. Of course you can assign some arbitrary significance to it if you wish, or infer things from its nature, disposition and so forth (“All of the world is text”), but that doesn’t affect the point: physical objects in and of themselves don’t mean anything or refer to other things the way symbols do. Now consider your own, physical body: it doesn’t mean anything any more than any other physical object does. Nor do its parts: your hand or, more to the point, your brain, or any parts of or processes occurring in your brain. Your brain is just neural tissue humming and buzzing and doing its electrochemical thing, and the only properties included in our descriptions and explanations of its workings are physical properties. But when we predicate of a person mental states such as “He believes that Paris is the capital of France,” or “She hopes that Margaret is at the party tonight,” these mental states appear to have the property of referring to, of being about, something else: France, or Margaret or what have you. It looks, that is, like the mental state has a property that the physical state utterly lacks.

I can now offer a definition of “intentionality.” In this book, intentionality refers to two deeply intertwined but, I will argue, separable metaphysical problems: 1) the problem of the non-physical property of meaning that is implicit in any representational theory of mind (I will call this “the intentional property” or sometimes “the semantic property”), and 2) the problem of rationality, that is, the apparent lack of any physical parameters that could fix the extension of the set of beings that take predicates of rationality (or intelligence). The intentional vocabulary consists of words like “belief,” “desire,” “hope,” “fear,” “expectation,” “suspicion,” the word “intention” in its ordinary use etc. Psychological predication using these words is often called “intentional psychology” or “belief/desire psychology” or sometimes (usually pejoratively) “folk psychology.” The intentional vocabulary consists of all and only those words that appear to entail mental representation, often referred to in the literature as “that-clauses,” as in {A belief that “Paris is the capital of France”}, or {A hope that “Margaret will be at the party tonight”}.

On a widespread representationalist view these are propositional attitudes, in the respective examples the belief that the proposition “Paris is the capital of France” is true and the hope that the proposition “Margaret will be at the party tonight” is true. It is commonly suggested that, since these intentional states are individuated by the content of the propositions towards which they are attitudes, propositions must be represented somehow in the mind. Such a view commits one to the existence of the non-physical “property” of meaning. This is not (or at least not entirely!) an abstruse argument amongst philosophers: any model of the nervous system as an information-processing device makes this commitment, and the most cursory perusal of standard neuroanatomy textbooks is enough to see that they are saturated with this kind of language.

On my view naturalizing psychology requires that putatively non-physical “properties” be washed out of the final analysis in favor of solely physical properties (the only kind there are). That is, I think that representational theories of mind are false. To use the term of art in theory of mind, I am an eliminativist about mental representation and content. Mental representation will be the main topic of the first part of Chapter Two, which in many ways is the heart of the book. To conclude this introductory section I will briefly sketch how operationalist theories of mind might open the way toward an acceptably naturalistic semantics of the intentional vocabulary.

Behaviorism is also a kind of eliminativist theory: behaviorism eliminates (from the semantic analysis of the psychological vocabulary) anything unobservable at all, including private “inner” mental states and processes. Functionalism, behaviorism’s more sophisticated progeny, acknowledges that states and processes “in the head” (that phrase may be taken either literally or figuratively here) play causal roles in the production of behavior (“The thought of X reminded him of Y and he started to worry that Z…”), but still manages to rid the analysis of psychological predication of reference to mental states (to intentional states, in the present case). It does so by describing cognition functionally rather than physically. Take any sentence that includes an intentional phrase, say: “At the sight of his mother’s photo he remembered the crullers she used to bake, and this memory motivated him to go to the grocery and buy sugar, butter and unbleached flour.” The representationalist is, it would seem, committed to the view that a representation of the crullers is playing a causal role here. But a functional description of the cognitive process can substitute a generic functional-role marker thus: “At the sight of his mother’s photo he X’d, and this X motivated him…etc.” Now “X” can stand for anything that plays the appropriate functional role, and obviously this no longer commits us to the existence of representations or of anything else with non-physical properties.
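
Here is a schematic rendering of that substitution, mine rather than anything in the functionalist literature, with hypothetical names throughout: the generic marker “X” becomes a parameter constrained only by the causal role it plays between stimulus and behavior.

```python
# "At the sight of his mother's photo he X'd, and this X motivated him..."
# Nothing below requires that x be a representation or have any content;
# all that is specified is the role it plays.

def cognitive_episode(photo, x_factory, go_shopping):
    x = x_factory(photo)      # whatever internal state the photo triggers
    return go_shopping(x)     # ...provided that state goes on to produce the behavior

shopping_list = cognitive_episode(
    photo="mother.jpg",
    x_factory=lambda stimulus: object(),   # an uninterpreted internal state
    go_shopping=lambda state: ["sugar", "butter", "unbleached flour"],
)
print(shopping_list)
```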

As I said, the two problems of intentionality (the problem of rationality and the problem of mental content) are further separable. In Chapter Two I will first develop a naturalistic semantics for intentional predication, one that is eliminativist about mental content. Then I will offer a second argument about the problem of rationality that relocates the metaphysical problem outside of philosophy of mind. Both of these arguments acknowledge the validity of the operationalist maxim exemplified by the Turing Test: outside of some formal, intersubjective standards for identifying intelligence through public observation there can be no justifiable reasons for predicating intelligence of a being or for refusing to do so.

Saturday, October 2, 2010

Metaphysics and method

Metaphysics (or “ontology”) is the study of what exists (Aristotle called it the study of “being”). To many people today metaphysics seems anachronistic. Haven’t we settled the issue of what exists, they might ask, in favor of the physical universe? And isn’t natural science the way we produce knowledge about this universe? How could more work in metaphysics possibly generate any persuasive arguments, if “metaphysics” is not simply “physics”? Arguments about the relationship between the mind and the body that aren’t grounded in empirical research of some sort can’t hope to be legitimate in a world awash in data from experimental psychology, neuroscience, computer science, evolutionary biology, linguistics and the myriad interdisciplinary areas of research that today we call “cognitive studies.” Isn’t a metaphysician a mere poet of speculation? Diverting at best, perhaps, but with no hope of producing useful knowledge. That, anyway, is the drift of the reaction one frequently gets when proposing to discuss the metaphysics of the mind/body problem. I will respond to this initial “meta”-challenge in two ways.

First, I completely embrace the spirit, and much of the letter, of this initial objection. I too take it as axiomatic that what exists is the physical universe (by “physical” I mean the universe of matter and energy, or maybe matter/energy; I don’t pretend to be sophisticated about theoretical physics). I don’t think that humans are composed of physical bodies and non-physical souls, like a traditional mind/body dualist. I think that humans are physical through and through, animals that evolved here on earth through a long process of evolution the contingencies of which were, and continue to be, bounded by the constants of biology, chemistry, and physics. I don’t expect to discover that humans are angels, or that the physical universe is an illusion and humans are non-physical spirits, or anything like that.

To be more specific, I have a view about the metaphysics of humans that I call anti-humanism, a loaded phrase if taken as rhetoric, but used here only to mean this: the universe may be as magical, mysterious and mystical as you like; I don’t know anything about the ultimate composition or nature of the universe. I have no interest in making a brief for reduction, as if natural science has already revealed the nature of being, or even potentially could. I don’t even know what we’re talking about when we use that kind of language. My claim is much humbler: whatever nature in general is like, humans are like that. Humans are not miracles, if a “miracle” is defined as an exception to the laws of nature. I hold the anti-humanist view simply because I know of no reason to think that humans are miracles; I stress it because a deeply internalized assumption of human exceptionalism continues to be a barrier to progress across the whole area of cognitive science.

Which brings me to the second response to the objection that metaphysics is anachronistic: it is certainly not true that the contemporary community of educated people embrace anti-humanism as I just defined it. For one thing, a great many college students, most people walking down the street and the overwhelming majority of the world’s population today continue to think that the mind (taken, that is, in the metaphysical sense of some thing that exists) is something distinct from the body or, at least, that mental phenomena cannot be adequately described and explained in wholly physical terms. This conviction has many variants that range from traditional, usually religion-based beliefs about souls, afterlives and so forth to more modern notions, such as the view that a naturalistic view of human nature is perniciously reductive and to be resisted by the liberal-minded, or perhaps that science itself is nothing but a socially-constructed “conceptual scheme” with no particular claim to legitimacy, and so on. For another thing, very sophisticated versions of human exceptionalism exist in the academy (for example among some linguists), such that it is by no means established conventional wisdom that physical science subsumes psychology by metaphysical axiom.

That’s why the topic continues to be exciting. We live in a world where most natural phenomena, from the micro level of atoms, cells and molecules up to the macro level of galaxies and the universe itself, seem to be describable and explicable in physical terms. Physicalism (I mean by this term the metaphysical position that only the physical universe exists) is not totally triumphant (and it is a reasonable point that contemporary physics itself presents us with a still-mysterious and newly-strange picture of the universe). There are ongoing popular metaphysical arguments about evolutionary biology and about cosmology, for example. But it is a striking fact about contemporary culture that psychology (and by extension the behavioral and social disciplines) is still not considered to be integrated into our otherwise generally physicalist metaphysics. Put another way, while many people today have firmly internalized physicalist intuitions about organic life, say, or about distant celestial objects, physicalist theories of mind still meet with resistance even among secular, science-educated people.

A note on terminology: Consider three words, “materialism,” “physicalism,” and “naturalism.” There is a worthwhile philosophical discussion to be had about the relationships and differences between these three concepts. “Materialism” might be the view that matter (matter/energy) is the only thing that exists, “physicalism” the view that the physical universe is the only thing that exists, and “naturalism” the view that only nature exists. Obviously there is a lot of fleshing out to do to make those terms very coherent. I’m not going to work on that here. Connotatively “materialism” sounds the most reductive, “naturalism” the most open-minded, so people inclined to inject ideological considerations will linger on these distinctions, no doubt a potentially useful thing to do but not something that particularly interests me in the present context. I am going to paint with a broader brush and assume that my charitable readers will get the larger drift: these words all point towards a broadly naturalistic monism, versus metaphysical heterogeneity or dualism. I will mostly, but not exclusively, use the term “physicalism” and stick with my definition that this is the view that only the physical universe exists.

Metaphysics is not something that is replaced by physics. Physicalism is a particular metaphysical position. Everyone has metaphysical assumptions, articulated or not, whether they want them or not, and they always will. The person who chafes at the idea that there is still a need for explicitly metaphysical discussion is claiming that our shared metaphysical assumptions are currently stable, not that “there is no such thing as metaphysics,” although they may unreflectively put it that way. I agree that physicalism is currently the ruling metaphysical paradigm, at least among cognitive scientists, psychologists, philosophers and so on, and I too labor within this paradigm, albeit with some important qualifications that are discussed in the second part of Chapter Two.

It’s not enough, however, to just assert one’s acceptance of a broadly physicalist, or naturalist, metaphysical attitude. Our work here is not done. For dualists, including many philosophers working in the classical, medieval and early modern periods of European philosophy through to 19th century transcendental idealism, the “mind/body problem” was the problem of explaining the interaction of the physical with something non-physical. Plato and Descartes are examples of excellent philosophers who saw the problem this way. For the physicalist the problem is different (and, yes, there are third ways, such as Spinoza’s “double aspect” approach, that are important and useful and that will be discussed where appropriate). The physicalist wants to naturalize psychology: to integrate psychology into the broader naturalistic worldview. And that we have yet to do. To see this, I’ll conclude this preliminary discussion of metaphysics in general and get to the specific problems of philosophy of mind.

If “metaphysics” still sounds too far out, consider the relationship between metaphysics and semantics. Say someone is talking about angels. The word “angel” is the subject-noun in their sentences. In good faith (pardon the pun), we would like to understand what they are saying to us. “What are you referring to,” we might ask, “by this word ‘angel’?” If, as one might suspect, it turns out that our friend means to be referring to non-physical entities, some people will demur because they doubt the existence of non-physical entities per se. This is the source of the old cliché of the philosopher as someone who insists that we “define our terms”: philosophers are sensitive to the metaphysical assumptions revealed by language (and of course we’re all philosophers, just as we’re all musicians; these are basic questions that everyone asks, just as everyone enjoys a good tune). In that sense, metaphysics and semantics come to the same thing.

It’s not just entities but also properties that are part of existence, or of what we refer to as existing. In philosophy of religion (a useful example just because most people have thought about it) we find references to metaphysically interesting entities: God, angels, heaven and so on. In other areas, such as aesthetics and ethics, we find references to all sorts of metaphysically interesting properties: beauty, goodness, justice etc. Just as we might be skeptical of the existence of non-physical entities, so we might be skeptical of the existence of non-physical properties. That is, a physicalist might hold that all properties are physical properties (that is, that only physical properties exist) just as they hold that only physical entities exist. In fact we can just define “philosophy” as metaphysics and epistemology. Any discourse that makes metaphysically and epistemologically unusual references comes under the purview of the philosopher. These topics include (but are not limited to) aesthetics, ethics, logic, mathematics, politics and society, religion, the nature of science itself…and psychology.

Another introductory note on terminology: By “psychology” I don’t mean the academic discipline that goes by that name and in which specialists receive formal degrees. I mean the ordinary, everyday discourse, practiced by everyone, that we traditionally use to explain behavior. “Why did he leave the room?” “He wanted a drink of water.” “Does she like chocolate?” “Yes she does.” “Are you in pain?” “I’m OK.” I mean nothing more nor less by “psychology” than that kind of talk, and the assumptions and conceptualizations that underlie it. That is where we find the metaphysically interesting language.

Psychological language makes frequent reference to all sorts of mental entities and properties. In religious talk (to stick with the most obvious analogy) we find words like “God,” “angels,” “faith,” “prayer” and so on. When I say these are metaphysically interesting words I mean that they don’t seem to function in the way that grammatically similar words from more quotidian discourse do. I can understand “The table is in the room” without any metaphysical trouble. I cannot do the same with “God is everywhere.” English speakers typically have no trouble understanding the use of the existential verb “to be” in sentences about tables, but they do in sentences about angels. For an epistemological example, one does not have faith in one’s religion in the same way that one has faith in politicians (or vice versa!). The epistemic verb is being used in an unusual way. Philosophical enquiry is needed.

Similarly in psychology we refer to, among other things, “beliefs,” “desires,” “hopes” and “fears,” also to “pains,” “sensations,” “textures” and “tastes.” (In what follows I will identify and distinguish two fundamental kinds of psychological words. That will be some of the major business of the book, so for now I will just stick to this preliminary discussion of metaphysics and method.) It’s clear enough that these are metaphysically interesting words. I don’t “have” beliefs and desires the same way that I have nickels and dimes. I don’t even have any fixable number of beliefs and desires, whereas the current number of nickels and dimes in my possession is all too definite. Sometimes I can see that my friend has a bag in his hand and sometimes I can see that my friend has a pain in his hand, and seeing it (with my own eyes) is (outside of a philosophy class!) all I normally need in order to know it, but so different is seeing his pain from seeing his bag that many people are willing to say (but usually only inside of a philosophy class) that I don’t actually know about his pain at all.

To naturalize psychology would be to give an account of the meaning of psychological words such that they were no longer metaphysically interesting. Another way to put this is that a physicalist theory of mind is identical to physicalist semantics of psychological words. The whole enterprise is revealed to be much less outlandish than it initially appears once one sees that we are talking as much about the word “mind” as we are talking about minds, as much about the word “belief” as we are talking about beliefs, as much about the word “sensation” as we are talking about sensations, and so on. Nor do I have any aspiration, as some contemporary philosophers of mind do, to change the way we talk (in fact I will explain my reasons for some doubts that we could even in principle do this).

It’s not ordinary psychological language that has problems. Ordinary psychological language is chugging along out there in the real world just fine. It’s contemporary philosophy of mind that has problems. As a philosopher of mind myself I experience this personally: there are currently two types of theories of mind, in response to two problems, and I find the arguments that motivate the respective problems persuasive, and I find the respective theories that are offered in response to the two problems to be intuitively more or less satisfying. It’s just that they are apparently mutually contradictory. It is that experience of internal contradiction in my own thinking about the subject that has motivated the present book. Without it I wouldn’t, couldn’t have developed a contribution even possibly original enough to justify writing yet another book on the philosophy of mind after several decades when we have been awash in them, many of them written by some of our best living philosophers.