FUNCTIONALISM

Philosophy of Mind
Curtis Brown

There are (at least) two main sources of functionalism as a response to the mind-body problem. I will describe both of them, and then offer a general characterization of functionalism and a discussion of some of the main objections and responses.

First Source: Turing Machines

Like most views about the mind, functionalism is motivated in part by an analogy with a certain sort of machine--in this case, the modern digital computer. But it is useful to approach the view by discussing, not IBM mainframes or Apple Macs, but Turing machines. Turing machines are not real machines (although it is easy to simulate or, more accurately, instantiate one--except that in theory Turing machines have infinite memory, and no actual machine does). Turing machines are useful in thinking about functionalism because there is good reason to think that any function which is computable at all is computable by a Turing machine (though probably much more slowly and clunkily than by your typical Macintosh or IBM), and because it is easier to see some of the basic concepts of computing in the very simple case of the Turing machine.

What is a Turing machine? To begin with, imagine an infinitely long tape, divided into squares. This serves as the machine's memory. Then think of the machine as a kind of box which sits over, and scans, one square of the tape at a time. Each square of the tape may either be blank or have a 1 written on it; we can think of B (for "blank") and 1 as the possible inputs to the machine.

The machine also has a number of possible outputs, a number of things it can do. It can:

  1. write a 1 on the square it is scanning;
  2. erase the square it is scanning (that is, write a B);
  3. move one square to the left;
  4. move one square to the right.

We could also let the machine print other symbols, e.g. the letters of the alphabet, but it turns out that this will not increase the number of functions it can compute (since any letter of the alphabet could be encoded as a series of ones and blanks, rather in the fashion of Morse code). So let's leave it as it is.
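To make the encoding idea concrete, here is a minimal Python sketch (the particular code assignments are invented for illustration; any unambiguous assignment would do): each letter becomes a run of 1s, and a single blank separates adjacent letters.

    # A made-up code in the spirit of Morse: each letter is a run of 1s,
    # and a single blank (B) separates adjacent letters.
    CODE = {'a': '1', 'b': '11', 'c': '111'}   # extend to the whole alphabet as needed

    def encode(word):
        """Encode a word as a string over the two symbols '1' and 'B'."""
        return 'B'.join(CODE[letter] for letter in word)

    print(encode('cab'))   # prints: 111B1B11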

We have now described the inputs to the machine and the outputs from it; we need one more feature to completely describe a Turing machine, namely various states that the machine can be in. The state the machine is in determines what output it will produce, and what state it will go into, when presented with a certain input. I'll say more about the states of the machine in a bit, but that's enough for now.

Now let us consider how to "program" a Turing machine. There are a number of different, but equivalent, ways to specify a program for a Turing machine. We can provide a list of instructions, or a machine table, or a flow graph. Any of these can provide the same information about the machine. To see how they work, let us consider a very simple program, one designed to produce a machine which will, when started on a blank tape, write three ones and then stop.

lists of instructions

To begin with, we can provide the following set of instructions for the machine:

  1. If you are in state 1 and see a B, print a 1 and remain in state 1.
  2. If you are in state 1 and see a 1, move one square to the right and go into state 2.
  3. If you are in state 2 and see a B, print a 1 and remain in state 2.
  4. If you are in state 2 and see a 1, move one square to the right and go into state 3.
  5. If you are in state 3 and see a B, print a 1 and remain in state 3.

We can simply stop the instructions here; the result will be that when the machine is in state 3 and sees a 1 it will halt, since it has no instructions telling it to do anything else.
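In fact, the five instructions can be transcribed directly into a short program. Here is a minimal Python sketch (the representation is mine, not part of the standard formalism); the rules dictionary is just the instruction list above in another form, and it contains exactly the information the machine table below will display:

    # Rules: (state, symbol) -> (action, next state), transcribing
    # instructions 1-5 above.  'B' marks a blank square.
    rules = {
        (1, 'B'): ('write 1', 1),
        (1, '1'): ('right',   2),
        (2, 'B'): ('write 1', 2),
        (2, '1'): ('right',   3),
        (3, 'B'): ('write 1', 3),
        # No entry for (3, '1'): the machine halts, as in the text.
    }

    def run(rules, start_state=1):
        tape = {}            # square index -> symbol; missing squares are blank
        pos, state = 0, start_state
        while (state, tape.get(pos, 'B')) in rules:
            action, state = rules[(state, tape.get(pos, 'B'))]
            if action == 'write 1':
                tape[pos] = '1'
            elif action == 'right':
                pos += 1
        return ''.join(tape.get(i, 'B') for i in range(min(tape), max(tape) + 1))

    print(run(rules))        # prints: 111

The machine halts exactly as described: when it reaches state 3 and sees a 1, no rule applies, so the loop ends.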

machine tables

We can capture exactly the same information about the machine by constructing a machine table. Let us list possible inputs to the machine along the side of the table, and the states the machine may be in across the top. Then the machine just described will look like this:
 

          state 1                      state 2                      state 3
    B     write a 1; stay in state 1   write a 1; stay in state 2   write a 1; stay in state 3
    1     go right; go to state 2      go right; go to state 3      [do nothing]

As before, leaving the last box without any instruction ensures that the machine will simply stop after printing the third one.

flow graphs

The final way in which we can represent the machine's instructions, and in some respects the most illuminating one, is to write a flow graph. A flow graph for the machine we have been considering might look like this:

[Flow graph: three circled states, 1, 2, and 3; each state has a looping arrow labeled B:1, and arrows labeled 1:R lead from state 1 to state 2 and from state 2 to state 3.]

Here circled numbers represent states of the machine; the arrows indicate which states the machine moves from and into; and the B:1 written over the first arrow (for example) means that if the machine is in state 1 and "sees" a B (that is, a blank), it prints a 1. The arrow shows that, having done so, the machine stays in state 1. The 1:R over the second arrow means that if the machine is in state 1 and "sees" a 1, it moves one square to the right; meanwhile the arrow shows that, having done so, it moves into state 2. And so on.

What is the Point?

There are many possible points to a discussion of Turing machines; they are of special interest for computer science and for logic. But our purpose is to clarify the conception of the human mind involved in functionalism. And for this purpose, the crucial thing is the nature of the states of the Turing machine. To put it in a nutshell, Turing Machine states are fully definable in terms of inputs, outputs, and other machine states. Nothing whatsoever needs to be said about the nature of the machine's construction; one could use many different sorts of material and many different kinds of construction. But to completely explain a Turing machine state one need only say what the machine will output if it is in that state and receives a certain input, and also how it can get into that state and what other states it can go into from it. For instance, in our first example (the machine that prints three ones), state 1 of the machine is just the state in which the machine, if it sees a B, writes a 1 and stays in the same state, and in which, if it sees a 1, it moves one square to the right and goes into the next state. That's all there is to it: that's what state 1 is. So we can say that state 1 is functionally defined: what matters is what the machine does when it is in that state, not the mechanics of how it does it or what it is made of.

The fundamental idea of functionalism about the human mind is that mental states are definable in just the same way: what matters is not the details of our neurophysiology, but rather how we function when we are in a particular mental state. In the case of people, the inputs are perceptual and the outputs are behavioral, but the basic idea is the same, namely: mental states are definable in terms of (perceptual) inputs, (behavioral) outputs, and other functional states.

Clearly, there are similarities between functionalism and behaviorism. The difference is that functionalism takes mental states more seriously than behaviorism. Notice that we cannot define a state of a Turing machine just in terms of inputs and outputs: it is essential to also include the relations between different states. Functionalism's main difference from (philosophical) behaviorism is that it thinks the same thing is true of mental states: they cannot be fully defined in terms of behavior, or even in terms of relations between stimuli and behavior: the definition of a mental state must also make reference to other mental states. (It thus acknowledges the force of the criticism of behaviorism discussed by Churchland on pp. 24-25.)

Does this make the attempt to explain mental states functionally circular, since the explanation of any particular state will presuppose other mental states? It certainly sounds circular to say that a mental state is definable in terms of inputs, outputs, and other mental states: it looks like the notion we want to define reappears in the definition. But in fact it is not circular. The idea is that we could simultaneously define all the states at once, in which case they would all be defined in non-mental terms. Again, note the parallel with Turing machine states: a definition of any of them will mention at least some of the others, but we could define them all simultaneously and without circularity: the states are those conditions x, y, and z of the machine such that if the machine is in x, it will . . . and if it is in y, it will . . . (and so on).
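We can make this concrete with a small Python sketch (an illustration of mine, not drawn from any source): take the three-ones machine, strip the numerals off its states, and recover which state plays which role by testing the whole pattern of transitions at once.

    from itertools import permutations

    # The three-ones machine again, but with its states given as opaque,
    # unlabeled tokens rather than the numerals 1, 2, 3.
    rules = {
        ('p', 'B'): ('write 1', 'p'), ('p', '1'): ('right', 'q'),
        ('q', 'B'): ('write 1', 'q'), ('q', '1'): ('right', 'r'),
        ('r', 'B'): ('write 1', 'r'),
    }

    def realizes(x, y, z, rules):
        """True if states x, y, z jointly play the roles of states 1, 2,
        and 3 -- the whole machine table transcribed as one condition."""
        return (rules.get((x, 'B')) == ('write 1', x) and
                rules.get((x, '1')) == ('right', y) and
                rules.get((y, 'B')) == ('write 1', y) and
                rules.get((y, '1')) == ('right', z) and
                rules.get((z, 'B')) == ('write 1', z) and
                (z, '1') not in rules)

    # "State 1" is simply whatever occupies the first slot of the unique
    # triple satisfying the condition: a simultaneous, non-circular definition.
    for triple in permutations(['p', 'q', 'r']):
        if realizes(*triple, rules):
            print(triple)    # prints: ('p', 'q', 'r')

Nothing about what 'p', 'q', and 'r' are made of matters; being state 1 just is occupying the first slot of the one triple that satisfies the whole description.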

Second Source: Defining Theoretical Terms

The second source of functionalism is a general view about the meaning of theoretical terms, not just theoretical terms for mental states. The general idea is that theoretical terms are implicitly defined by the theories in whose formulation they figure. For instance, suppose my entire theory about glurks is that they are round, large, and were dropped on Earth by the Martians. That exhausts all of my beliefs about glurks. Then, the idea goes, the meaning of the term "glurk" is just: the things which are round, large, and were dropped on Earth by the Martians. If there are no such things, then there are no glurks; if there are such things, then they are what "glurk" refers to. (Churchland describes this view of the meaning of theoretical terms as the "network theory of meaning" on p. 56, and considers its potential application to psychological terms on pp. 58-59. We haven't read that yet, but will, and you might find it helpful to look at it now.)

David Lewis has given a more detailed and precise account of this notion of meaning. Let me illustrate how Lewis's account works by means of a simplified example. I will explain how Lewis defines theoretical terms in a series of stages; we begin with a miniature common-sense psychological theory. (The general idea is that terms in ordinary language, like "belief" or "desire" or "hunger," should get their meaning from our common-sense theories about them--since we all know what they mean, and common-sense theories are all most of us have about them.)

Step 1: Miniature Psychological Theory

Consider, then, this miniature psychological theory:

    When someone is hungry and sees food, he eats it and becomes sleepy. When someone is sleepy, he tends to fall asleep.

Step 2: Make All Theoretical Terms Names

For convenience, we reformulate the theory so that all the psychological terms occurring in it are names:

    When someone has hunger and sees food, he eats it and acquires sleepiness. When someone has sleepiness, he tends to fall asleep.

Step 3: One Big Sentence

Turn the theory into one long conjunctive sentence:

    When someone has hunger and sees food, he eats it and acquires sleepiness, and when someone has sleepiness, he tends to fall asleep.

Step 4: Define a Predicate

Now, we can define a two-place predicate, T. To see how this is done, let us digress for a moment. Consider the sentence

    Joe is a foot taller than Herbert.

We can consider this sentence as having three parts: the names 'Joe' and 'Herbert', and the two-place predicate '_____ is a foot taller than _ _ _ _'. If we call the predicate F, then we can write our initial sentence as F(Joe, Herbert). Similarly,

    Nancy is a foot taller than Susan

would be written as F(Nancy, Susan): same two-place predicate, but different names.
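In programming terms, a two-place predicate is simply a function of two arguments that returns true or false. A minimal Python sketch (the heights are invented for illustration):

    # Hypothetical heights, in inches, purely for illustration.
    height = {'Joe': 76, 'Herbert': 64, 'Nancy': 70, 'Susan': 58}

    def F(a, b):
        """The two-place predicate '___ is a foot taller than ___'."""
        return height[a] - height[b] == 12

    print(F('Joe', 'Herbert'), F('Nancy', 'Susan'))   # prints: True True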

We can do exactly the same thing with our miniature theory. We can think of it as consisting of the names 'hunger' and 'sleepiness' and the two-place predicate:

    When someone has _____ and sees food, he eats it and acquires _ _ _ _, and when someone has _ _ _ _, he tends to fall asleep.

If we abbreviate this long two-place predicate as T, then we can write our theory as T(hunger, sleepiness). Lewis calls this the postulate of our initial theory.

Step 5: Ramsey Sentence

Now we can write the Ramsey sentence of our theory. The Ramsey sentence of F(Joe, Herbert) would be the sentence

    ∃x1 ∃x2 F(x1, x2)

which says that there are people x1 and x2 such that the first is a foot taller than the second. (The symbol '∃', taken from first-order logic, is the "existential quantifier." You can read "∃x1" as: "There exists something, call it x1, such that . . .")

Similarly, the Ramsey sentence of T(hunger, sleepiness) is the sentence

    ∃x1 ∃x2 T(x1, x2)

which says that there are two states which satisfy the theory--that is, that there are two states, x1 and x2, such that when someone has x1 and sees food, he eats it and acquires x2 (and so on).

Clearly, anyone who thinks the original theory is true will think its Ramsey sentence is true.
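The Ramsey sentence has a direct computational analogue. In the following minimal Python sketch (a toy model of mine: the states, and which pair satisfies the theory, are simply stipulated), the Ramsey sentence comes out true just in case some pair of states satisfies the predicate T:

    from itertools import permutations

    STATES = ['state_a', 'state_b', 'state_c']    # hypothetical inner states

    def T(x1, x2):
        """Toy stand-in for the theory: true of <x1, x2> just in case,
        when someone has x1 and sees food, he eats it and acquires x2
        (and so on).  Here we simply stipulate which pair satisfies it."""
        return (x1, x2) == ('state_a', 'state_b')

    # Ramsey sentence:  there exist x1 and x2 such that T(x1, x2)
    print(any(T(x1, x2) for x1, x2 in permutations(STATES, 2)))   # True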

Step 6: Modified Ramsey Sentence

We can now introduce the notion of a modified Ramsey sentence. I will write the modified Ramsey sentence of F(Joe, Herbert) like this:

    (∃!<x1, x2>) F(x1, x2)

This says that there is exactly one pair of things x1 and x2 such that the first is a foot taller than the second. Notice that we could very well believe that F(Joe, Herbert) without believing that (∃!<x1, x2>) F(x1, x2). But Lewis claims that if our psychological theory T(hunger, sleepiness) contains everything we know about hunger and sleepiness, and we use the terms 'hunger' and 'sleepiness' as names of psychological states, then we must think that there is only one pair of states of which the theory is true. We must think, that is, that

    (∃!<x1, x2>) T(x1, x2)

Step 7: Define the Theoretical Terms

We now have all the materials we need to explain how Lewis thinks psychological terms are defined. Lewis claims that there are two statements which together capture all there is to capture about the meaning of our psychological terms.

The first of these two sentences is the modified Carnap sentence:

    (∃!<x1, x2>) T(x1, x2) → T(hunger, sleepiness)

This says that if there is exactly one pair of states such that everything the theory says is true of them, then those states are hunger and sleepiness.

The second sentence is, roughly, this:

    ¬(∃!<x1, x2>) T(x1, x2) → <hunger, sleepiness> = ∅

(We are using '∅' to refer to the empty set, and '¬' for negation.) That is, if there isn't exactly one pair of states of which the theory is true, then there is no such thing as hunger or sleepiness. (This will be so either if there is no pair of states of which the theory is true or if there is more than one such pair.)

These two statements together, Lewis points out, amount to a definition of 'hunger' and 'sleepiness', since they are logically equivalent to:

    <hunger, sleepiness> = (℩<x1, x2>) T(x1, x2)

(Here '℩' is the definite-description operator: '(℩<x1, x2>) T(x1, x2)' denotes the one and only pair of states of which the theory is true, if there is exactly one, and the empty set otherwise.)

It is a little trickier to define 'hunger' or 'sleepiness' separately. (This is related to the worry I mentioned above that functional definitions may appear circular.) But we can do it like this:

    hunger = the first member of (℩<x1, x2>) T(x1, x2)
    sleepiness = the second member of (℩<x1, x2>) T(x1, x2)

(We might think of this in English as follows. The definition

    <hunger, sleepiness> = (℩<x1, x2>) T(x1, x2)

says that the ordered pair <hunger, sleepiness> is the one and only ordered pair such that the theory is true of its members. Now, hunger is the first member of the ordered pair <hunger, sleepiness>. So hunger is the first member of the one and only ordered pair of whose members the theory is true--and this is just what the definition of hunger above says.)
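Continuing the toy model from the sketch above (again an illustration of mine), the modified Ramsey sentence and Lewis's definitions can be rendered as a uniqueness check plus extraction of the members of the unique pair, with None playing the role of the empty set:

    from itertools import permutations

    STATES = ['state_a', 'state_b', 'state_c']    # as in the sketch above

    def T(x1, x2):                                # toy theory, as above
        return (x1, x2) == ('state_a', 'state_b')

    def unique_realizer(T, states):
        """The modified Ramsey condition: return the one and only pair
        satisfying T, or None (Lewis's empty-set case) if there is no
        such pair or more than one."""
        pairs = [p for p in permutations(states, 2) if T(*p)]
        return pairs[0] if len(pairs) == 1 else None

    pair = unique_realizer(T, STATES)
    hunger     = pair[0] if pair else None   # 'hunger' = first member
    sleepiness = pair[1] if pair else None   # 'sleepiness' = second member
    print(hunger, sleepiness)                # prints: state_a state_b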

Functionalism

Lewis's view about the meaning of theoretical terms, applied specifically to ordinary-language psychological terms, yields something very similar to the account we have seen that a consideration of Turing machines leads to. In fact, we can apply Lewis's method for defining theoretical terms to the description of a particular Turing machine "program" to yield a functional definition of the machine states involved. We would simply begin with the list of instructions for the machine in place of the miniature theory I began the example above with, and go through precisely the same steps as before.

Let me repeat the characterization of functionalism mentioned earlier. Functionalism is the view that mental states are functional states, where a functional state is a state definable in terms of inputs, outputs, and other functional states.

Lewis's view does not actually restrict the terms in which a mental state can be defined to inputs, outputs, and other functionally defined mental states, so in a sense his version of functionalism is more general than the one I have just given. But it is very similar, since it seems that the common-sense psychological theories out of which Lewis formulates his definitions will contain mainly references to such things as behavior and perception. (A slight complication is that Lewis himself argues that his procedure for defining theoretical terms provides a defense of the identity theory, rather than of functionalism. But the reductions Lewis expects are what Churchland calls "domain-specific" (see pp. 41-42), so that pain in a human might be one sort of physical state while pain in a Martian might be another. So, for Lewis, functionalism provides an account of the essential nature of mental states; it is just that Lewis expects the identity theory to be able to specify what sort of physical mechanism performs the relevant function in a given type of organism. As Churchland says, "functionalism is not so profoundly different from the identity theory as was first made out.")

Functionalism and Qualia

The suspicion persists, with functionalism as with other forms of materialism, that something is being left out. One way of describing what one fears is left out is by saying that functionalism does not seem to account for our own inner experience of our mental states. In this context, philosophers often speak of qualia (otherwise known as "raw feels"). Qualia are supposed to be the way things are experienced by us, the raw conscious awareness associated with a given mental state. For instance, I might get a certain quale when looking directly at a particular orange shade in normal light.

Now, the suspicion that qualia are simply left out or not accounted for by functionalism may be expressed by saying that it seems as though two people could be functionally identical and yet have different qualia. A specific version of this suspicion is the inverted spectrum problem, namely the view that it is possible that two people could have radically different visual experiences when looking at the same colors: for example, one might always have an experience just like the other would have if looking at the complementary color. It seems entirely intelligible that this could happen even though in every functionally describable way the two people were identical. Suppose Fred has experience x when he looks at something red and experience y when he looks at something green, while Fgreen has experience x when he looks at something green and experience y when he looks at something red. Then both Fred and Fgreen will associate the word 'red' with red things and the word 'green' with green things, even though they associate different experiences with these colors. It seems that they could be so outwardly similar that there would be no conceivable way to tell that they were having different qualia. (For example, asking them will do no good at all, since the only words they have to describe their qualia are borrowed from the objective properties the qualia are associated with. Fred will call x an "experience of red" and Fgreen will call y an "experience of red", so that if you show both of them something red and ask what experience it produces, they will both say: "an experience of red," even though they are having qualitatively different experiences.)

How should functionalism respond to the problem of inverted qualia? There are two main possibilities. First, one could argue that, despite the fact that the inverted qualia situation seems possible, it is not really possible. As we have seen before, what one can imagine is not necessarily a good guide to what can actually happen. And one might construe experiments with inverting goggles as supporting the view that, as two people become more functionally similar, their experiences also become more similar. (For an interesting discussion of the inverted goggles experiments and their bearing on this sort of issue, see Stephen N. Thomas, The Formal Mechanics of Mind, Cornell University Press, 1978, pp. 194-212.)

Second, one could concede that functionalism cannot account for qualia, but argue that in fact qualia are so unimportant in psychological explanation that this is no great loss. For example, it is tempting to think that the word "pain" is simply a name for a certain sort of quale. But there are so many different sorts of experiences that get classified as pain that what ties them all together must be something like their functional role rather than any essential similarity in their qualitative character. (Also, conversely, the very same qualitative character may count as a different mental state in different contexts, if its functional role is different in the two cases.) If so, then even if functionalism leaves qualia out, it does not leave out sensations like pain, pleasure, and so on, since these are defined not in terms of their qualitative character but in terms of their functional role.

Of course, even if leaving out qualia is not leaving out any particularly central part of our mental apparatus, it still seems to leave out something (if the inverted qualia case is possible). And this may seem to leave the door open for dualism, even though the parts of our mental life that dualism was invoked to account for would not be very important. (Indeed, Frank Jackson has argued for this view in an essay entitled "Epiphenomenal Qualia"; see Churchland's reference on p. 35.) A different suggestion is offered by Churchland: maybe functionalism is right about mental states other than qualia, but the identity theory is right about qualia themselves (compare Churchland's discussion on p. 40).

Review of the Main Approaches to the Mind-Body Problem

We can think of the ontological question in the philosophy of mind, namely the mind-body problem, as essentially the question: "What is common to all _______s in virtue of which they are _______s?" (where the blanks are to be filled in with the name of some sort of mental state--pain, say, or belief). Or, to put essentially the same question in a different way, "What is the essential nature of _______?" Most of the views we have discussed can be understood as responses to this question.

Dualism of course gives the answer that our mental states are states of a nonphysical substance. But dualism is not well enough articulated to provide answers to the more specific questions listed above. (It is a good idea to keep this in mind, especially if you find the various materialist answers unpersuasive: dualism simply offers nothing in response to specific questions about the nature of mental states, though it does give an answer to the very general question of what sort of stuff is responsible for mental states.)

Behaviorism answers our question in a word, namely: behavior. What is common to all pains in virtue of which they are pains, for example, is that people in pain tend to act alike.

The (type-type) Identity Theory says that neurophysiology is what all pains share that makes them pains. To be in pain is (to radically oversimplify) to have one's C-fibers firing. (Notice that the token-token identity theory doesn't answer this question: the question is about types of mental states, and the token-token identity theory is a theory only about tokens of mental states; it says nothing about types. The token-token identity theory is compatible with functionalism, and even with behaviorism.)

Functionalism, finally, says that what all pains share in virtue of which they are pains is their functional role--namely, their relation to inputs, outputs, and other functionally defined mental states. Since this handout is mainly concerned with functionalism, this may be a good place to quit.



Last update: September 18, 2009. 