Notes on John Searle, The Rediscovery of the Mind
Chapter 9

Searle argues that thinking cannot be computation. There are several reasons for this.

The basic idea of cognitive science is often described as the idea that the mind is to the brain as software is to hardware. If this is correct, then the study of the mind is essentially the study of what "programs" we are following in reasoning and other mental processes. Given that the same programs can be run on very different machines, cognitive science on this view will not be particularly concerned with the nature of the brain (the "hardware," or in this case "wetware"), but only with the information processing it performs. This is closely related to the functionalist idea that, since functional states are "multiply realizable," the study of the mind should focus on the mind's functional organization, not on the precise neurobiological implementation of that organization in humans.

Searle thinks that all of this is a deep mistake. He offers four main criticisms, in addition to the criticism he earlier offered in his "Chinese Room Argument," which I will list here as 0.

0. Semantics is not intrinsic to syntax. (The Chinese Room argument.)

The idea here is that even if the mind is an information processing device, that can't be the whole story: even if we are running programs, we can't have intentional states simply by virtue of running a particular program. Why? Because programs are just manipulations of syntactic units, and no amount of syntactic manipulation can provide a semantics: moving symbols around according to rules can't give those symbols meaning.
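To see the point in miniature, here is a toy sketch (mine, not Searle's) of pure symbol manipulation, written in Python. The rulebook is a hypothetical stand-in for the book of rules in Searle's thought experiment, and its entries are invented placeholders; nothing in the program assigns meaning to any symbol.

    # A toy "Chinese Room": symbol shuffling by table lookup, no semantics.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",      # in fact: "How are you?" -> "Fine, thanks."
        "今天天气好。": "是的，很晴朗。",  # in fact: "Nice weather." -> "Yes, very sunny."
    }

    def room(symbols: str) -> str:
        # Follow the rules blindly; no step here "knows" what any string means.
        return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # an apt reply, produced with zero understanding

The replies would be perfectly interpretable by a speaker of Chinese, but the interpretation comes entirely from outside the program. That is the sense in which shuffling symbols by rule cannot, by itself, supply meaning.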

1. Syntax is not intrinsic to physics.

Searle begins by arguing that if minds are "multiply realizable" then they are universally realizable; that is, if my psychology could be implemented in a computer or a Martian, then it can also be implemented in a stomach or a wall. (This is related to his view in chapter 7 that naturalistic theories of intentionality have the consequence that everything is intentional.) Hilary Putnam had earlier raised a similar objection.
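Putnam's recipe can be made almost mechanical. Here is a toy reconstruction (mine, not Putnam's text): take any object whose physical state changes over time, pick any run of states you would like an automaton to go through, and define the state mapping after the fact.

    # A Putnam-style trivializing "implementation" (toy reconstruction).
    # Given ANY sequence of distinct physical states and ANY desired run
    # of an automaton, a mapping can be cooked up retroactively.
    def trivial_mapping(physical_trace, formal_run):
        assert len(set(physical_trace)) == len(physical_trace)  # distinct microstates
        return dict(zip(physical_trace, formal_run))

    wall = ["wall-state@t0", "wall-state@t1", "wall-state@t2"]  # hypothetical microstates
    run = ["S0", "S1", "S0"]                                    # the automaton's run
    print(trivial_mapping(wall, run))  # presto: the wall "implements" the run

Notice what the trick depends on: the mapping fits one actual trace and supports no counterfactuals about what the wall would have done given different inputs. That is exactly the loophole the definition below is designed to close.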

I think the best response is to give a detailed account of what realization or implementation consists in; once the implementation relation is precisely defined, it will be obvious that not every physical object implements every program. This project is carried out convincingly, in my view, in David Chalmers, "Does a Rock Implement Every Finite-State Automaton?" (Putnam had argued that the answer is "yes"; Chalmers responds by defending the answer "no.")

Chalmers' definition of implementation for a "Combinatorial-State Automaton":

A physical system implements a given CSA if there is a decomposition of its internal states into substates [s^1, s^2, ..., s^n], and a mapping f from these substates onto corresponding substates S^j of the CSA, along with similar mappings for inputs and outputs, such that for every formal state transition of the CSA

    ([I^1, ..., I^k], [S^1, ..., S^n]) -> ([S'^1, ..., S'^n], [O^1, ..., O^l]),

the following holds: if the system is in an internal state [s^1, ..., s^n] and receiving an input [i^1, ..., i^k] such that the physical states and inputs map to the formal states and inputs, this causes it to enter an internal state and produce an output that map appropriately to the required formal state and output.
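To make the definition concrete, here is a small Python sketch (my encoding, not Chalmers') that checks a candidate mapping against a CSA's transition table. States, inputs, and outputs are represented as tuples of substates, and f maps physical substates to formal ones.

    # Checking Chalmers-style implementation against a finite transition table.
    # csa_rules: {(formal_state, formal_input): (next_formal_state, formal_output)}
    # phys_dynamics: {(phys_state, phys_input): (next_phys_state, phys_output)}
    # f: dict mapping each physical substate to a formal substate.
    def implements(csa_rules, phys_dynamics, f):
        m = lambda vec: tuple(f[x] for x in vec)  # map a whole vector of substates
        for (ps, pi), (pn, po) in phys_dynamics.items():
            rule = csa_rules.get((m(ps), m(pi)))
            if rule is None or (m(pn), m(po)) != rule:
                return False  # a mapped transition misses or violates a formal rule
        return True

    # Toy demo: a one-bit toggle, realized by voltage levels.
    csa = {(("S0",), ("I",)): (("S1",), ("O0",)),
           (("S1",), ("I",)): (("S0",), ("O1",))}
    phys = {(("low",), ("pulse",)): (("high",), ("quiet",)),
            (("high",), ("pulse",)): (("low",), ("click",))}
    f = {"low": "S0", "high": "S1", "pulse": "I", "quiet": "O0", "click": "O1"}
    print(implements(csa, phys, f))  # True

Two features of the real definition are missing from this sketch: the physical transitions must be reliable, counterfactual-supporting regularities (not just one recorded trace), and the physical state must genuinely decompose into independent substates. Those requirements are what rule out rocks and walls.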

But Searle's real worry turns out to be different and more basic. The worry is that "syntax is essentially an observer-relative notion." The idea is that a brain or computer process is only a manipulation of syntactic tokens relative to an interpretation from outside the system; from the point of view of physics it's just neurons firing or circuits opening and closing.

2. The homunculus fallacy.

This is closely connected to the first objection. If syntax is observer-relative, then if we say that the brain is performing syntactic operations, we need to say relative to what observer it is doing so. But no one, including the subject, is observing what's going on in the brain. So the temptation is to assume that there is an observer of some sort inside the brain: a homunculus, if you will. The fallacy is that the inner observer explains nothing: its interpreting would itself require a further inner observer, and so on without end.

3. Syntax has no causal powers.

The idea here is roughly that physics and biology and chemistry are where causation goes on; to the extent that syntactic processes "cause" anything, this is only because something biological is doing the real causing.

Hmm. Well, it's certainly true that abstractions do not have causal powers. So in particular a program, which is an abstract entity, cannot cause anything to happen. (Any more than the number five can cause anything.)

However, an implementation of a program has real physical states standing in causal relations to one another. So a program implementation has causal powers.

Does a program implementation have the powers it does because it implements a particular program, or because of the specific biological or physical or chemical properties of the implementation? Well, in a sense both, no? In some sense the causing goes on at the biological (or whatever) level. But the program can be informative in the sense that other implementations of the same program would have caused states that had the same interpretation.

An example: does my income tax program cause my printer to print a copy of my income tax return? Not exactly; the program is an abstraction that can't cause anything. The implementation of the program (the instance running on my computer) is what causes the printer to produce a copy of my return. Moreover, in some sense it does so for purely physical reasons (current flowing through circuits). But suppose I restarted my dual-boot machine under Linux instead of Windows and ran a Linux version of the program. The physical states of the computer would not be exactly the same as they were under Windows, but the Linux implementation would also cause my printer to print my tax return. So describing the machine states as an implementation of the tax program, rather than describing them in purely physical terms, allows me to capture generalizations I could not capture in purely physical language. (Basically this point is made by Frank Jackson, who talks about "program explanations" in a number of writings.)
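Jackson's point can be seen in a small toy case (my example, not Jackson's): two routines that differ step by step at the "machine" level but count as implementations of the same program, here computing a made-up 10% flat tax, rounded up to the cent.

    # Two physically different realizations of "the same program": a
    # step-by-step loop and a closed-form computation of a (made-up)
    # 10% flat tax, rounded up to the next cent.
    def tax_loop(income_cents: int) -> int:
        owed = 0
        for cent in range(income_cents):  # one "machine step" per cent
            if cent % 10 == 0:
                owed += 1
        return owed

    def tax_formula(income_cents: int) -> int:
        return (income_cents + 9) // 10   # same function, different dynamics

    # The program-level generalization: any implementation yields the same
    # return, however different the underlying "machine states" are.
    assert tax_loop(123456) == tax_formula(123456) == 12346

The generalization "both runs produce the same return" is invisible at the level of individual physical steps; that is the sense in which the program-level description earns its explanatory keep.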

The issue of causation at various levels of explanation, including mental causation, is a very rich, tricky, and interesting topic. One good discussion among many (and more accessible than most) is Jaegwon Kim, Mind in a Physical World (MIT Press, 1998).

4. The brain does not do information processing.

According to Searle, descriptions of the brain as processing information are at too high a level of abstraction to capture anything interesting.



Last update: April 26, 2008.
Curtis Brown | Philosophy of Mind | Philosophy Department | Trinity University
cbrown@trinity.edu