Searle: Is the Brain’s Mind a Computer Program?
We can distinguish between three ways of ascribing intentional states
to things:
- A thing may have intentionality merely metaphorically, e.g., as in "The sun is trying to peek through the clouds."
- A thing may have intentionality derivatively. For example, we say that the newspaper "said that Joe won the race." But the newspaper does not itself have any intentionality per se. It gets its intentional features from those of the people who use the words written in the newspaper.
- A thing may have intentionality originally or intrinsically when its intentional states do not derive from anything else's intentional features. Unlike in the case of the newspaper, when we say that Joe wants a drink, we ascribe to him his own desires and other mental states.
We sometimes ascribe intentional states to computers and other man-made devices, e.g., when we say that our chess-playing computer wants to take the bishop. But our chess-playing computer probably does not have any original intentionality because it's too simple a device; at most, it has only a derivative intentionality depending on that of its programmers.
The functionalist, however, believes that if we have a computer running a sophisticated enough program, then the computer will have its own original intentional states just in virtue of running that program. This is the view Searle wants to argue against.
1. Two types of AI:
- Strong AI: A machine can think just in virtue of implementing a computer program because the program itself is constitutive of thinking. In particular, a program passing the Turing test is a mind.
NOTE: this is a view to which functionalism is committed.
- Weak AI: computer models of minds are useful tools, but just as a model of the weather is not the weather, so a model of the mind is not the mind. Simulation is not duplication.
2. Attack on strong AI
A. The Chinese Room Analogy. Note that in the setup, the
manipulation of symbols is totally syntactical.
Some objections and replies to the Chinese Room analogy:
- Objection: you understand Chinese unconsciously.
Reply: syntax alone gives no semantics.
- Objection: the room as a whole understands.
Reply: memorize everything; you still don't understand.
Counter-reply: Searle's memorization reply misses the point. Some computing systems run software that enables them to emulate other operating systems, and software written for those other operating systems; for example, the Mac OS can emulate Windows 95. Although the Windows software is somehow incorporated in the Mac OS, the states of Windows need not be states of the Mac OS. For example, Windows may crash while the Mac OS (including the emulator) continues to work. When you memorize all the instructions in the Chinese book, you become like the Mac software, and the Chinese room software becomes like the emulated Windows software. Although you fully incorporate the Chinese room software, you needn't share all the states of the Chinese room software. The Chinese room software may crash while you carry on fine. So, if the Chinese room software is in a state of believing that it is 3 o'clock, you needn't be in that state. In particular, for the Chinese room software to understand some Chinese symbol, it is not required that you also understand that symbol.
- Objection: if the room simulated the actual workings of a man understanding Chinese, it would understand.
Reply: simulation is not duplication.
- Objection: light is electromagnetic even though shaking a magnet in a dark room gives off no visible light; similarly, syntax might constitute semantics even though shaking symbols around in the room gives off no visible semantics. The issue is empirical.
Reply: nego paritatem (the parity is denied): the electromagnetic story is causal, whereas the symbols have no causal powers; they only cause (?) the next step in the program.
B. The formal argument.
- Computer programs are formal (syntactic); as such:
  - they are abstract notions that can be implemented in various media;
  - they manipulate symbols without any reference to meanings: the program has syntax but no semantics.
- Human minds have mental contents (semantics).
- Syntax by itself is neither constitutive nor sufficient for semantics.
NOTE: This is taken as a logical truth because the same syntactical system admits of infinitely many semantic interpretations.
- Hence, programs are neither constitutive nor sufficient for minds.
- So, strong AI is false.
NOTES:
- It doesn't follow that a computer cannot think. For example, brains think and are (trivially) computers because any system can be interpreted as a computer (set window open = 1, window shut = 0, and you have a computer). Searle's point is that brains cause mental events because of their neurobiological processes.
- It doesn't follow that only biologically based systems can think: for all we know, a silicon computer might think, but not qua implementing a program.
- Whether a program is run on a serial or a parallel computer is irrelevant because the two are computationally equivalent.
3. Searle's positive view
Searle is not a substance dualist. He believes that brains cause
minds, but minds are not things different from brains; rather, minds are
causally emergent properties of neural activity (no mentality in single
neurons). Hence, any system capable of causing minds must duplicate
the specific causal powers of the brain, and this cannot be done merely
by running a formal program. At times he seems to toy with epiphenomenalism,
and yet he rejects that view.
4. Two connected reasons for resistance to the Chinese room argument:
- behaviorist residue in the Turing test: the belief that psychology must restrict itself to external behavior, and that therefore simulation entails duplication.
- dualist residue: strong AI allows one to deny that the mind is as biological as digestion.
5. A development: Searle now believes that strong AI is not wrong, but incoherent. There is no syntax without a subject considering a physical system as a syntactical system. Hence, a computer without a subject considering it a syntactical system is merely a physical system. Just as semantics is not reducible to syntax, so syntax is not reducible to the physical system.