Talk:Chinese room argument


    Layman's Comments (Harry Johnston)

    It seems strange to me to define "thinking" and "understanding" in terms of their (supposed) cause, i.e., the mind, rather than in terms of their effects, particularly when contrasted with the assertion that it might be possible to create a mind by other means than neurons. Other abstract processes, such as, for example, addition, are usually defined in terms of their effects. Perhaps this could be clarified somewhat?

    I have some concern that the argument about memorising the instructions is presented with no mention of the standard counterargument, i.e., that there is nothing in principle preventing a single brain from having two minds, only one of which "understands" Chinese.

    The meaning of the sentence beginning "The brain is above all a causal mechanism" is not clear to me. What is it precisely that the brain is supposed to be causing in this context? (For context, I would have supposed that the function of the brain is to cause the body to respond to the environment in an intelligent manner. Clearly this is not what Dr. Searle intends.)


    Reviewer A

    Overall, this is an excellent, concise statement of a key idea in the philosophy of cognitive science and artificial intelligence, by its author. However, there are several areas in which changes would result in an improved entry.

    INTRODUCTION

    "The Chinese Room Argument is a refutation of..." Given the persistent contentious status of the argument, and the intention for Scholarpedia to be neutral, this should be changed to: "The Chinese Room Argument aims to refute..." Similarly for "Strong AI is refuted by a simple thought experiment."

    It would be helpful if a reference or two could be given to works that put forward the view that the Chinese Room argument aims to refute. Specifically, references that make the claim that for any (or even some) mental state, there exists a program such that *any* implementation of that program will have that mental state, and/or references that make the claim that computers can have mental states *solely by virtue of* implementing the right program.

    STATEMENT OF THE ARGUMENT

    "I, in the room, know none of this." This is a little sloppy, since there are several things in the foregoing that the author does know: That he is a native speaker of English, that he understands no Chinese, that he is locked in a room with boxes of Chinese symbols, together with a book of instructions in English for manipulating the symbols, etc.

    The Scholarpedia guidelines say that the article should not "...contain "I", "we", "our", or other first- and second-person pronouns. Encyclopedic articles just state the facts. (20 years from now somebody else will be the curator of your article, so using "I" or "we" will be confusing or misleading.)" In any case, the perspective should be regularized; as it stands, the author sometimes uses first person, and sometimes third (e.g., "the man in the room").

    Mentioning the Turing Test in the initial statement of the argument might confuse readers, since the view that the argument is attacking is not, in the first instance, the (behaviourist) position that sameness of behaviour is sufficient for sameness of mental state, but the (cognitivist) position that only sameness of (behaviour and) functional/computational process is sufficient for sameness of mental state. Since the Turing Test is discussed later, it can perhaps be dropped here.

    Elsewhere the author has referenced computationalist work (Cantwell Smith, Goel, Batali, etc.) that questions Premise 1, specifically that "computers operate purely by manipulating formal symbols". It would be good to reproduce those qualifiers here.

    Even on current notions of computation, it is not generally conceded that "there is nothing to the program qua program but its syntactical properties"; rather, the standard view is that something only counts as syntactic if it also counts as semantic (cf Crane). Thus, programs, qua programs, are both syntactic and semantic. It would help the author's case and improve the article if reference could be made to specific passages of pro-computationalist texts that explicitly embrace the purely syntactic, non-semantic view of programs, to show that the argument does not have a straw man as its target.

    The discussion of premise 2, while sound, is a bit rushed and sloppy, which may confuse the reader. Surely the Chinese symbols the man in the room is manipulating have both syntax and the appropriate semantics. So in some sense the man "has" the appropriate semantics, in that he "has" Chinese symbols with the appropriate semantics. What should be made clear is that the position being attacked is that computational states are sufficient for mental (as opposed to linguistic) semantics. So what should be said is that, unlike in the case of normal language reception and production, the man in the Chinese room does not have mental states that have the same semantics as the language being received or produced.

    It might be less confusing to the reader if the author were to point out that the cognitivist position behind Strong AI also, like the author, maintains that "the Turing Test, or any other purely behavioral test, is insufficient to distinguish genuine cognition from behavior which successfully imitates or simulates cognition".

    It is perhaps misleading to say that the Chinese room argument merely "rests on" the principle that syntax is not semantics -- indeed, the thought experiment part of the argument attempts to establish this principle. For this reason, it may also be misleading to say that the thought experiment merely "illustrates" the principle, when instead it is a way of arguing for the principle's truth.

    The same goes for the second principle, concerning simulation: although the argument as a whole may rest on it, the most famous part of the argument, the thought experiment, is best seen as trying to support/motivate/argue for it. On the other hand, given that it appears to be derivable from, or is a re-statement of, the first principle, it is not clear that designating it as a separate principle is an illuminating tactic.

    The author undermines the clear structure of the entry so far by introducing, in passing, some claims that go beyond the argument: "Because we know that all of our cognitive processes are caused by brain processes, it follows trivially that any system which was able to cause cognitive processes would have to have relevant causal powers at least equal to the threshold causal powers of the human brain. It might use some other medium besides neurons, but it would have to be able to duplicate and not just simulate the causal powers of the brain."

    These claims are not supported by the Chinese room argument itself, so perhaps they should be clearly demarcated as such. Perhaps there should be a separate sub-section for a discussion of these issues. In particular, it is not in general true that if processes of type A cause B, then all processes that cause B must be of type A. Or rather, the sense in which it is trivially true cannot do the work the author wishes it to do. I would be willing to go into more detail about this if the author wishes.

    THE SYSTEMS REPLY

    Again, for neutrality's sake, the sentence "There have been a rather large number of discussions and attacks on the Chinese Room, but none have shaken its fundamental insight as described above" should be changed, perhaps to: "There have been a rather large number of discussions of and responses to the Chinese room argument." "Responses" or "objections" is preferable to "attacks".

    "Like a single neuron in a human brain" is an uncharitable version of the reply; better would be "like the hypothalamus in a human brain".

    "If one asks, Why is it that the man does not understand, since he passes the Turing Test?" This is an incomplete sentence. Better: "Suppose one asks:..." More importantly, the question, as phrased, is irrelevant to Strong AI/cognitivism, as it invokes behaviourism/the Turing Test. Why not: "Suppose one asks: Why is it that the man does not understand, even though he is running the program that Strong AI grants is sufficient for understanding Chinese?"

    The author offers two responses to the Systems Reply; one concerning learning, and one concerning the man in the room memorizing the rulebook, etc. The latter is contested, but fine as is; the former, however, requires some clarification. "The man has no way to learn the meanings of the Chinese symbols from the operations of the system, but neither does the whole system." This is confusing, since the notion of learning plays no obvious role in either the claim of Strong AI or the Chinese room argument. Yes, Strong AI claims that if a computer that didn't understand Chinese were to implement such-and-such a program, then it would now understand Chinese, but this does not mean that the program is a model of how humans learn Chinese. So the fact that the room hasn't learned Chinese in that sense is irrelevant. If, on the other hand, all that is meant by "learning Chinese" is the weak sense of going from a state in which Chinese is not understood to one in which it is, then to insist that the system (man plus room) hasn't learned Chinese isn't a response to the Systems Reply; it is merely to gainsay it.

    THREE MISINTERPRETATIONS

    I'm very pleased that the author included this section, as I have found that many people, including a large number of those who should know better, misconstrue Searle's position in just these ways, and I frequently have to refer them to the corresponding passages in his work. This section is very clearly stated and well-argued, except for one passage: "The brain is above all a causal mechanism and anything that thinks must be able to duplicate and not merely simulate the causal powers of the causal mechanism." It is not because the *brain* is causal that, on the author's view, computation is insufficient for mind. It is because *mind* is the kind of thing for which the causal, and not only the formal, properties matter that not everything that is formally equivalent to a brain will reproduce mind. Also see the point made before: it is not in general true that if processes of type A cause B, then all processes that cause B must be of type A. Nor does the Chinese room argument show that it is true for the special case of B = understanding. The argument at most shows that sharing formal properties alone is insufficient to reproduce mentality. This is not the same as showing that causal powers that produce mind must be the same, in any sense other than them both being sufficient for producing mind.

    A BRIEF HISTORY OF THE ARGUMENT

    A useful section, but it would be more illuminating if the author could connect the argument to historical precedents. For example, in what ways is it, and is it not, presaged by Leibniz' analogy of the mill?

    Also, the author says he was a lecturer at Yale at the time, but was flying to Yale to give a talk? Where was he flying from, then?

    Since this section is so crucially first-person, perhaps the break with the rest of the third-person article could be noted with an explicit phrase, such as "Searle recounts the origins of the argument as follows: '...'"

    "And only someone with a commitment to behaviorism or a commitment to an ideology that says that the brain must be a digital computer ("What else could it be?") could still be convinced of Strong AI" Once again, this misleadingly suggests that Strong AI is a form of behaviourism, when really it (and cognitivism) were introduced in opposition to behaviourism. Strong AI denies that anything that behaves like a human has the mental states of a human; it must generate that behaviour in the right way.

    Ditto for "The Robot Reply... seems to exemplify the behaviorism that was implicit in the whole project of assuming that the Turing Test was a conclusive proof of human cognition."

    Some of the further development of the argument should be mentioned. E.g., something like: "In Searle 19xx the argument was extended to apply to connectionist AI as well (the Chinese gym), which was responded to by Churchland and Churchland", etc.

    REFERENCES/RECOMMENDED READING/EXTERNAL LINKS/

    The references are a little unbalanced; a few of the more important papers opposing the argument should be singled out to help the reader; neutral or pro-argument papers not by Searle would also be welcome.

