Terry Winograd, Stanford University
Pre-final DRAFT of a chapter to appear in a volume on John Searle's Chinese Room argument, edited by John Preston and Mark Bishop, to be published by Oxford University Press
In a way, this paper is an argument not against Searle, but against the grounds on which the debate about his position has been conducted. I argue that the question Searle poses, "Does the computer understand?," is a meaningless question when it is stripped of an appropriate context of utterance.
In his writings on artificial intelligence, in particular in his "Chinese Room" example, Searle attacks two claims: that AI can "explain" human cognitive ability, and that a computer can be said literally to "understand" and have other cognitive states. I find myself in accord with his skeptical attitude towards both claims, but for reasons that do not correspond to his arguments. Although much has been written in response to his opinions, little agreement has been reached about the fundamental issues. This should not be surprising, because the questions as he posed them are simply not the kind of coherent questions that allow for objective answers. In framing the debate, Searle made contestable assumptions about the use of language. Put simply, I argue that it is an error to suppose that there is a right or wrong answer to the question of whether the computer (or the Chinese room) "understands" language. The error is not one of flawed logic in the argument about artificial intelligence, but is more fundamental and more pervasive. It has to do with the basic orientation we take towards the truth or falsity of statements in natural language.
Throughout the paper, Searle adopts the naive view that "understand" can be understood as a straightforward two-place predicate: that there is some objective sense in which it "really is the case" that X understands Y or X doesn't understand Y. In presenting examples, he makes liberal use of loaded modifiers, such as "obvious", "perfectly", and "certainly", as in "it seems to me quite obvious in the example that I do not understand a word of the Chinese stories" (Searle 1980, p.418 (p.70 in Boden)), "what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean" (ibid., (p.71 in Boden)), and "the man certainly doesn't understand Chinese, and neither do the water pipes . . ." (ibid., p.421 (p.78 in Boden)).
Searle's "just-plain-old-obvious-common-sense" posture can be a useful antidote to sophistry, but in this case it conceals the real issues. His interpretations of the counter-arguments all suffer from his unexamined adherence to the assumption that "understand" should be treated as an observer-independent predicate. He explicitly states "In cognitive sciences one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects" (ibid., p.422 (p.80 in Boden)).
We need to understand "understand" in a different way (which Searle may then choose not to call "cognitive science"). As an analogy, consider the terms "man" and "woman". For most people for most of history, this has seemed like a clear objective distinction, rooted not in interpretation but in biology. We deal every day with paradigmatic cases of gender with central and unproblematic uses of the terms. Now consider a victim of a rare disease such as testicular feminization, whose external characteristics are those of a woman, but whose chromosomal pattern is male and whose reproductive organs are not developed for either gender. People with a certain scientific attitude would say "If the chromosomes are XY it is really a man. Everything else is just appearances". Others with a more social-operational approach would say "If she looks, acts, and views herself as a woman, she is really a woman. The rest is just biomolecular technicalities". Others would say "If there are no developed reproductive organs, it is neither a man nor a woman, and we have to give up the Boolean either-or nature of the categories".
It is unproductive to ask "Which one of these three is really right?". Each is right for certain concerns and from certain perspectives. Each is wrong for others. The problem is that our eons of experience have led us to expect all of the factors (visible characteristics, reproductive potential, and genetic composition) to combine in a regular way. Our language isn't prepared to deal with a new situation in which they come unglued. That is exactly the problem that arises when we try to say that computers do (or don't) "understand". We have clear intuitions that mechanical things (such as pencil sharpeners) don't understand, and that human beings of normal intelligence do. But the computer is a mix-and-match, like the ambiguously-gendered person. The familiar terms just don't fit, no matter how hard we contort our arguments.
Even a casual examination of the use of "understand" in everyday discourse reveals that its applicability is not "obvious." For the sake of this examination of statements such as "X understands Y," we will consider only cases in which X is a person and Y is a linguistic text or utterance. Many additional interesting issues are raised by utterances such as "Only Einstein really understood relativity", "My analyst doesn't understand me," and "I just can't understand what happened at Jonestown," but they are not central to this discussion.
Even in the case of understanding a linguistic object, there is no definable boundary between "understanding" and "not understanding". We can say things like "I read his dissertation but I didn't understand it", "Do you think he understood what you said about the hiring situation?", and "It's the kind of book a high school student can understand." There are cases where from one perspective we would choose to say that someone does understand something, while from another we would regard the same person as not understanding. As a native English speaker, I (in some obvious sense) understand a newspaper editorial. But if someone later points out its political undertones, I may say "Oh, I really didn't understand it".
This kind of phenomenon is not a rarity. Much of our basic vocabulary has the same property. If I ask "Was the crowd big?", the answer does not depend simply on the number of people, but on a background of comparison with some anticipation (particular to speaker, hearer, and situation) of what size would be expected. Similarly, "understand" carries with it an unspoken consensus between speaker and hearer. As a rough indication of understanding, we might say that X understands Y if, having heard (or read) Y, X's potential for future action is changed in the appropriate ways. This paraphrase puts the weight of the speaker-hearer consensus on the word "appropriate". In some contexts, to use Schank's example, it is appropriate for someone who has heard a fragmentary account of an event in a restaurant to answer the question "What did John eat?" with "a hamburger". In other cases (such as understanding a warning), there may be clear immediate actions that we expect, while in others (such as understanding a poem), the appropriate changes may be impossible to specify precisely. However, if a person reads a poem and his future potential for action (including mental and verbal action) is not in some way changed by it, we feel comfortable in saying "He didn't really understand it". Of course, for practical situations, there is no way of testing whether the appropriate changes have happened. One can make a finite number of specific tests (make observations, ask questions, etc.) but more could always be generated, and it is possible they would have conflicting results.
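Schank's restaurant example can be made concrete with a toy sketch. This is not Schank's actual program; the script structure, slot names, and default fillers below are invented for illustration. The point it shows is exactly the one at issue: the program answers "What did John eat?" from a default in its script even when the story never mentions the eating, and whether that counts as "understanding" depends on what changes we consider "appropriate".

```python
# Toy illustration in the style of Schank's restaurant script.
# All names and defaults here are invented for the example.

RESTAURANT_SCRIPT = {
    "enter": "John entered the restaurant",
    "order": "John ordered {food}",
    "eat": "John ate {food}",   # inferred even if the story omits it
    "pay": "John paid and left",
}

def understand_story(story_facts):
    """Merge a fragmentary story into the script's default slot fillers."""
    slots = {"food": "a hamburger"}  # default filler for this script
    slots.update(story_facts)
    return {step: text.format(**slots) for step, text in RESTAURANT_SCRIPT.items()}

def answer(filled, question):
    """Answer the one question this toy program 'understands'."""
    if question == "What did John eat?":
        # The answer comes from the script's inferred "eat" event.
        return filled["eat"].replace("John ate ", "")
    return "I don't know"

filled = understand_story({})  # the story never says what John ate
print(answer(filled, "What did John eat?"))  # -> a hamburger
```

The sketch answers correctly within its narrow script, yet undergoes none of the other changes (noticing, generalizing, caring) that a fuller consensus on "appropriate" would demand.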
Everyday common sense urges us to believe that there must be a "right" answer, not dependent on purposes, context, and interpretation. Either the computer understands or it doesn't. But our everyday common sense also tells us that the ground beneath our feet is flat, and that there is an absolute frame of reference for the motion of objects. The common sense view works well as an approximation for the things we ordinarily encounter, but this should not be taken as a proof that it will work with new phenomena. Naive common sense suggests that although there may be a fuzzy boundary on terms such as "woman" or "understand", they have an objective real meaning, which we see in cases that are "obvious". I argue on the contrary that every meaning is inherently contextual and open-ended, but that in many everyday cases the context is so firmly shared and taken for granted among the participants that there is an illusion of objectivity. Language is always an incomplete, partial, and ambiguous reflection of experience. Therefore the answer to the question "Does the computer understand?" is not an unequivocal "yes" or "no", but depends on the background of assumptions and interpretations, just as much as does the gender example.
"Understand" as an Orientation
In applying a predicate to an entity, one is implicitly committed to the presupposition that the entity is the type of thing to which the predicate properly applies. If we say that an idea is "green", we presuppose that it is the kind of thing that can have a color. Since in ordinary interpretation it is not, the result appears semantically ill formed. If the world fell neatly into distinct categories, this formal notion of semantic restriction would be adequate. But ordinary everyday language falls between this kind of formalistic object-property categorization and something that might be characterized as metaphor. If I say that a person or a bee or a volcano is "angry", I am slipping from a case in which it is a standard assumption that the predicate is appropriate, to others in which I am using it to convey not only the sense of angriness, but also my orientation towards the object being characterized.
Much of the gut-level force of Searle's argument against AI comes from the unstated recognition that in uttering a sentence containing mental terms ("understand", "perceive," "learn"), we are adopting an orientation towards the referent of the subject of the sentence as an autonomous agent. The issue is not whether it really is autonomous; the question of free will has been debated for centuries, and work in AI has provided no new insights. The issue is that in using mental terms, we treat the subject as an autonomous agent. In using the word "action", rather than "behavior" in the earlier paraphrase for "understand", this autonomy was implicit. The kind of thing that acts or understands is an autonomous agent.
There are many reasons why one can feel uncomfortable with the tendency to adopt the same orientation towards people (who are the prime exemplar for autonomous beings), and towards machines (or systems or organizations). It isn't that in doing so speakers are right or wrong, accurate or inaccurate, but that they are accepting (often unwittingly) attitudes and role relations that can have implications for social interaction. Although there is no context-independent answer as to whether something is autonomous, it is not simply an idle matter of opinion.
Some statements are subject to coherent grounding within the network of statements accepted generally within a society, and therefore are more likely to be agreed upon as correct than statements that contradict or go outside of that discourse. If it is not up to us as a linguistically bound social group to decide which entities are autonomous agents, then who is it up to? Is the sun an autonomous agent? A networked computer? A robotic toy such as Furby or Aibo? This is not an objective question, but a choice (not individual but within a linguistic community) of what we choose to approach as "autonomous". It is misplaced concreteness to say "It is autonomous" as though that were a matter of fact outside of our interpretive linguistic space.
When is it Appropriate to say that Someone (or Something) Understands?
Returning to an examination of the claims of AI, we can address them without buying Searle's assumption that there is an objective sense in which words such as "understand" are correctly applied to some objects and not to others. The question is more a social one: when is it appropriate (or, to borrow a word from speech act theory, "felicitous") to characterize a situation as "understanding"?
I find myself (along with most people I know) frequently using mental terms in talking about animals and machines. I find it effective to describe the behaviour of a computer program using statements such as "This program only understands two kinds of commands . . . " In this context, the appropriate future action by the machine is clear. To understand a command means to perform those operations that the user intends to invoke in typing the input that corresponds to that command. It is clear to me and to the person I address that other changes (such as getting impatient, or noting that I seem to give those kinds of commands often, and therefore treating me differently) are not needed for it to count as "understanding". This is not a derivative use of language involving some kind of metaphor. The word "understand" is being applied as literally as it ever could be: that is, relative to a background and context of assumptions. Any attempt to claim that it is being used "only metaphorically" or "incorrectly" flies against the facts of ordinary language use. "Literal" use, like "correct" use, is not an objective distinction, but a contextual judgment.
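The narrow, context-bound sense of "understand" in that remark can be sketched as follows. The two-command program below is hypothetical (the command names and behaviour are invented for illustration); within this context, "understanding" a command just means performing the operation the user intended, and nothing more is expected.

```python
# A hypothetical program that "understands" exactly two commands.
# In this context, understanding means performing the intended
# operation; no further cognitive change is expected of the program.

def interpret(command, counter):
    """Return (response, new_counter) for the two understood commands."""
    if command == "increment":
        return ("counter is now %d" % (counter + 1), counter + 1)
    if command == "show":
        return ("counter is %d" % counter, counter)
    # Outside its two commands, the program does not "understand".
    return ("I do not understand %r" % command, counter)

response, n = interpret("increment", 0)
print(response)                   # -> counter is now 1
print(interpret("dance", n)[0])   # -> I do not understand 'dance'
```

Nothing in the sketch settles whether the program "really" understands; it only makes visible the consensus about appropriate action that the everyday use of the word presupposes.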
In most discussions of artificial intelligence, the situation is more problematic, since there is rarely sufficient background of mutual agreement between speaker and hearer (or reader) about the range of appropriate change. Most people would feel that "understanding of a story" should entail more than the ability to answer simple questions about whether a hamburger was eaten. Therefore, Schank's program does not undergo the "appropriate" changes, and does not understand. In general, since AI claims are couched in terms of "doing what a person does", the natural assumption about the range of appropriate changes is that they include the full extent of what one would normally expect in an adult human native speaker of the language. In this sense, it is clear that no existing AI program understands. It is potentially misleading to say that it does, except in specialized technical conversation where the background of expectations is not based on the background of full human abilities.
We can re-examine the counter-arguments and Searle's responses in terms of our two observations about what it means to say "X understands Y":
Taking the replies in the order in which Searle addresses them in his response:
1. The Systems Reply (Berkeley)
"While it is true that the person who is locked in the room does not understand the story . . . he is merely part of a whole system and the system does understand the story". (Searle ibid., p.419 (p.72 in Boden)).
This argument is based on choosing not to adopt an orientation towards the blindly-rule-following homunculus as an autonomous agent, while attributing autonomy to this whole mysterious arrangement: a group to whom one passes stories and questions in Chinese, and that passes back answers after some inscrutable process of cogitation. The "system," though not a conscious individual, is said to understand, in the same sense that one might ask whether a review committee understood a funding proposal. There may be reasons not to want to follow the consequences of granting autonomy to impersonal "systems" whether made up of people or mechanisms (e.g., in attributing responsibility to corporations as distinct from the individuals who run them), but the Systems Reply raises a key issue. In using the word "understand" one presupposes the boundaries of the system that will be treated as autonomous.
There is an interesting reflection of this point in the way we discuss our own minds. I can say "The rational side of me understands, but my feelings are . . . ". In doing so, I reject the common sense orientation of a person as a single autonomous entity, instead viewing the individual as made up of a number of independent "subpersonalities", each acting autonomously.
2) The Robot Reply (Yale)
" . . . a [physically embodied] robot would . . . have genuine understanding". (Searle ibid., p.420 (p.76 in Boden)).
Once again, the reply posits a necessary condition for treating something as autonomous: its physical embodiment. It further articulates the question of what it means for the potential for future action to be changed in appropriate ways. One measure of appropriateness is tied to the ways in which the possible actions mimic the corresponding human actions, which we see in the physical world. Most people would be more likely to adopt an orientation of autonomy towards an android than towards a less superficially human-like machine. This does not mean that they would have better philosophical justification for doing so, but that their whole background of perception and action towards the android would be affected by its physical characteristics, regardless of their conscious knowledge that it was artificially constructed.
3) The Brain Simulator Reply (Berkeley and M.I.T.)
"Suppose . . . [the program] simulates the actual sequence of neuron firings at the synapses [in] the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. [ . . . ] [S]urely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories?". (Searle ibid., p.420 (p.77 in Boden)).
Searle indicates that he finds this reply odd, since it depends on a notion of simulation that does not usually find its way into AI arguments. In doing so, he misses the point that it is a response to him (an admitted philosopher), in what the respondents saw as a philosophical, not technical vein. The issue he raised is whether a computer running a symbolic simulation could ever be described as "understanding". They were putting forward a case that was most likely to convince him that a simulation could duplicate in some exact sense the activities of the brain. This is not at all incompatible with other (technical) discussions in which they might argue that other ways of building a translating machine would be more effective.
In responding to their arguments, Searle distinguishes between the "formal properties" of the brain and the "causal properties". Lurking in this distinction is an interesting and dubious form of dualism. Searle would agree that the brain operates according to physical causal principles, and that the computer (or water pipes or whatever he uses as examples) may well operate with analogs or representations of those same principles. He claims to believe that "mental causality" exists in some other domain, whose connections to physical causality appear to be just as mysterious as the mind-body problem has always been. If someone can predict the future physical events in my brain and body based on physical causality, and can also predict my future actions on the basis of intentional causality, there must be some magical connection that guarantees they will be consistent. It seems to be only due to Searle's formidable rhetorical skills that this view has been taken seriously in the absence of any coherent explanation of how the two worlds of causality manage to avoid getting in each other's way.
4) (What we might call) The Strong Brain Simulator Reply (M.I.T.)
"We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories . . . ? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?". (Searle ibid., p.420 (p.77 in Boden)).
Searle states that this is the same mistake as the "Systems Reply". I agree that the parallelism makes no essential difference. But in saying "it doesn't make any sense to make such collective ascriptions to the team" Searle is refusing to acknowledge that people can choose to adopt an orientation towards groups (or teams) as unitary autonomous agents.
5) The Other Minds Reply (Yale)
"[if] a computer can pass the behavioral tests as well as [people] can . . . if you are going to attribute cognition to other people you must in principle also attribute it to computers". (Searle ibid., p.421 (p.80 in Boden)).
Searle discounts this argument by insisting that cognitive science cannot be based purely on behavioral description. "In cognitive sciences one presupposes the reality and knowability of the mental" (ibid., p.422 (p.80 in Boden)). Given his dualistic approach, this implies that the cognitive scientist assumes without debate the reality of the mental for other people (since it is "just plain obvious" that other people have mental states even though you can't experience their states yourself). But for machines, the assumption is that there can be "the computational processes and their output without the cognitive state". Both assumptions are commonsensical, which isn't a very solid reason for accepting them in this kind of debate.
The "other minds" argument can be rejected on quite different grounds. Solipsism is a philosophical choice in some abstract intellectual sense, but not a real choice for anyone who lives in a society. By the very act of entering into serious conversation with other people (even if that conversation is a defence of solipsism) one is adopting an orientation that grants their autonomy as individuals. This is not the case when one interacts with machines (or animals, or even all the people one deals with, as histories of slavery and genocide have demonstrated). There are alternatives for how we understand our relationship to machines, and although an individual does not have an unencumbered choice (being enmeshed in the tradition embodied in a language), the tradition as a whole can move in different directions. It is certainly not an a priori conclusion that we must adopt the same orientation towards people and machines, no matter how well they mimic each other in behavioural tests.
6) The Many Mansions Reply (Berkeley)
". . . eventually we will be able to build devices that have these [mental] causal processes". (Searle ibid., p.422 (p.81 in Boden)).
Searle claims openness to this possibility, commenting that it seems to trivialize the project of artificial intelligence. Based on his earlier comments, I do not understand how he can say "I see no reason in principle why we couldn't give a machine the capacity to understand . . . since in an important sense our bodies and brains are precisely such machines". My interpretation of his response to the "brain simulation" response denies this. If someone were to build an exact replica of a human being (down to the chemical details) it would seem that his earlier objections still apply: that the replica operates according to physical causal principles, and that there is no understanding in the molecules or in the neurons or in the constructed brain matter, any more than in the water pipes of his example. A claim that this replica is in some essential sense different from a person must be grounded either in a strong form of mind-body dualism or in an argument based on differences of perspective for interpretation.
The argument of this paper as stated so far is in a way self-contradictory. I argue that there is no objective answer to the question as to whether a computer operating analogously to the Chinese room actually "understands". But in bothering to state this argument, there appears to be an implied belief that to be a "coherent question" the question must allow for an objective answer. Certainly, such an assumption flies in the face of the interpretation of meaning as context- and situation-dependent. The underlying question of significance is not whether Searle poses a "coherent" question but whether it is a productive question. The value of questioning is generated in the argumentation that the question provokes, and Searle's argument has certainly led to an outpouring of thinking about questions of mechanism, meaning, and understanding. There is, however, an important difference in perspective. When the argumentation centers around fine-tuning definitions and concocting clever gedankenexperiments, it can become self-indulgent and detached from real concerns. It can instead lead into questions about topics such as the attribution of autonomy, potential for action and change of action, and assumptions about boundaries of autonomous systems. The social constructions that surround us with regard to these questions have profound consequences. They have not been answered (and will never be in an objective sense) and in the continuing questioning, Searle's example and his arguments deserve many more years yet of discussion.
Department of Computer Science
In the process of revising this paper for the current volume, the author and the editors entered into a dialog in which a number of potential objections were raised. In the spirit of seeing meaning as embedded in a discourse context, it seems more instructive to present them as a dialog, rather than attempting to anticipate the objections in the initial argument.
Q: You complain about Searle's claim that cognitive science must presuppose the reality and knowability of the mental. Isn't he right here, that unless cognitive science presupposes at least that, it simply wouldn't have a subject-matter? What else would it be about? Something that claims or hopes to constitute a science does have to presuppose that it has a subject-matter. In order for there to be a science of something, this something does need to exist, and its existence must be presupposed or asserted by the science in question. Could cognitive science (however conceived) do without the assumption that there are cognitive phenomena to be studied and known?
A: In order for a discourse to be "about" something, it does not need to presuppose the reality and knowability (in an objective sense) of the thing it is about. Participants may often jointly suspend disbelief and operate from an "as if" position in which the reality is taken for granted. That is a discourse strategy, not a foundation. That is how we get along in everyday life. We use language that presupposes the existence of the things we talk about, digging only when appropriate into foundational questions. To take an obvious example, we can certainly have a discussion about "God" without thereby presupposing the reality and/or knowability of a deity. It is an assumption of the "Sciences" that they deal only with knowable matters, but that assumption is part of the social discourse, not a necessary condition for acting as scientists.
Q: Mightn't Searle try to take on board your objections to the Chinese Room argument by admitting that there are grey areas in which it is unclear whether to say of someone that they understand, but insisting that all he needs is that there are central and unproblematic uses of such expressions too?
A: This does reflect the fundamental difference between my view of meaning and Searle's. He always wants to say "there may be a fuzzy boundary but some cases are just plain clear" and I want to say "everything is contextual, but in many cases the context is so firmly shared and taken for granted among the participants that there is an illusion of objectivity". This is a half-full/half-empty argument that can never be settled. I claim that my perspective explains why there is so much unresolved confusion around his claims about whether computers can understand. If the claims were based on statements that are inherently incoherent (in the sense of not having objective answers) then one would expect exactly the kind of ongoing debate over terms. Searle would argue that the confusion is due to other people's failure to correctly understand (!) his argument, and that there is really a right answer: his.
People who are not comfortable with ambiguity and uncertainty try to fall back on some standard by which one or another set of considerations is the "objective" stance and therefore gives the "real" answer. I see the open-endedness of meaning as the basic condition of language: language is always an incomplete, partial, and ambiguous reflection of experience. Therefore the question "Does the computer understand?" is not answerable with any objective "yes" or "no", but rather depends on the background of assumptions and interpretations (as does the gender example). We have to take perspectives into account if we want to deal with language use in the real world, as opposed to an abstracted logic of definitions. This does not make a predicate like "understand" more difficult to apply than it really is, but allows us to apply it as it really is used in ordinary discourse.
Q: You point out that linguistic distinctions (like man/woman) have borderline cases. But isnt this a different linguistic phenomenon from the one on which your case officially rests, which is context-relativity? After all, Searle could happily concede that man/woman has borderline cases while at the same time insisting that there are (or at least could be) also very clear, indeed paradigm, cases of man and of woman.
A: All cases are borderline. Some are just more visibly borderline than others. "Borderline" means that the choice of whether the linguistic distinction applies is not context-independent, but depends on the context. There are many cases for which the context is so thoroughly shared and taken for granted that the border isn't visible. This is a contingent condition on the congruency of backgrounds of speaker and hearer, not a "paradigmatic" or objective condition.
Q: You say "There are cases where from one perspective we would choose to say that someone does understand something, while from another we would regard the same person as not understanding". Searle might reply "But surely if you were the person in the Chinese Room, you wouldn't have any doubt that, no matter how long you had spent in there, you wouldn't understand (any) Chinese?". So he might ask "Why would one suppose that the judgement that the person in the Room doesn't understand Chinese is one of these 'cases where from one perspective . . . '?".
A: This is an accurate imitation of what Searle would say, complete with the emphasized "any". It is typical of his wording when he takes the "how can you possibly doubt common sense?" tack. I would only have doubt if I went beyond my naive standard way of talking about understanding, and considered the philosophical questions seriously.
Q: Your argument may establish (or at least make probable, by analogy) that "understand" has borderline cases, but not that it is context-relative across the board. There may be "cases where from one perspective we would choose to say that someone does understand something, while from another we would regard the same person as not understanding". But isn't what you need to argue that there are no cases in which it is not true that from one perspective we would say that someone does understand something, while from another we would regard the same person as not understanding?
A: It would be impossible to argue that there are "no cases" without being able to anticipate the context and background of all utterances among serious, intelligent, native speakers of English who might choose to use the word "understand". This would be a Sisyphean task.
Q: How far does your thesis of the relativity of understanding to context and purposes extend? Do you really disagree with Searle that the guy in the Room (considered alone) certainly doesn't understand Chinese?
A: I disagree with the assumption that "certainly doesn't understand" is a meaningful statement out of a specific context of use. The "certainly" is a rhetorical trope, used to intimidate people into not thinking about cases in which what follows might not apply. Searle liberally tosses words such as "clearly", "obviously", "plainly", and "certainly" into his text in order to get this effect.
Q: Is it only from the standpoint of lazy commonsense that this absolute judgment is acceptable?
Q: Are you using a single simple criterion of understanding: observable behavior? Mightn't Searle claim that (a) that's just behaviorism, and (b) it's too simple a criterion anyway? Aren't you focusing on a different kind of understanding from the kind he is interested in, which seems to be high-level conceptual understanding, rather than just behavioral understanding? Would you apply your relativity thesis equally to high-level understanding?
A: Your question is a great example of my point. You have coined two new distinctions, "behavioural understanding" and "high-level understanding". Each of them has its own horizon of meanings, which overlap with, but are not co-extensive with, the word "understanding". Indeed, we could continue proliferating this collection, which can be a valuable method for unpacking the issues that lie behind the use of the simpler term. Of course, all of the same things apply when we do this: there is no objective boundary for "behavioural understanding" or "high-level understanding" any more than there is for "understanding". Meaning emerges exactly from this kind of dialog, rather than from assuming the existence of precise boundaries and definitions.
Q: Why can't adopting a particular orientation itself be correct or incorrect?
A: Of course, anyone can characterize an orientation as correct or incorrect, and in doing so they are applying their own perspective on correctness. Not all perspectives are created equal, and each statement is an invitation to argument and to grounding in further statements that can be debated. Some statements are subject to coherent grounding within the network of statements within the society (e.g., most of standard science), and therefore are more likely to be agreed upon as correct than statements that contradict or go outside of that discourse. A statement that makes specific observational predictions (e.g., "the sky is blue") is subject to grounding in individual experience. But of course, that is relative to individual physiology as well (e.g., "this chemical tastes sweet" may be individual-dependent).
Q: I agree that the issue of who decides whether entities are autonomous agents, and how, will depend on the concept of autonomy. But, as with all these issues, I think Searle might say there is a certain fixity about that concept, and that, given our current understanding of it, there can be right and wrong answers to (some) questions of the form "Is this entity autonomous?".
A: Here he would be drawing on commonsense intuition about "a certain fixity". What that really means is that for many cases, the hard issues of meaning don't need to be faced, because they fall in an area of broad consensus.
Q: So here we have the basic disagreement between you and Searle again, over whether language (or meaning) is totally open-ended, or just partially so?
A: It is misleading to put it in "totally" vs. "partially" terms, which makes one position seem extreme and the other by default more reasonable. I might equally well ask whether language is ever "blindly objective" or only "apparently objective within context". Language use is indeed mostly objective, if we have an appropriate understanding of "objective" (i.e., not problematic within the shared context of speaker and hearer). I am making the case that the ultimate grounding of the apparent intersubjective objectivity is never some god's-eye unsituated objectivity, but always a social/situational circumstance. It would seem quite magical to me if the human brain managed to tap into some externally granted objectivity (e.g., the "real" meaning of "autonomous") rather than acting as a nervous system, adapting as best it can with limited experience and situational blindness, producing pragmatically serviceable results.
Q: Several of the passages in the section "When is it appropriate to say that someone (or something) understands?", I thought, revealed an interesting agreement between you and Searle, roughly to the effect that although "understands" is not ambiguous, there are two kinds of things one can do with it (although you may not want to put it like that). But would I be wrong in finding it ironic, in the light of your earlier statements, that you want to insist that to say of a computer that it understands is to apply that predicate "as literally as it ever could be"?
A: Yes, you would be wrong. I have added a phrase after "as literally as it ever could be" to emphasize the point that I am uniformly challenging the notion of "literally".
Q: Would it be right to say that although you're happy with the literal/metaphorical distinction, this doesn't commit you to a distinction between correct (objective) and incorrect application?
A: I am equally happy with a "literal/metaphorical" distinction and with a "correct/incorrect" distinction. Both distinctions are useful in discourse, and neither is grounded in context-independent objectivity. I appreciate your ongoing attempts to save me from the risk of appearing silly by positing a consistently radical view that appears to fly in the face of common wisdom about language. I really do mean it, though!
Q: Your accusation that Searle must be a dualist won't go unchallenged. Searle has had plenty of things to say about mental causality since the original 1980 paper. His preferred model of mental causation is that of micro-properties causing macro-properties. In Minds, Brains and Science (Searle 1984), for example, he argues that the relationship between micro-properties and macro-properties serves as a good model for the mind-brain relation, and thus dispels some of the mystery in the mind-body problem. On this view, properties like consciousness are properties of brains (macro-level objects), not of their micro-level constituents (particular neurons, or anything smaller, like molecules). Individual neurons aren't themselves conscious. Nevertheless, consciousness is caused by the operations of neurons, the component parts of which brains are (largely) made. I think you're going to have to be careful in accusing Searle of anything other than a dualism of levels of explanation.
A: I agree with this but don't think it deals with "intentional causality". Let's take a less loaded example. "Gridlock" is a macro-property of traffic, while "sudden stop" is a micro-property of the automobiles that constitute it. It doesn't make sense to say that gridlock is a property of a car, any more than consciousness is a property of a neuron. Cars participate in creating gridlock. Neurons participate in creating consciousness.
However, I don't believe that there is a kind of "traffic causality" that is independent of the physical causality of cars and the actions of their drivers. If I did a complete (symbolic) simulation of the individual cars, it would exhibit phenomena such as gridlock. Of course no physical cars would be involved, but the phenomenon of gridlock would be modelled without any appeal to some other domain of causality.
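The point can be illustrated with a toy simulation (an editorial sketch, not from the chapter; the ring-road model and cell-based movement rule are illustrative assumptions, not a model of real traffic). Each car follows a purely micro-level rule, yet "gridlock" appears as a macro-level observation of the whole system, with no separate domain of "traffic causality":

```python
# Toy illustration (hypothetical model): cars on a one-way ring road.
# road is a list of booleans; True means the cell is occupied by a car.

def step(road):
    """Advance every car one tick using only a micro-level rule:
    a car moves into the next cell only if that cell was empty.
    All cars decide simultaneously, based on the old state."""
    n = len(road)
    new = road[:]
    moved = False
    for i in range(n):
        if road[i] and not road[(i + 1) % n]:
            new[i] = False
            new[(i + 1) % n] = True
            moved = True
    return new, moved

def is_gridlocked(road):
    """Macro-level observation: gridlock means there are cars present
    but no micro-level move is possible. Nothing beyond the car-level
    rules is consulted."""
    _, moved = step(road)
    return any(road) and not moved

# A saturated ring exhibits gridlock; a ring with gaps does not.
print(is_gridlocked([True] * 6))                 # True: cars exist, none can move
print(is_gridlocked([True, False, True, False]))  # False: traffic still flows
```

Nothing in the code mentions gridlock as a cause of anything; it is simply a pattern we name when observing the aggregate behavior of the individually simulated cars.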
Q: I still worry a bit about your accusation that Searle is a dualist. Could you say a bit more about what you mean by dualism here? You're not just using "dualism" to mean "distinction". But do you mean full-fledged Cartesian substance dualism (the idea that minds are non-physical substances existing in time but not in space)? Searle always stringently denies that he is committed to that.
A: Searle argues that there is "mental causality". When I choose to lift my arm, you can tell the story in terms of mental causality (my intentions) or in terms of physical causality (molecular activity in my brain, muscles, etc.). Either one domain is supervenient on the other (or both on some third domain), or they are independent. If they are independent, they need not agree with each other. But for Searle's story to be true, they have to stay synchronized. If at a given moment my mental causality made my arm go up, and my physical causality made it go down, which one wins? There is a coherent non-dualistic position: the cognitivist position that mentality is supervenient on physicality. I don't understand how Searle can maintain that there are two domains of causality without this, and I have never seen a satisfactory explanation by him.
My basic starting point is that there can't be two different kinds of causality causing events in the same physical sphere, unless one is an epiphenomenon of the other (or they are both epiphenomena of some third domain). Otherwise they could lead in contradictory directions for any given action.
Q: Do you mean to accuse Searle of being a so-called property dualist, i.e. one who believes that minds are just brains, but that brains have some special mental properties that cannot be reduced to physical properties? That's a more plausible accusation, when leveled at Searle, although he still regularly denies that this is his view.
I worry whether this accusation fits with the point where you ascribe to Searle the belief that mental causality "exists in some other domain". That phrase really does seem to be accusing him of Cartesian dualism.
A: Yes, [property dualism] would be a form of it. The obvious question is how those two sets of properties manage to interact in the generation of behavior. If the non-physical properties are non-effective (have no role in causing things), then his argument becomes uninteresting.
Q: You say that all uses of the term "understand" and its cognates depend on certain concerns and perspectives. Isn't there a problem of self-reflexion here? Mustn't it be that your own uses of those terms are right only for certain concerns and from certain perspectives, but wrong for and from others?
A: You correctly point out that if I am to be consistent in being relativistic, then I must apply it to my own statements as well as to the ones I am analyzing. Indeed, my own evaluations of attitudes are matters of social construction. That is not the same as individual subjective opinion: it is grounded in the social discourse rather than in an appeal to objective truth.
Searle, J.R. (1980) 'Minds, Brains, and Programs', Behavioral and Brain Sciences, vol. 3, pp. 417-24.
Searle, J.R. (1984) Minds, Brains and Science (London: BBC Publications).