‘The Chinese room argument’, as put forth by John Searle, is an argument against Strong AI. I have explained John Searle’s Chinese room argument here, so I will not take the time to do so again. Rather, I will take the time to explain the difference between Weak AI and Strong AI. Weak AI is the view that “artificial intelligence” (computers, robots, and the like) will never be more than a mere imitation of intelligence. On this position, artificial intelligence will never be capable of achieving consciousness; it will only ever be capable of imitating consciousness. Strong AI, on the other hand, is the view that a suitably complex computer program would really be intelligent, not just an imitation of intelligence. On this view, computers, robots, and the like are capable of achieving consciousness.
Searle is a proponent of Weak AI, and his Chinese room argument is an argument against Strong AI. The Chinese room argument demonstrates that “syntax is not enough for semantics” (Crane, 126). That is, symbols alone do not have any meaning. For instance, the word ‘cat’ does not have any objective meaning beyond the relative meaning humans have imposed upon it. In a vacuum, ‘cat’ would not refer to a Felis catus. Thus, Searle’s argument against Strong AI holds that understanding is not a computational process. Understanding is not a series of if-then statements (I see the word ‘cat’ → I know this refers to an exemplification of Felis catus). Searle’s argument can be constructed in standard form as follows:
(1) Intentionality/semantics is necessary for understanding.
(2) Computer programs do not possess intentionality/semantics; they manipulate syntax alone.
(3) Therefore, computers are not capable of understanding.
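The gap between syntax and semantics can be made concrete with a toy sketch (my own illustration, not anything from Searle or Crane): a program that maps Chinese questions to fluent Chinese answers by pure symbol-matching, exactly the kind of if-then rule-following described above, while nothing in the program knows what any symbol refers to.

```python
# A toy "Chinese room": a rulebook mapping input symbol strings to
# output symbol strings. The mapping is purely syntactic; the program
# matches shapes, with no grasp of what the symbols mean.
# (Hypothetical illustration only; the rules are made up.)
RULEBOOK = {
    "你好吗？": "我很好。",       # "How are you?" -> "I am fine."
    "猫是什么？": "猫是动物。",   # "What is a cat?" -> "A cat is an animal."
}

def room(symbols: str) -> str:
    # Follow the rule: if you see this shape, hand back that shape.
    # Unrecognized input gets a default symbol, also without understanding.
    return RULEBOOK.get(symbols, "？")

print(room("你好吗？"))  # a fluent-looking reply, produced with zero semantics
```

The program would pass a narrow conversational test for these inputs, yet, on Searle’s view, there is plainly nothing in it that understands Chinese.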
This argument shows that without intentionality, symbols are meaningless. It also shows that computer programs can produce and exchange symbols, but they do not actually understand what they are “saying”, because intentionality/semantics cannot be derived from syntax alone. If this conclusion is true, then, given the astronomical capabilities of computers, it has major implications for how much syntax, by itself, can accomplish.
On pages 125 and 126 of The Mechanical Mind, Tim Crane puts forth a few replies to Searle’s argument. Crane’s first reply is that Searle’s analogy simply does not work: it does not matter whether the man in the Chinese room understands Chinese, because the man is only one part of the cognitive process, not the process in its entirety. It is the functioning of the room as a whole that constitutes the cognitive process.
Searle responds that he could simply memorize all of the rules and recreate the process outside of the room. The room itself therefore plays no essential role in the cognitive process; rather, the example shows that one can go through the motions all the while remaining oblivious to the meaning of what one is doing.
Crane’s response is that if the individual who memorized all of the rules of the Chinese language went out into the world and interacted with Chinese speakers, he would eventually come to associate the Chinese symbols with their corresponding objects, and thus would eventually know Chinese. This is supposed to show that memorization at least provides the foundation for understanding.
Searle responds that this is precisely his point: memorization, by itself, is not sufficient; the man from the Chinese room also needs to interact with the world in order to achieve genuine understanding. On its own, a “running program” (Crane, 126), which is analogous to a human who has merely memorized the rules of the Chinese language, is not capable of genuine understanding.
Since Crane is left with no choice but to admit that interaction is a necessary requirement for a genuine understanding of Chinese symbols, it is clear that Searle wins this exchange. According to Crane, Searle’s most important insight is that “syntax is not enough for semantics” (Crane, 126). In a vacuum, words have no meaning; it is the people who use words, and the contexts in which they use them, that give words their meaning. Words and their definitions are therefore relative.
On a side note, I find the implications of the ‘syntax is not enough for semantics’ (Crane, 126) claim particularly interesting because, if it is true and my understanding is correct, Searle’s Chinese room argument suggests that all words were simply made up at some point, and that their definitions were simply agreed upon by the people using them. Searle’s Chinese room argument suggests this is how language was formed, which seems to run counter to Wittgenstein’s private language argument. But if Searle’s conclusion is true, how would the person who invented the first words have taught them to someone else? How can someone teach semantics without a common understanding of syntax? Moreover, how can one individual teach another anything at all without the existence of language? It seems the first words would have had to represent their definitions on their own, which would falsify one of Searle’s premises and render his argument unsound. Given this uncertainty about how the first person in the world to speak taught the second, I am not sure where it leaves the soundness of Searle’s Chinese room argument.
Crane, Tim. The Mechanical Mind. Google Books, n.d. Web. 17 Mar. 2017.