In “Minds, Brains, and Programs,” John Searle puts forth an argument against the view that understanding is a computational process. Searle illustrates the argument with an example involving a monolingual man locked in a room manipulating Chinese symbols. In this example, Searle is in a room receiving certain Chinese symbols; following a set of rules, he matches each with its corresponding symbol and then emits an output composed of these symbols. This output serves as an answer to a question. Thus, Searle is answering questions in Chinese, even though he is not actually speaking, and does not speak Chinese at all. The example demonstrates how an agent can go through all the steps and procedures required to carry out a specific task or function, all the while having no understanding of the task or function being carried out.
According to the Computational Theory of Mind (CTM), a mind is a computing machine: mental activity is the result of a series of inputs to the brain, and the brain has numerous pre-programmed outputs that occur given certain inputs. Additionally, CTM holds that at the most basic level of description, thinking is the manipulation of (meaningful) symbols according *only* to their syntactic (“formal”) features, in accordance with certain rules. Different sorts of thinking decompose into different sets of sub-processes, each of which follows some specific rule. Since this is exactly what a computer does, some computers, namely those that follow the right basic rules, can think. On this view, brains are similar to computers: both are nothing more than a series of complex ‘if-then’ statements. For instance, if a brain is programmed to say the word ‘cat’ whenever it reads the word ‘gato’, then whenever that brain reads ‘gato’, it will say ‘cat’. More precisely, whenever the brain receives the input of ‘reading gato’, a series of computations occurs: the brain (1) recognizes the word, (2) matches ‘gato’ with ‘cat’, and finally (3) emits the output of ‘saying cat’. This crude example shows how the mind works similarly to a computer: both are programmed to emit certain outputs given certain inputs.
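The three-step process above can be sketched in a few lines of Python. This is a toy illustration, not anything from Searle’s paper or from CTM literature; the rule table and function names are invented. The point it makes is the one in the text: the program follows purely syntactic matching rules and “knows” nothing about what its symbols mean.

```python
# A toy rule table pairing input symbols with output symbols.
# The program has no grasp of Spanish or English; it only matches
# strings, in the way CTM's purely syntactic rules require.
RULES = {"gato": "cat", "perro": "dog"}

def respond(symbol: str) -> str:
    """(1) recognize the input, (2) match it against the rule table,
    (3) emit the paired output symbol."""
    if symbol in RULES:       # step 1: recognize the word
        return RULES[symbol]  # steps 2-3: match and emit the output
    return "?"                # no rule applies to this input

print(respond("gato"))  # prints "cat"
```

The lookup succeeds whether or not anything in the system understands either word, which is precisely the gap the Chinese room argument exploits.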
Given this crude outline of how minds work similarly to computers, one can see that Searle’s Chinese room example is also a demonstration of the cognitive process. In the example, Searle is (1) receiving certain inputs (Chinese symbols), (2) matching these inputs with their corresponding symbols, and (3) emitting a specific output (another Chinese symbol) based on the initial input. This is the very process that, according to CTM, the brain undergoes.
However, as Searle points out, the Chinese room example does not show that any thinking occurs during the input/output process. Rather, Searle argues, it is possible to follow the pre-programmed rules (i.e., matching corresponding Chinese symbols) without any understanding of what the symbols mean. On this view, Searle is just going through the motions: he is answering questions in Chinese via his output of corresponding symbols without being able to speak Chinese or understand the story.
In the Chinese room example, matching corresponding Chinese symbols to one another without being able to read them is analogous to simulating mental events. The Chinese room example is therefore a demonstration of Weak AI, the view that machines are not capable of being conscious and, at best, can only simulate mental events. The example shows how a task, such as matching corresponding Chinese characters, can be performed without any understanding of the task or the characters, which is a case for Weak AI. The Chinese room example is not a demonstration of Strong AI, the view that a machine that carries out the very specific functions of a mind actually *is* a mind; on Strong AI, there is no simulation occurring. Strong AI holds that when Searle matches corresponding Chinese characters with one another, this is exactly what consciousness consists in. Searle never disproves Strong AI; rather, in light of his Chinese room argument, he shows that Strong AI is implausible.