In this paper I plan to show that Searle is correct in claiming that his Chinese Room Analogy shows that any Turing machine simulation of human understanding of a linguistic phenomenon fails to possess any real understanding. First I will explain the Chinese Room Analogy and how it is compared to a Turing machine. I will then show that the machine cannot literally be said to understand. A Turing machine has a finite number of internal states, though its tape is unbounded, and it always begins a computation in the initial state q0. Turing machines can be generalized in various ways.
For example, many machines can be connected, or a single machine may have more than one reader-printer under the command of the control. The machines are set to accept input and give output based on the type of input given. When comparing the Turing machine simulation of understanding to actual human understanding, you can see the story given as input, and the answers to questions about the story as output. In the Chinese Room Analogy, Searle supposed that he was locked in a room with a large batch of Chinese writing referred to as a "script". By using the term "script" it is meant that this first batch of Chinese writing is the original or principal instrument or document. Furthermore, in this case he is said not to know any Chinese, either written or spoken. The Chinese writing is described by Searle as "meaningless squiggles".
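The story-as-input, answers-as-output picture above can be made concrete with a minimal sketch of a Turing machine. This is an illustration only: the function name `run_tm`, the transition-table format `(state, symbol) -> (state, symbol, move)`, and the sample bit-flipping rules are all assumptions chosen for the example, not anything from Searle's paper.

```python
# A minimal sketch of a single-tape Turing machine. The machine has a
# finite set of states, starts in the initial state q0, and halts when
# it reaches a designated halting state.

def run_tm(rules, tape, state="q0", blank="_", accept="halt"):
    """Run the machine on an input string; return the final tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]  # table lookup only
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Example rules: starting in q0, flip every 0 to 1 and 1 to 0,
# halting at the first blank cell.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "0110"))  # -> 1001
```

The point relevant to the essay is visible in the loop body: every step is a blind table lookup on the shape of the current symbol, which is exactly the kind of operation Searle performs in the room.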
Next he is presented with a second batch of Chinese writing referred to as a "story". The term story here is meant to describe the second batch as an account of incidents or events that will be used to make statements regarding the facts pertinent to those incidents or events. Accompanying the second batch of writing is a set of rules, written in English, meant to be used for correlating the two batches, called a "program". The "program" given to Searle serves as a printed outline of the particular order to be followed in correlating the Chinese symbols.
The rules, or the "program", allow Searle to correlate the symbols entirely by their shape. Finally, a third batch of Chinese symbols is presented along with further instructions in English, referred to as "questions". The "questions" are implemented as a way to interrogate Searle in such a manner that his competence in the situation is tested. These "questions" allow the third batch to be correlated with the first two batches.
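Correlating symbols "entirely by their shape" can be sketched as nothing more than a lookup table. The particular strings and the function name `answer` below are invented for illustration; the essential feature is that the code matches character shapes against keys without ever interpreting them.

```python
# A sketch of the "program" as pure shape matching: a table pairing
# question strings with answer strings. The operator applying it never
# interprets the symbols. The Chinese strings are illustrative stand-ins.

program = {
    # "question shape" -> "answer shape", matched character by character
    "故事发生在哪里？": "在一家餐馆。",
    "你喜欢汉堡吗？": "是的，我很喜欢。",
}

def answer(question: str) -> str:
    """Correlate a third-batch 'question' with an answer by shape alone."""
    return program.get(question, "")  # exact match on shape, no meaning

print(answer("故事发生在哪里？"))  # produced without any understanding
```

A system like this can, in principle, give answers indistinguishable from a native speaker's for any question covered by its table, which is exactly the situation the analogy goes on to describe.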
It is supposed in this analogy that after a while he becomes so good at following the instructions to manipulate the symbols, while giving the correct answers, that it becomes impossible for a person outside the room to distinguish his answers from those of a native Chinese speaker. The Chinese Room Analogy goes a step further when he is given large batches of English, called "stories", which he of course understands as a native English speaker. The story in this case is used just as it was in the previous case: to describe the batch as an account of incidents or events that will be used to make statements regarding the facts pertinent to those incidents or events. Much like the case with the Chinese writing, questions are asked in English and he is able to answer them, also in English. These answers are indistinguishable from those of other native English speakers, if for no other reason than that he is a native speaker himself.
The difference here is that in the Chinese case, Searle is only producing answers based on manipulation of symbols which have no meaning to him, while in the English case answers are given based on understanding. It is supposed that in the Chinese case, Searle behaves as nothing more than a computer, performing operations on formally specified elements. Advocates of strong AI (Artificial Intelligence) claim that in a question-and-answer sequence much like the case with the Chinese symbols, a machine is not only simulating human ability but can also literally be said to understand the story and provide answers to questions about it.
Searle declares that the first claim, that a machine can literally be said to understand a story and provide answers, is untrue. Obviously, in the Chinese Room Analogy, even though the inputs and outputs are indistinguishable from those of a native Chinese speaker, Searle did not understand the input he was given or the output that he gave, even if it was the correct output for the situation. A computer would have no more true understanding in this analogy than he did. In regard to the second claim, that a machine and its program explain the human ability to understand stories and answer questions about them, Searle also claims this to be false. He maintains that sufficient conditions for understanding are not provided by the computer, and therefore it and its programs have nothing more than he did in the Chinese Room analogy. A strong AI supporter would contradict this belief by alleging that when Searle read and understood the story in English he was doing exactly the same thing as when he manipulated the Chinese symbols: in both cases he was given an input and gave the correct output for the situation.
On the other hand, Searle believes that both a Turing machine and the Chinese Room Analogy are missing something essential to true understanding. When he gave the correct string of symbols in the Chinese Room analogy, he was working like a Turing machine, following instructions without full understanding. There is syntax through the manipulations, but no semantics. Searle could possibly be oversimplifying the case by focusing only on the part of the Turing machine set to receive input and give output. Some supporters of strong AI have argued that Searle can be seen as the written instructions and tape in the Turing machine, just as he was the controller in the Chinese Room analogy. Strong AI supporters contend that the controller and reading head in a Turing machine, as well as Searle as the controller of the Chinese Room analogy, cannot be said to understand the meaning behind the stories. The point is that these pieces cannot understand, but the whole could.
This means that, on this view, the Turing machine as a whole and the Chinese Room as a whole understood the stories, yet what appeared to "control" them did not. Searle never gave a direct definition of understanding, yet he did deny that components which merely categorize input to produce output, whether correct or incorrect, can have understanding as single, lone instruments. In the second scenario, where Searle was given "stories" in English and asked questions about them, he is obviously able to understand each single component in the scenario.
With this comparison, Searle claimed that his Chinese Room analogy showed that any Turing machine simulation of human understanding was incomplete. A complete understanding, much like the one he possessed in the scenario containing only English, is only as capable of occurring as the "piece" in control. Searle is correct in claiming that his Chinese Room Analogy shows that any Turing machine or computational simulation of human understanding of a linguistic phenomenon fails to possess the real understanding that a human is able to achieve.