The Turing test, developed by Alan Turing, is a method used in the field of Artificial Intelligence (A.I.) to determine whether an agent is intelligent. A human interrogator converses with both the agent and a person through written messages; if the interrogator cannot reliably tell which is which, the agent is deemed intelligent (Turing, 1950). John Searle does not think this is enough to prove intelligence: he claims that a machine or agent needs only the proper syntax of a system to pass the Turing test, and he argues so quite convincingly in his “Chinese room” thought experiment (Searle, 1980). While Searle’s claim that “syntax is all we need” may be true for narrow A.I., in this paper I will attempt to show that his system fails to properly emulate a generally intelligent agent and would ultimately fail a Turing test for general intelligence, disproving his argument that the Turing test is not good enough to distinguish real intelligent understanding from emulated understanding.
We expect a generally intelligent person to be able to do three things: be creative, have a memory, and learn by accumulating knowledge. This follows from observing the one general intelligence we know of: humans. If I started talking to a peer and realized they had no recollection of past events, could not learn anything I taught them, and could not think creatively, I would feel as though they were either purely robotic or incredibly stupid. Now how does Searle claim his syntax-based system can emulate these abilities? Searle’s Chinese room thought experiment is set up as follows. Searle himself is placed inside a room with no windows or doors, only a small slot in the wall large enough to pass notes. Searle does not understand any Chinese dialect, as he speaks only English, and in the room with him is an enormous bank of filing cabinets containing nearly all possible responses to any Chinese phrase. The experiment begins when someone passes in a note with Chinese written on it. Searle looks at the symbols on the note, not understanding at all what they mean, searches through his filing cabinets for the appropriate response, which to him is just another string of Chinese symbols, and slides the note back to the person outside the room. The person outside continues exchanging notes with Searle and is quickly convinced that whoever is inside the room obviously understands Chinese (Searle, 1980). This is very troublesome, as we know Searle understands not the faintest syllable of Chinese. The problem is that Searle claims this Chinese room emulates a general intelligence, which it does not; the Chinese room only emulates Chinese conversation, exactly like a narrow chatbot A.I. If we actually test the room for the qualities of a general intelligence, it will fail.
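To make this concrete, the room’s behavior can be modeled as a purely syntactic lookup. The following is a minimal, hypothetical sketch in Python (the table contents, phrases, and function name are my own illustration, not Searle’s): every reply is retrieved by matching the input string, with no state and nothing that understands the symbols.

```python
# A minimal sketch of the Chinese room as a stateless lookup table.
# RULE_BOOK stands in for Searle's filing cabinets: it maps each input
# string of symbols directly to an output string of symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am well, thanks."
    "今天天气怎么样？": "天气很好。",    # "How is the weather?" -> "The weather is nice."
}

def chinese_room(note: str) -> str:
    """Return a reply by pure symbol matching; nothing here understands Chinese."""
    # No memory and no learning: the same note always yields the same reply.
    return RULE_BOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # a fluent reply produced with zero comprehension
```

Every failure discussed below is a direct consequence of this architecture: the reply is a function of the current note alone.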
Starting with the trickiest quality: creativity is difficult to define, but it can be described as the act of assembling novel knowledge about some system by combining new information with past information. By receiving new input about the world, one can discover truths about it using both induction and deduction. Obviously the Chinese room cannot be creative, but that does not matter; what matters is whether the room can emulate creativity at the human level. At first it may appear that it cannot, because it has no way to store new input; the room is, for all intents and purposes, static, so anything we tell it goes ‘in one ear and out the other’. However, Searle could point out that somewhere in the prodigious bank of filing cabinets there could be a solution to whatever problem we pose, which would be true for very many problems; but there could not possibly be a filing cabinet for every conceivable problem requiring creativity to solve. That would require an infinite amount of information, and at that point the analogy of the room to any possible physical system breaks down, since a bounded physical system can hold only a finite amount of information (Bekenstein, 1981). With the limit of finite information, Searle must concede that there would be some problems the room could not creatively solve.
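The appeal to Bekenstein (1981) can be made explicit. The Bekenstein bound caps the entropy, and hence the information, that any bounded physical system can contain; in its standard form (the symbols below follow the usual statement of the bound, not notation from this paper):

```latex
% Bekenstein bound: the entropy S of a system of total energy E enclosed
% in a sphere of radius R is finite, and so is its information content I.
S \;\le\; \frac{2\pi k R E}{\hbar c},
\qquad
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2} \ \text{bits}
% k: Boltzmann's constant, \hbar: reduced Planck constant, c: speed of light.
```

Since any physically realizable room has finite radius and finite energy, a filing cabinet for every conceivable creative problem is not merely impractical but physically impossible.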
However, Searle could again cleverly point out our folly here: we cannot force a person to be creative either. We know that people have the capacity to be creative, but we cannot, at any given moment, force them to think creatively about an idea or a problem. The designer of the room could easily set up its responses so that any query that seems to require creativity is met with frustration, confusion, and phrases like “I’m working on it.” A human could genuinely give exactly the same responses, and thus would not be able to outperform the room. True, the room would never be able to solve some problems, but neither can people; no creativity-demanding question we ask would force us to conclude that its capacity is less than a person’s. So it seems that the room could pass the creativity portion of our general intelligence Turing test.
Moving on to memory: if we take the Chinese room as described above, talk to it for a while, and then ask, “What did I just say to you?”, it will never be able to answer properly. An intelligent designer could make the room answer “I can’t remember” or “Let’s not focus on the past,” but we know that a rational, or even just a fully functional, person would have no trouble saying “Oh, we were talking about X.” The Chinese room could never consistently do this, as it stores none of the previously passed notes. There is no way around this without either giving Searle a more complex system with more English rules, which is no longer “syntax only,” or allowing dynamic additions to the system’s information. Even with the latter addition, it could never give answers more complex than some conjunction of the actual notes passed. There is no good way for the Chinese room to emulate the memory of a general intelligence.
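The failure is easy to see in the earlier sketch (again a hypothetical illustration, not Searle’s own construction): a stateless table maps each note to one canned reply, so a question about the previous note gets the same answer no matter what was actually said. Any genuine answer must be a function of the transcript, which is precisely the addition that breaks “syntax only”:

```python
RULE_BOOK = {
    "我刚才说了什么？": "我不记得了。",  # "What did I just say?" -> "I can't remember."
}

def stateless_room(note: str) -> str:
    # The reply depends only on the current note, so the room can never
    # report what was said before.
    return RULE_BOOK.get(note, "请再说一遍。")

def stateful_room(note: str, transcript: list[str]) -> str:
    # Adding a transcript lets the room echo a previous note, but note how
    # little it buys: the answer is just a conjunction of notes already passed.
    if note == "我刚才说了什么？":
        return ("你刚才说：" + transcript[-1]) if transcript else "我不记得了。"
    transcript.append(note)
    return RULE_BOOK.get(note, "请再说一遍。")
```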
Lastly, we consider whether the Chinese room can emulate learning like a general intelligence. This is where Searle’s argument breaks down most severely: because the room must hold some finite amount of information, and because it has no memory (or, in special cases, only a very basic one), a syntax-only machine cannot learn by accumulating knowledge. Suppose the room’s creator filled it with notes about some specific field (physics or biology, say), and a new breakthrough is then made in that field. I could talk to the room about the field and try to inform it of the breakthrough, but I never could. Given its immense knowledge of the field, I would assume it would understand the breakthrough; but because no part of the syntax-based machine understands the notes, it would never gain the knowledge I offered it. A syntax-only system, even with a basic storage system, would have no way of knowing where to store new notes in relation to the others. Another example: if I found out through conversation that the room knew math only up to basic calculus, I could try everything to teach whatever is inside it the ideas of integration, multivariable calculus, and so on; but when I came back to see what it had learned, it would always be at the same level of knowledge, as though I had never attempted to teach it. If a system can never learn anything about anything, I would very quickly conclude that it was not generally intelligent.
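The same sketch shows why the teaching attempt must fail (the function below is hypothetical): since no rule in the room interprets the content of a lesson, no rule can decide what new entry the lesson should produce or where it belongs among the existing ones, so the table after “teaching” is identical to the table before it.

```python
def teach(room_table: dict[str, str], lesson: str) -> dict[str, str]:
    # A syntax-only system has no rule that interprets `lesson`, so it has
    # no basis for adding or revising entries; the table comes back unchanged.
    return room_table

table_before = {"一加一等于几？": "二。"}  # "What is 1 + 1?" -> "Two."
table_after = teach(table_before, "微积分基本定理是……")  # an attempted calculus lesson
assert table_after == table_before  # the room has "learned" nothing
```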
Although Searle’s syntax-based system can pass the creativity portion of the general intelligence Turing test, it fails badly in both the test of memory and the test of the ability to learn. For this reason, Searle’s claim that the Turing test is not good enough to distinguish real understanding from emulated understanding is simply false.
Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59.236 (1950): 433-460. Web. 17 Oct. 2016. (http://cogprints.org/499/1/turing.HTML)
Bekenstein, Jacob D. “Universal upper bound on the entropy-to-energy ratio for bounded systems.” Physical Review D 23.2 (1981): 287-298. Web. 17 Oct. 2016. (http://www.webcitation.org/5pvt5c96N)
Searle, John R. “Minds, brains, and programs.” Behavioral and Brain Sciences 3.3 (1980): 417-457. Web. 17 Oct. 2016. (http://cogprints.org/7150/1/10.1.1.83.5248.pdf)