The Chinese Room Argument: Why Strong AI is Not Possible

1. Introduction

John Searle is an American philosopher best known for his work on speech acts and the philosophy of mind, and in particular for his criticism of artificial intelligence (AI). In his 1980 paper “Minds, Brains, and Programs”, Searle argues that running a computer program can never, by itself, amount to genuine thought, and hence that so-called strong AI is not a true form of intelligence. In this paper, I will first provide a brief overview of Searle’s argument, then critically evaluate it in light of counterarguments from other philosophers. Finally, I will conclude with my own thoughts on the matter.

2. What is the Mind?

Before we can address Searle’s argument, we first need to clarify what he means by “mind”. Searle treats the mind as a system of mental states and processes (Searle 1980, p. 417). That is, the mind is not just a stream of thoughts in your head; it also includes emotions, desires, beliefs, and so on. Moreover, Searle holds that mental states are causally effective, meaning that they can cause physical events in the world (Searle 1980, p. 417). For example, when you see a hungry dog, your sympathy for the dog may cause you to give it a treat (the physical event being the movement of your hand as it offers the treat).

3. What is Artificial Intelligence?

Artificial intelligence (AI) is the field of computer science concerned with creating algorithms or programs that allow computers to simulate human intelligence. Searle distinguishes two positions one can take about such programs (Searle 1980). According to weak AI, the computer is merely a useful tool for simulating and studying intelligent behaviour; the simulations do not themselves constitute instances of intelligence. A good example would be a computer program that can beat a human at chess: while the program may behave in a seemingly intelligent way, it does not actually understand the game of chess; it is just following a set of pre-determined rules. According to strong AI, by contrast, an appropriately programmed computer really is intelligent: it literally has mental states. A strong AI system would be one that could not only beat a human at chess but also understand the game at a deep level; it would know what the pieces are, what their functions are, what the goal of the game is, and so on.

4. The Chinese Room Argument

Searle’s argument against strong AI is known as the Chinese Room Argument. It goes like this: imagine that you are locked in a room with no windows and no way to communicate directly with the outside world. Inside the room is a table covered with symbols; let’s say these symbols are Chinese characters (hence the name “Chinese Room Argument”). You are given a set of instructions written in English telling you how to manipulate these symbols: when someone outside the room passes you a piece of paper with symbols on it, you look the symbols up in your instructions, manipulate the symbols on the table accordingly, and pass a new string of symbols back out. Now imagine that you are so good at following these instructions that, from the perspective of someone outside the room, it appears as if you understand Chinese: the strings you pass out read as fluent answers to the questions passed in. But of course, you do not actually understand Chinese; you are just following a set of rules.
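The rule-following that the room relies on can be sketched as a toy program. This is purely illustrative: the lookup table below is an invented placeholder standing in for Searle’s English rulebook, and the Chinese strings and glosses are my own examples, not anything from Searle’s paper. The point is simply that the program produces plausible-looking replies by pattern matching alone; at no point does anything in it grasp what the symbols mean.

```python
# A toy "Chinese room": replies are produced by pure symbol lookup,
# just as Searle's operator mechanically follows the English rulebook.
# The entries are invented placeholders; nothing here involves meaning.

RULEBOOK = {
    "你好吗？": "我很好。",      # gloss: "How are you?" -> "I am fine."
    "你会说中文吗？": "会。",    # gloss: "Do you speak Chinese?" -> "Yes."
}

def chinese_room(message: str) -> str:
    """Return a reply by looking it up; no understanding is involved."""
    # Unrecognized input gets a stock reply ("Please say that again.")
    return RULEBOOK.get(message, "请再说一遍。")

print(chinese_room("你好吗？"))  # prints 我很好。
```

From outside, the replies look like those of a Chinese speaker; inside, there is only table lookup. That asymmetry between outward behaviour and inner understanding is exactly what the thought experiment trades on.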

Searle then claims that strong AI systems are just like the person in the Chinese room: they may appear to be intelligent, but they are really just following a set of rules (i.e. algorithms) and do not actually understand the symbols they are manipulating. Searle’s point is that a program has only syntax, whereas a mind has semantics: minds can attach meaning to the symbols they use, and formal symbol manipulation is not, by itself, sufficient for meaning (Searle 1980, p. 419). Therefore, Searle concludes, strong AI is not possible.

5. Criticisms of Searle’s Argument

Searle’s argument against strong AI has been widely criticized by philosophers and cognitive scientists. One common criticism, known as the systems reply, is that Searle draws the wrong conclusion from his own thought experiment: granted, the person in the room does not understand Chinese, but the person is only one part of a larger system (the person together with the rulebook, the symbols, and the room), and it is the system as a whole that understands. Searle’s response is to let the person internalize the entire system by memorizing the rulebook and doing all the manipulations in their head; the person still does not understand Chinese, yet there is now nothing left of the system over and above the person. In my view this response succeeds; at most, the systems reply shows that the argument leaves weak AI untouched. Weak AI systems may simulate intelligence without being intelligent themselves, but that does not show that strong AI is possible.

Another common criticism is that Searle holds artificial intelligence to an unfair standard by comparing it directly with the human mind. Searle himself acknowledges that strong AI does not claim that digital computers exactly duplicate human brains or minds (Searle 1980, p. 420). The point of strong AI is not to build an artificial brain identical to a human one; rather, it is to build a system that genuinely thinks, even if its underlying structure is different, just as two different animals can both be alive without being identical to each other. But this observation does not refute Searle’s argument, because the argument does not depend on any comparison with human brains: it targets the claim that running the right program is, by itself, sufficient for understanding, and the Chinese room undermines that claim no matter how the program is physically realized.

6. Conclusion

In conclusion, I believe that Searle’s argument against strong AI is sound. While there are some criticisms of his argument, I think they ultimately fail to refute his conclusion. Strong AI systems may appear to be intelligent, but they do not actually understand the things they are manipulating (i.e. symbols). They are just following a set of rules, much like the person in the Chinese room. Therefore, I believe that Searle is correct in saying that strong AI is not possible.


7. Summary

Searle's argument against the possibility of machines thinking is that machines lack intentionality.

Searle supports his argument by claiming that intentionality is a necessary condition for thought and that machines lack the capacity for intentionality.

Possible objections to Searle's argument include the claim that intentionality is not a necessary condition for thought, or that even if it is, machines could still possess it.

In response to these objections, Searle might argue that intentionality is indeed a necessary condition for thought, and that even if machines could somehow possess it, they would still not be able to think because they lack consciousness.

The implications of Searle's argument for artificial intelligence research are significant: if machines cannot think, then all attempts to create strong artificial intelligence are doomed to failure.

There are other arguments bearing on the question of whether machines can think; however, Searle's argument is perhaps the most influential and well-known one.