The substantial advances in natural language processing made by IBM’s “Watson” supercomputer, while genuinely impressive, have unfortunately given rise to exaggerated claims of a sort all too common in computer science. Our tendency to anthropomorphize our creations has led many to claim uncritically that Watson has “intelligence” or is able to understand the “meaning” of words. Even less soberly, some envision a day when such “artificial intelligences” will make humans obsolete. These silly claims are grounded in a philosophical sloppiness that fails to distinguish between concepts and their representations, between signal processing and subjective awareness, between parsing and understanding. I have already addressed some of these errors in the eighth chapter of Vitalism and Psychology.
While a little fanciful anthropomorphizing of a computer may seem harmless now, there is a grave danger that we will be led into disastrous social and ethical decisions once computers can mimic intelligent behavior more convincingly. As an extreme example, if we took seriously the claim that a computer has rendered humans obsolete, we might foolishly replace ourselves with a collection of unaware machines sending signals to one another, yet having no interior psychological life. Alternatively, we might decide that machine “intelligences” should enjoy rights previously reserved to humans.
These absurdities can be avoided if we confront the reality that there is nothing fundamentally different about the behavior of supercomputers like Watson as compared with their simpler predecessors. All these machines do is process signals algorithmically. They have no intensional understanding of meaning. What we call a computer’s “understanding” or “intelligence” is really just how it treats a certain signal object, and this is strictly determined by its hardware and its program (though the latter may include random variables). It is entirely unnecessary for the computer to know what it is doing. For example, Watson may distinguish by context which of the several definitions of the word “bat” is intended, but this distinction does not involve ever knowing or seeing a baseball bat or a flying mammal. It is a strictly functionalist treatment of language, selecting one of several possible senses by probabilistic analysis of syntactic context.
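To make this concrete, consider a deliberately crude sketch of such sense selection, written in C. Watson’s actual pipeline is, of course, proprietary and vastly more sophisticated; the cue words and structure below are invented purely for illustration. The point is that the machine “resolves” the ambiguity by comparing character strings and counting matches, with nothing remotely like acquaintance with bats of either kind.

    #include <stdio.h>
    #include <string.h>

    /* A toy model of sense selection: each candidate sense of "bat"
     * carries a list of context words that raise its score. These cue
     * lists are invented for illustration; nothing here resembles
     * Watson's actual (far more elaborate) statistical machinery. */
    struct sense {
        const char *label;
        const char *cues[4];
    };

    int main(void)
    {
        struct sense senses[2] = {
            { "baseball bat",  { "pitcher", "swing", "hit",       "league" } },
            { "flying mammal", { "cave",    "wings", "nocturnal", "sonar"  } }
        };
        const char *context[] = { "the", "pitcher", "watched", "him", "swing" };
        int n = sizeof(context) / sizeof(context[0]);

        /* Score each sense by counting its cue words in the context.
         * This is pure string comparison; no concept is ever consulted. */
        int best = 0, best_score = -1;
        for (int s = 0; s < 2; s++) {
            int score = 0;
            for (int i = 0; i < n; i++)
                for (int k = 0; k < 4; k++)
                    if (strcmp(context[i], senses[s].cues[k]) == 0)
                        score++;
            if (score > best_score) { best_score = score; best = s; }
        }
        printf("\"bat\" resolved as: %s\n", senses[best].label);
        return 0;
    }

Given the context words above, the program prints “baseball bat,” and a reader may be tempted to say it “understood” the sentence; but the same loop would as happily resolve nonsense tokens, for it attaches no meaning to any of them.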
Years ago, I wrote a C program that solves quartic polynomial equations, simple enough to run on an IBM PC with a 386 processor. This program did not give the computer the power to understand higher mathematics. I simply reduced an intelligible process to an algorithm that a machine could execute without understanding anything about anything. The computer did not know it was doing math any more than a chess program knows it is playing chess. The same is true of Watson and language. It has not the slightest grasp of conceptual meaning. The impressive achievement in its programming is the reduction of the vast possibilities of natural language parsing to an executable algorithm that achieves a high (though not perfect) degree of accuracy.
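That old program is not reproduced here, but a minimal reconstruction conveys the point; the sketch below assumes Ferrari’s classical method (the original may well have differed in detail). Everything reduces to arithmetic on coefficients: depress the quartic, find a root of the resolvent cubic, and split the problem into two quadratics.

    #include <stdio.h>
    #include <math.h>
    #include <complex.h>

    /* Resolvent cubic 8m^3 + 8p*m^2 + (2p^2 - 8r)m - q^2, evaluated at m. */
    static double resolvent(double p, double q, double r, double m)
    {
        return 8.0*m*m*m + 8.0*p*m*m + (2.0*p*p - 8.0*r)*m - q*q;
    }

    /* Roots of y^2 + b*y + c = 0, allowing complex results. */
    static void quadratic(double b, double c, double complex out[2])
    {
        double complex disc = csqrt((double complex)(b*b - 4.0*c));
        out[0] = (-b + disc) / 2.0;
        out[1] = (-b - disc) / 2.0;
    }

    /* Ferrari's method (a reconstruction, not the original program):
     * roots of x^4 + a*x^3 + b*x^2 + c*x + d = 0. */
    void solve_quartic(double a, double b, double c, double d,
                       double complex roots[4])
    {
        /* Depress the quartic: x = y - a/4 gives y^4 + p*y^2 + q*y + r = 0. */
        double p = b - 3.0*a*a/8.0;
        double q = c - a*b/2.0 + a*a*a/8.0;
        double r = d - a*c/4.0 + a*a*b/16.0 - 3.0*a*a*a*a/256.0;

        if (fabs(q) < 1e-12) {
            /* Biquadratic case: z^2 + p*z + r = 0, then y = +/- sqrt(z). */
            double complex z[2];
            quadratic(p, r, z);
            roots[0] = csqrt(z[0]);  roots[1] = -roots[0];
            roots[2] = csqrt(z[1]);  roots[3] = -roots[2];
        } else {
            /* The resolvent cubic is -q^2 < 0 at m = 0 and positive for
             * large m, so bisection finds a positive real root. */
            double lo = 0.0, hi = 1.0;
            while (resolvent(p, q, r, hi) < 0.0) hi *= 2.0;
            for (int i = 0; i < 200; i++) {
                double mid = (lo + hi) / 2.0;
                if (resolvent(p, q, r, mid) < 0.0) lo = mid; else hi = mid;
            }
            double m = (lo + hi) / 2.0;
            double s = sqrt(2.0*m), t = q / (2.0*s);

            /* The depressed quartic splits into two quadratics in y. */
            double complex y2[2];
            quadratic(-s, p/2.0 + m + t, y2);
            roots[0] = y2[0];  roots[1] = y2[1];
            quadratic( s, p/2.0 + m - t, y2);
            roots[2] = y2[0];  roots[3] = y2[1];
        }

        /* Undo the depression substitution. */
        for (int i = 0; i < 4; i++)
            roots[i] -= a / 4.0;
    }

Handed the coefficients of x^4 - 10x^3 + 35x^2 - 50x + 24, such a routine dutifully returns 1, 2, 3, and 4, yet at no point does anything in it represent a number, a root, or an equation; it shuffles bit patterns according to rules I supplied.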
It is certainly not true that Watson understands language the way humans do, much as Deep Blue did not play chess as humans do. Quite simply, humans do not have the computing ability to explore millions of possibilities in a few seconds, so that is certainly not how we identify the meanings of words in real-time speech. We are able to intuit or directly understand the meanings of words, so we need not perform any deep analysis to interpret ordinary conversation. The great power of rational understanding is that we can arrive directly at the answer without walking through countless possibilities. This is why I was far more impressed with Kasparov than with Deep Blue, for Kasparov kept the match competitive even though he could not possibly examine millions of lines each turn. He had real wisdom and understanding, and could intuitively grasp the most promising move on each turn, with a high degree of accuracy.
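The contrast can be dramatized with a toy in C. The game below is Nim rather than chess, and Deep Blue’s search was incomparably more elaborate, but the principle on display is the same: the machine “plays” by exhaustively visiting positions, and the number it must visit explodes as the game grows, whereas a human who grasps the game simply sees that a pile divisible by four is lost.

    #include <stdio.h>

    static long positions = 0;  /* how many positions the search visits */

    /* Returns 1 if the player to move can force a win from a pile of n
     * stones, taking 1-3 per turn; taking the last stone wins. */
    static int wins(int n)
    {
        positions++;
        if (n == 0) return 0;      /* no move: the previous player won */
        for (int take = 1; take <= 3 && take <= n; take++)
            if (!wins(n - take))   /* a move leaving the opponent lost */
                return 1;
        return 0;
    }

    int main(void)
    {
        for (int n = 5; n <= 25; n += 5) {
            positions = 0;
            int w = wins(n);
            printf("pile %2d: %s for first player, %ld positions searched\n",
                   n, w ? "win" : "loss", positions);
        }
        return 0;
    }

The program reaches the right verdicts, but only by brute enumeration; a person who has grasped the underlying principle answers instantly for a pile of any size.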
Some, unwilling to accept a fundamental distinction between computers and authentic rational beings, have sought to demote the latter to the status of computers. They will say, in effect, that what we have said about how computers work is perfectly true, but that human beings do nothing more than this: all we do is process data, relating inputs to outputs. This position can hardly be characterized as anything but profound willful ignorance. A moment’s careful introspection should suffice to demolish this characterization of human intelligence.
Unfortunately, philosophical naivete is endemic in computer science, which often purports to reduce intensional meaning and understanding to extensional representations. This is linguistically naive as well, for if a signal is an arbitrary sign for a concept, it follows that meaning is not to be found in the signal itself. The computer never interprets anything; it only converts one set of signals into another. It is up to us rational beings to interpret the output as an answer with meaning.
Highly accurate natural language processing is an important step toward establishing credible computerized mimicry of intelligent processes without subjective understanding. Although we can never create genuine intelligence using the current modalities of computer engineering, we might do well enough to create a superficially convincing substitute. In a world that increasingly treats human beings with a functionalistic input-output mentality, such developments could have profound social and ethical implications, which I treat in my new short story, “The Turing Graduate,” to be published soon.