Archive for the ‘Science’ Category

Everything Is Impossible

Suppose I described a tree to someone who had never seen anything like it. I might say it was a giant creature with countless sprawling arms that never moved, and its body was full of intricate vessels that extracted water from the ground imperceptibly. Most fantastic of all, it had the power to transform light and air into its food, and grew to its great size without taking its bulk from the earth or any other solid thing. Nonetheless, its body was rigid, pound-for-pound stronger than steel, yet lighter than water.

The person being told this might be forgiven for taking it as a tall tale or myth, and the same is true for any natural object. Everything seems impossible or fantastic if we have never experienced anything similar. If you saw nothing but cosmic dust, you might never imagine that there could be such a thing as a star. If you saw a barren planet, you might never guess that there could be such a thing as life. If you saw only bacteria, you might never deduce the possibility of more complex organisms. We are able to explain the complex in terms of the simple only after the fact. We have a poor record of determining in advance what is possible or impossible.

If all you knew was physics, you likely would not be able to derive much chemistry. Anything beyond the hydrogen atom is computationally problematic. The few previously unobserved things we have been able to predict are mostly simple, structureless entities like fundamental particles and black holes. Everything else comes as a surprise to us. No cosmologist or astronomer anticipated the existence of quasars or pulsars. As with most new things, we first observe them and then try to explain them.

After a long history of discovering things thought to be impossible, if they were ever imagined at all, we should realize how unreliable it is to claim that something is impossible simply because it is outside of our experience. Everything is impossible when abstracted from experience. It is only familiarity that makes these impossible things no longer fantastic.

When Mathematics Fails as Theology

It is fitting that the failed California doomsday prophet should have received his formal education in engineering rather than theology, since his contorted interpretation of the Bible relied on a hermeneutic that would make mathematics theologically informative. While it is easy to ridicule his particular belief, the mentality that created it is quite widespread, and can be found even among the most eminent scientists who profess no religious faith. By this mentality I mean the fallacy that mathematics can decide ultimate questions of reality.

Camping’s unwavering certainty in his prediction (“The Bible guarantees it”) was grounded in the appearance of remarkable mathematical coincidences that pointed to May 21, 2011 as a Biblically significant date. Given the premise that the Bible is absolutely true, and the additional premise that his inferences are mathematically certain, we can appreciate why Camping would present his particular interpretation of Scripture as no less authoritative than Scripture itself. Mathematics allows no room for interpretation, so it seems, as the numbers speak for themselves.

This mathematical absolutism disregards the role that subjective choices play in developing a mathematical model. Just because our model accounts for all the data, that does not mean we could not have constructed another model that works equally well. In general, it is impossible to prove theoretical uniqueness. Camping, for example, found it astounding that the same date that was seven thousand years after the Flood was also after the Crucifixion by a number of days equaling the square of the product of three numbers with significance in Hebrew Gematria. He ignored the fact that his dating of the Noachic Flood in 4990 BC was highly idiosyncratic, as well as the more obvious fact that any number of arithmetic operations could have been chosen. Further, why must the end date be determined by the square of the product rather than the cube? In short, he made some deliberate subjective decisions, consciously or unconsciously, which led to the desired result that the Rapture would occur in his lifetime.
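
As a rough illustration of how quickly such degrees of freedom multiply, consider the following Python sketch. The “significant” numbers and the arithmetic rules in it are placeholders invented for this example, not Camping’s actual figures; the point is only the size of the menu an interpreter gets to choose from.

```python
from itertools import combinations

# Placeholder "significant" numbers and permitted rules, invented for
# illustration only (these are not Camping's actual figures).
significant = [3, 5, 7, 10, 17, 23, 40]
rules = {
    "sum": lambda a, b, c: a + b + c,
    "product": lambda a, b, c: a * b * c,
    "product squared": lambda a, b, c: (a * b * c) ** 2,
    "product cubed": lambda a, b, c: (a * b * c) ** 3,
}

# Every unordered choice of three numbers, pushed through every rule,
# yields a candidate day-count that could be declared providential.
candidates = set()
for trio in combinations(significant, 3):
    for rule in rules.values():
        candidates.add(rule(*trio))

print(f"{len(candidates)} distinct day-counts from "
      f"{len(significant)} numbers and {len(rules)} rules")
```

Add to this the freedom to pick the anchor dates themselves (the Flood, the Crucifixion, Creation), and a motivated interpreter can land almost anywhere he wishes.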

Lest we think that such mathematical idolatry is confined to elderly fundamentalist preachers, let us take a look at the opposite end of the spectrum. The famed physicist Stephen Hawking has recently proffered his view that it can be proven – through abstract mathematical theorizing, of course – that heaven does not exist and God is unnecessary. The basis of this claim is his construction of a theoretical model whereby the universe “creates” particles with mass, and the universe is self-enclosed with respect to temporal causality. As with Camping, this model is cleverly constructed to confirm a priori convictions Hawking has held for decades. He had already suggested in A Brief History of Time that the need for a beginning of creation might be eliminated by “rounding off” the light cone so there is no causally “first” event. “What need then for a creator?” Such a manipulation was highly tortured, as it would contradict a plain interpretation of general relativity by allowing effectively superluminal expansion, and it generalized the notion of temporal causality to the point that it is no longer an effective constraint on physical theorizing. Such liberties are part and parcel of the “anything goes” approach to modeling the early universe.

The point is that Hawking had many options available to him, but he did not take the most “obvious” option (in light of relativity’s causality postulate and observed expansion from a single point). Just as Camping wants the Rapture to occur in his lifetime, Hawking wants the universe not to rely upon a transcendent God. He ignores the significant role that his own subjectivity has played in the formation of his mathematical model.

Even if Hawking’s recently proposed theory should someday prove to be an accurate mathematical model of physical reality, it would not accomplish the theological aims he intends for it. The universe does not create massive particles out of nothing, but (theoretically) from a vacuum field or some other construct with definite quantifiable properties. However you want to characterize such an entity, it certainly is not “nothing” in a strict philosophical sense. Modern physicists play fast and loose with philosophical concepts in order to make their mathematical models appear to sanction their metaphysical predilections.

A universe that is self-enclosed with respect to temporal causality does not thereby find itself without need for a creator. To take a simple example, consider a universe with one particle that has two states, A and B, where the event of being A causes the event of being B and the event of being B causes the event of being A. (For the sake of argument, I adopt the physicist’s error of saying that events cause events.) In this chicken-and-egg universe, our one particle goes back and forth between being A and B. Does it follow that it needs no creator? Not at all, for there is still no logical necessity that such a universe should exist at all, and we should have to ask ourselves why this particular universe with its causal structure and laws is actually existent, while some other equally mathematically valid universe is not. No natural order is absolutely necessary, in which case we must appeal to some higher cause to account for the natural order as a whole.
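
The chicken-and-egg universe is simple enough to write down explicitly. Here is a minimal sketch in Python (the state labels and the number of steps are arbitrary, and the starting state is chosen only so that the simulation can run); the causal rule is perfectly well defined, yet nothing in the rule says why a world obeying it should exist:

```python
# The causal rule of the toy universe: the event of being A "causes" the
# event of being B, and being B "causes" being A.
def next_state(state: str) -> str:
    return "B" if state == "A" else "A"

state = "A"          # arbitrary entry point, chosen only to run the simulation
history = [state]
for _ in range(6):
    state = next_state(state)
    history.append(state)

print(" -> ".join(history))   # A -> B -> A -> B -> A -> B -> A
```

The rule closes the causal loop, but it does not explain why there is a loop at all; that question lies outside the model, which is precisely the point.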

Hawking’s physical theories, like all mathematical models of physics, contain determinate assumptions that are not tautological. Since they are not logically necessary, and mathematical principles have no power qua mathematical principles to actualize themselves as physical reality, it follows that we need something beyond physics to account for why this particular natural order was granted reality rather than another. Most physicists overlook the need for metaphysics because they unconsciously ascribe to mathematical principles an almost mystical power to result in physical actualization. This poorly thought-out Platonism is rarely formally declared, but it is implied in the way physicists speak of their theoretical constructs, particularly when dealing with the early universe or attempts at “theories of everything”.

We might try to make the natural order logically necessary by declaring that every mathematically valid possibility comes into existence. This makes nonsense of Occam’s Razor, as it postulates an unfathomable infinity of universes just to account for this one. Further, it does not solve the problem of logical necessity, as it is not logically necessary that every possibility should become actual.

Lastly, one could decide that the natural order needs no cause, and is just a brute fact to be accepted without explanation. This is irrational in the true sense of the word, as it declares everything to be without a reason. It is also profoundly inconsistent to insist that everything that happens within the universe, no matter how insignificant, must have a reason or cause, yet that the entire universe with its natural order can come into being and be sustained in being (physicists generally ignore this metaphysical problem) for no reason whatsoever. Logical cogency ultimately requires grounding in a metaphysically necessary Being, and none of our physical theories, by virtue of their mathematical contingency, can meet this requirement.

To the philosophically literate, it is no surprise that mathematics is incapable of serving as natural theology. In our society, however, mathematical ability has become practically synonymous with intelligence, since it is most easily quantified (naturally), and it is positively correlated with other mental abilities. It is a mistake, nonetheless, to make mathematical ability the defining characteristic of human rationality, since computation and spatial reasoning are easily replicated by computers that have no subjective thought processes. Although Professor Hawking and Brother Camping have both done their math correctly, that is no substitute for authentic wisdom and understanding, which require a more subtle grasp of concepts and an awareness of one’s own subjective assumptions.

See also: Causality and Physical Laws

Parsing Is Not Understanding

The substantial advances in natural language processing made by IBM’s “Watson” supercomputer, while genuinely impressive, have unfortunately given rise to exaggerated claims of the sort that is all too common in computer science. Our tendency to anthropomorphize our creations has led many to uncritically claim that Watson has “intelligence” or is able to understand “meaning” in words. Even less soberly, some are envisioning a day when such “artificial intelligences” will make humans obsolete. These silly claims are grounded in a philosophical sloppiness that fails to distinguish between concepts and their representations, between signal processing and subjective awareness, between parsing and understanding. I have already addressed some of these errors in the eighth chapter of Vitalism and Psychology.

While a little fanciful anthropomorphizing of a computer may seem harmless now, there is a grave danger that we will be led into disastrous social and ethical decisions when computers are able to mimic intelligent behavior more convincingly. As an extreme example, if we were to take seriously the claim that a computer has rendered humans obsolete, we would foolishly replace ourselves with a bunch of unaware machines sending signals to each other, yet having no interior psychological life. Alternatively, we might decide that machine “intelligences” should enjoy rights formerly reserved only to humans.

These absurdities can be avoided if we confront the reality that there is nothing fundamentally different about the behavior of a supercomputer like Watson as compared with its simpler predecessors. All these machines do is process signals algorithmically. They have no intensional understanding of meaning. What we call a computer’s “understanding” or “intelligence” is really how it treats a certain signal object. This is strictly determined by its hard wiring and its program (though the latter may include random variables). It is completely unnecessary for the computer to know what it is doing. For example, Watson may distinguish which of the several definitions of the word “bat” is intended by context, but this distinction does not involve actually knowing or seeing a baseball bat or a flying mammal. It is a strictly functionalistic analysis of language, selecting one of several possible attributions based on a probabilistic weighting of syntactic context.
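
A deliberately crude sketch in Python makes the point. This is not Watson’s actual pipeline (the sense labels and cue words below are invented for illustration), but it shows how a “sense” of the word “bat” can be selected by nothing more than counting overlaps between a sentence and per-sense word lists:

```python
# Toy word-sense selection: each "sense" of "bat" is just a bag of cue
# words, and the winner is whichever bag overlaps the sentence the most.
# Nothing mammalian or athletic ever enters into the computation.
SENSES = {
    "bat/animal": {"cave", "wings", "nocturnal", "mammal", "echolocation"},
    "bat/sports": {"swing", "pitcher", "baseball", "hitter", "inning"},
}

def pick_sense(sentence: str) -> str:
    tokens = set(sentence.lower().split())
    scores = {sense: len(tokens & cues) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(pick_sense("The bat flew out of the cave on silent wings"))    # bat/animal
print(pick_sense("He dropped the bat after hitting the baseball"))   # bat/sports
```

The script returns the “right” label in each case, yet it is token counting all the way down; no flying mammal or baseball bat figures in it anywhere.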

Years ago, I wrote a C program that solves quartic polynomial equations, which was simple enough to run on an IBM 386. This program did not give the computer the power to understand higher mathematics. I simply reduced an intelligible process to an algorithm that a machine could execute without understanding anything about anything. The computer did not know it was doing math any more than a chess program knows it is playing chess. The same is true with respect to Watson and language. It has not the slightest grasp of conceptual meaning. The impressive achievement in its programming is reducing the vast possibilities of natural language parsing to an executable algorithm that has a high degree of accuracy (though not perfect) in its results.
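
The original C program is not reproduced here, but a minimal sketch of the same idea in Python conveys the point (it leans on NumPy’s eigenvalue routine, a luxury the old program did not have): root-finding is reduced to a mechanical recipe that the machine follows with no grasp of algebra.

```python
import numpy as np

# Solve x^4 + a*x^3 + b*x^2 + c*x + d = 0 by recasting it as an
# eigenvalue problem: the roots of a monic polynomial are exactly the
# eigenvalues of its companion matrix.
def quartic_roots(a: float, b: float, c: float, d: float) -> np.ndarray:
    companion = np.array([
        [0.0, 0.0, 0.0, -d],
        [1.0, 0.0, 0.0, -c],
        [0.0, 1.0, 0.0, -b],
        [0.0, 0.0, 1.0, -a],
    ])
    return np.linalg.eigvals(companion)

# x^4 - 10x^2 + 9 = 0 has roots ±1 and ±3.
print(np.sort_complex(quartic_roots(0.0, -10.0, 0.0, 9.0)))
```

However the roots are extracted, whether by closed-form radicals or by an eigenvalue routine, the machine is only shuffling numbers according to a recipe; it knows nothing of polynomials.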

It is certainly not true that Watson understands language the same way humans do, much as Deep Blue did not play chess as humans do. Quite simply, humans do not have the computing ability to explore millions of possibilities in a few seconds, so that is certainly not how we identify the meanings of words in speech in real time. We are able to intuit or directly understand the meanings of words, so we do not have to do much deep analysis to figure out how to interpret ordinary conversation. The great power of rational understanding is that we can get directly at the answer without walking through countless possibilities. This is why I was much more impressed with Kasparov than with Deep Blue, for Kasparov was able to keep the match competitive even though he could not possibly go through millions of possibilities each turn. He had real wisdom and understanding, and could intuitively grasp the most likely successful move on each turn, with a high degree of accuracy.

Some, unwilling to accept a fundamental distinction between computers and authentic rational beings, have sought to demote the latter to the status of computers. They will say, in effect, that what we have said about how computers work is perfectly true, but human beings do not do anything more than this. All we do is process data, and relate inputs to outputs. This position can hardly be characterized as anything but profound willful ignorance. A moment’s careful introspection should suffice to demolish this characterization of human intelligence.

Unfortunately, philosophical naivete is endemic in computer science, which purports to reduce intensional meaning and understanding to its extensional representations. This is linguistically naive as well, for if a signal is an arbitrary sign for a concept, it follows that meaning is not to be found in the signal itself. The computer never interprets anything; it only converts one set of signals into another set. It is up to us rational beings to interpret the output as an answer with meaning.

Highly accurate natural language processing is an important step toward establishing credible computerized mimicry of intelligent processes without subjective understanding. Although we can never create genuine intelligence using the current modalities of computer engineering, we might do well enough to create a superficially convincing substitute. In a world that increasingly treats human beings with a functionalistic input-output mentality, such developments could have profound social and ethical implications, which I treat in my new short story, “The Turing Graduate,” to be published soon.

