
Part II

Consciousness and Its Unity
Rational Will and Its Freedom
Thoughts and their Representations
Pseudo-Intelligence in Computer Science

Consciousness and Its Unity

In neuroscience, 'consciousness' is a broad term applied to any subjective awareness of mental events, which can range from abstract intellection to raw sensation. Considered experientially, consciousness is an integrative phenomenon, taking a synthesis of various sensory experiences and associations, and referring it to a unified subject. We have contended that animal consciousness, and that of humans in particular, is a real unity, not merely an amalgam of cellular components. We do not experience consciousness as millions of micro-events, but as a cohesive unity, so there is no justification for reducing consciousness to an ensemble of neural events. Rather, consciousness is the subject that experiences a synthesis of sensations processed by neural systems. We note that the term 'consciousness' may be referred to the subject itself, or to the experience of subjectivity. Some may contend that there is no real distinction between the two; that is, we are what we experience.

A real test of the unity of consciousness can be found in the split-brain phenomenon, resulting from certain kinds of brain surgery that require the severing of connections between the left and right hemispheres of the brain. The two hemispheres are known to control certain lateralized functions, most notably, for our purposes, speech and language comprehension on the left, and spatial recognition on the right, as depicted in the diagram. What happens when the two halves of the brain can no longer communicate with each other?

As is well known, each hemisphere of the brain controls the motor functions of the opposite side of the body and receives visual input from the opposite side. For example, the right hemisphere controls the left arm, hand and fingers. Although interlateral connections permit the left hemisphere to also exert motor control over the left arm, there are no connections whatsoever enabling it to control the left hand and fingers, for functions such as grasping or writing. Further, the right brain receives input only from the right hemiretina of each eye, which corresponds to the left field of vision, while the left brain receives sensation from the left hemiretinas, or right field of vision. Normally, this division of labor has no notable behavioral effects, since both hemispheres communicate with each other through the corpus callosum. However, when the hemispheres are surgically severed, it becomes possible to exploit this bifurcation in motor and visual function (as well as similar divisions in sound, smell and touch) in order to isolate inputs to each hemisphere.

Such controlled observations were made in the famous "split-brain" experiments by Sperry, Gazzaniga and Vogel, which tested the cognitive functionality of the left and right hemispheres in patients who underwent surgical separation of the hemispheres. It should be noted that in such patients, the hemispheres are still connected at the brain stem, and in the lower portion of the brain, so there is a common limbic system. This would permit some communication between hemispheres. Nonetheless, the researchers observed a very real schism in certain psychological functions. Would these prove to be tantamount to a division in consciousness?

We should note that when neuroscientists speak of 'cognition' or 'cognitive function,' they generally mean any higher-order associations beyond raw sensation. They do not use the term 'cognition' in a restrictive sense applied to rational cognition, that is, knowledge of concepts, as understood by classical philosophers. Scientists tend to regard all higher-order manipulations of sensation as degrees of 'cognition' or 'cognitive function,' even when such processes are unconscious or involve no conceptual knowledge. With such terminology, it is unsurprising that they view cognition as a continuum among the various species of animal, and fail to see any sharp distinction between the cognitive abilities of humans and those of lower animals. This perception is attributable to the vagueness of their concept of cognition, as well as their effective downgrading of human intellect by pretending it is reducible to the operations of the sensitive soul, a position I have critiqued in this essay and elsewhere. (See, e.g., Logic and Language.)

In their ordinary lives, the split-brain patients exhibited no apparent loss of cognitive or motor function after their surgery. They were able to read, write, speak, perform athletics and engage in complex tasks in the workplace and in society without any apparent deficiency. It was only when they were subjected to controlled experiments that isolated sensory inputs in their left and right fields of vision that the psychological consequences of their split brain became apparent.

Patients were only able to verbally repeat words that were flashed to their right field of vision, not their left. Similarly, they could only write the recognized words with their right hand, not their left. Since the right side of the body and right field of vision are connected to the left hemisphere of the brain, these results suggested that only the left brain has verbal consciousness. This finding is consistent with the already established locations of the speech center (Broca's area) and language comprehension center (Wernicke's area). Since human consciousness or subjectivity is experienced verbally in adults, only the left hemisphere's activity could be characterized as what we would ordinarily call 'conscious,' while the severed right hemisphere's activity was experienced as 'unconscious' from the perspective of the verbalizing conscious patient.

If this model of the hemispheres' relation to consciousness were also applicable to normal humans, it perhaps might account for the fact that most people feel a greater affinity and facility with the right side of their body, since this side is most intimately connected with the seat of verbal consciousness. This may explain not only why most people are right-handed, but also the concomitant cultural phenomenon of regarding the left side as a relatively foreign entity, perceived as evil (Latin: sinister), unlucky or clumsy. Conversely, those people who have the most extensive connections with their right brain will be less likely to perceive it as a foreign entity. This may account for why people who have high spatial reasoning abilities, and therefore have great facility and familiarity with their right brain, are also more likely to be left-handed. However, left-handed people are not significantly more likely to exhibit overall right hemisphere dominance.

The speech and language comprehension centers are necessary for manipulating syntax and linguistic symbols that, for humans, can represent semantic objects or verba mentalia. It is only these semantic objects, which refer to the essential realities abstracted from determinations, that constitute a truly conceptual language. When neuroscientists speak of 'language' in animals, they refer to gestures and vocalizations that refer to some object. They make no distinction between animal 'language', whose symbols refer to sense objects, and conceptual human language, which deals with verba mentalia. Thus, to them, the syntactic operations of the speech and language comprehension centers suffice to account for human language, when in fact they only account for the manipulation of symbols, not for the comprehension of universal concepts. (This distinction is elaborated in other works, such as Logic and Language.)

The proximity of the so-called 'speech center' (Broca's area) and the language comprehension region (Wernicke's area) to the auditory cortex suggests that these structures originally arose in the context of auditory language. Nevertheless, the operations of these structures are by no means limited to vocal signals, as evidenced by the fact that these same regions are employed in learning and using modern sign languages. This suggests that these regions do not operate on sensations as such, but on a semantic object abstracted from a determinate sensation (e.g., the exact sound of a particular utterance or the exact shape of a particular gesture). Yet a semantic object can be of two kinds: it can be merely a symbol whose referent is a sense-object - some sensible thing, act, or event in physical or mental space - or it can also have a meaning that does not correspond to any object of sense or its image. As far as we know, this latter type of semantics is unique to humans. This higher order of language necessarily requires a broad range of symbols and grammatical syntax in order to give itself expression, yet the complexity of language alone does not bring us to the conceptual order. We have already commented that dolphins have highly complex linguistic symbols, but this is only because of the complexity of the sense-objects they represent. We may further note that an animal's language can have a mental sense-object, such as a feeling or memory, as a referent. This 'thinking about thinking,' or metacognition as it is sometimes called, does not bring us to the conceptual order, so long as the thoughts ultimately have sense-objects as their referents.

In humans, the syntactic manipulations of semantic objects by the speech center are eminently necessary, though not sufficient, conditions for the operations of the rational intellect. Even to a non-materialist, this might suffice to call the speech center a 'seat of intellect' in an improper sense, namely that the operations of that region are most immediately necessary to the operations of the intellect. In the event that there should be a division or lesion in the brain, we might expect that rational consciousness should only have access to the sensitive faculties whose brain regions are connected to the speech center.

In most split-brain patients, as in most humans, the speech center is found on the left side in a region called Broca's area. People with lesions in this area find it difficult to use language, and in some extreme cases, cannot speak more than a few words. They may still understand language with the help of Wernicke's area toward the rear left, and they give evidence of distinctively human rational consciousness, but that consciousness now has much less to work with, making it effectively limited in its operations. As with all brain functions, the operations of the speech center can be re-routed elsewhere in the event of gradual destruction of Broca's area, by a tumor, for example. This extraordinary plasticity of the brain can occasionally allow part of the speech center to appear in the right hemisphere, as we shall see in one famous case.

For the vast majority of patients in the split-brain experiments, researchers established that the sensory awareness of verbal consciousness was confined to the left hemisphere, and the verbally conscious subject had no knowledge of what the right hemisphere was doing with the sensations it received. For example, a patient was asked to reach into a bag and feel its contents in order to identify the object within. When the subject used his right hand, he was able to speak the name of the object, but when he used his left hand, he was unable to do so. Ordinarily, the subject would rely on sight to transmit sensory information to both hemispheres of the brain, but in this controlled experiment, it was confined to one or the other, due to the lack of interlateral connections to the hands and fingers. When the left hand felt an object, the sensory input went only to the right hemisphere, which apparently was unable to relay this information to the speech center on the left.

In another experiment, the split-brain patients were shown distinct images in their left and right fields of vision, such as pictures of a spoon on the left and a baseball on the right. When the subject was asked to say what he saw, he only said "baseball." This result is not too surprising, since the verbally conscious left brain would only be aware of the right field of vision. However, when the same subject was asked to point to an object similar to the image he saw, he grabbed the spoon! While it is expected that the right hemisphere, which controls spatial recognition, should see only the image on the left, it is remarkable that it was able to instruct the left hand to carry out a fairly sophisticated task. Was the right hemisphere able to understand the command independently of the left, or is there some communication between hemispheres going on? After all, supposedly only the left hemisphere is able to understand language, so there would be no way for the right brain to know that it was expected to grab any object at all.

The behavior of the right hemisphere, as judged from this and other experiments, seems to be consistent with what we would call appetition in animals, using classical terminology of faculties. In the experiment just described, there was some apparent visual-motor interaction between the right hemisphere and the left hand, where the sight of an image prompts an associated action, unthinkingly, so to speak, as when we transcribe words or perform other tasks that are so easy and routine that we can do them practically unconsciously. It was able to recognize the command, not because it had verbal consciousness, but because it could recognize the sounds of simple command words, such as 'point,' from past experience, and have an automatic response ready. This would not mean that the right hemisphere is conscious, in the ordinary sense of subjective awareness, but that the unconscious mind is much more sophisticated than we commonly realize. Freud had long ago given proper emphasis to the impressive powers of the unconscious mind, even if many of his particular hypotheses have been since abandoned. The split-brain phenomena may give us some insight into the distinction between conscious and unconscious processes.

The right hemisphere is frequently ascribed 'cognitive' functions, but we must recall that the term 'cognitive' in neuroscience is quite broad, encompassing everything from true ratiocination down to mere sensory associations. Nonetheless, the independent capabilities of the right hemisphere are fairly impressive. In one experiment, a split-brain subject was asked to draw a picture corresponding to a compound of words he was shown. Images of the words 'fire' and 'arm' were flashed in the left field of vision, and the subject's left hand drew a picture of a rifle. This and similar results seemed to show that the "unconscious" right brain was somehow capable of understanding language.

Closer inspection proved that the right brain could not process language as freely as the left. It could match words to pictures, spell and rhyme, and even categorize objects. It was good at recognizing words, and had an extensive vocabulary, but it had no significant syntactic or grammatical ability. These findings are consistent with our hypothesis that the right brain merely recognizes key command words, prompting an automated response based on past experience or association. This sort of ability is comparable to that of dogs and other animals trained to obey verbal commands without any conceptual understanding. Years later, it became evident that even the modest right-brained linguistic abilities of the split-brain patients were highly atypical, as the right hemisphere in most people cannot process even the most rudimentary language.

The rare cases of primitive linguistic ability in the right brain are nonetheless instructive, since they give us a precious opportunity to ask questions of the unconscious directly. One of the original split-brain experiment subjects, J.W., developed an ability to speak from his right hemisphere thirteen years after surgery. In another anomalous case, discovered by Kathleen Baynes of the University of California, Davis, a subject spoke only with her left brain, but could write only with her right brain. This suggests that writing (and possibly similar activities such as typing) is different from what we do when understanding and using language, whether spoken or gestural. Indeed, those who are proficient in writing find they do it unthinkingly. In other words, we consciously think in terms of spoken words in our mind, but some learned automated spatial-motor process helps us translate these words into their written form.

One remarkable split-brain patient, Paul S., happened to be one of the rare individuals with some speech ability in his right hemisphere even before his surgery, so the researchers were afforded a unique opportunity to receive verbal responses from the right brain. Some of the more notable distinctions are described at: http://www.macalester.edu/psychology/whathap/ubnrp/Split_Brain/Behavior.html

Paul's right hemisphere stated that he wanted to be an automobile racer while his left hemisphere wanted to be a draftsman. Both hemispheres were asked to write whether they liked or disliked a series of items. The study was performed during the Watergate scandal, and one of the items was Richard Nixon. Paul's right hemisphere expressed "dislike," while his left expressed "like." Most split-brain patients would not be able to express the opinions of their right hemispheres as Paul S. did, but this gives us insight on the hidden differences between the hemispheres.

Paul S. answered questions by writing with his left hand or his right, which the researchers interpreted as reflecting the expression of his right brain and left brain, respectively. Yet we have seen from the UC Davis case (which was observed many years later) that the ability to write can be independent of speech. We note that the right-brained answers tend to be more uninhibited. The fully conscious left brain wants to be a draftsman, yet the right brain harbors a whimsical desire to be a race car driver. The conscious mind supports Nixon as only a staunch political ideologue could at that time, while the unconscious feels a visceral aversion to the President. The distinctions between hemispheres are not in the intellectual realm, but in desires and preferences; in a word: appetition.

We have noted that appetition is a faculty that can be found in a non-rational sensitive soul, which can be unconscious (appetitus naturalis) or conscious (appetitus elicitus). There can also be a truly rational appetite or will, which deals in the conceptual order. The appetites shown by the right brain of Paul S. were all of likes and dislikes, which belong to the sensitive appetitus elicitus, more specifically, the appetitus concupiscibilis. The objects of these appetites are purely sensitive, as they consider whether a certain thing is pleasant or unpleasant, attractive or unattractive.

Although the split-brain surgery prevented certain processes of the sensitive soul from communicating with each other, both halves appeared to be able to relay their sensations to a conscious subject. Both the left and right brain answered the question, "Who are you?" with "Paul". When asked "Where are you?", both replied "Vermont." Can we say, then, that there were two conscious subjects? Were both consciousnesses of the rational order?

I suggest that the two "consciousnesses", both of which identified as "Paul," were two manifestations of the same conscious subject. This is not as bizarre as it sounds; in fact, we all experience multiple manifestations of ourselves. I am myself in my dreams, yet as another manifestation of myself, who is often ignorant of my waking self, just as my waking self is ignorant of dreams that took place at night. The proof that the dreaming self and the waking self are one and the same is when I wake up in the middle of a dream and experience complete continuity of self. I was the same person when dreaming, but in an altered state with respect to what was processed in my sensitive faculties. It was the same 'me' operating with a completely different set of assumptions about what is real, being informed by imagination rather than external senses.

As far as Paul S. ever knew or claimed, he was one person, not two. We cannot know for sure unless we could reunite the hemispheres and see if there was an experience of continuity. Both sides knew he was Paul, which may suggest continuity of intellect, or that certain memories were common to both hemispheres. Memory, we recall, pertains to the sensitive soul, not intellect as such, since it makes use of phantasms of sense-perceptions. Therefore, continuity of intellect need not imply continuity of memory, and in fact we regularly experience discontinuity of memory between our dreaming and waking states. Thus there is nothing in the special case of Paul S. that is inconsistent with the supposition of a single subject existing in different manifestations, each of which is exposed to different sensitive processes.

In a materialist interpretation, such as that proposed by Gazzaniga himself, the 'consciousness' of Paul S. was literally cut in two by a surgeon's blade. Even if we were to adopt the materialist premise that consciousness is nothing more than neural signals, it is hardly probable that such an elaborate aggregate phenomenon could survive an arbitrary incision. This would be like cleaving a computer in two, or excising half the code from a software program, and still expecting there to be at least basic functionality.

If nothing else, the split-brain experiments would seem to suggest at least that the sensitive faculties can be divided from each other. Does this mean that subjective consciousness of a purely sensitive soul, such as exists in non-human animals, might be capable of division? We know that flatworms can have their heads split laterally, and each half will regenerate a corresponding half, resulting in two heads, but while flatworms have a sensitive soul, it is unlikely that they experience subjective consciousness. Among the higher animals, such as canines, primates, and cetaceans, which certainly have subjective consciousness, there is not nearly the same degree of lateral specificity as in humans, making a division of their consciousness impracticable. Experiments on split-brain chimps reveal that much more communication survives between hemispheres (e.g., visual information through the anterior commissure) than in humans (who do not use the commissure for this purpose), again revealing the perils of making inferences from homology. Given our weak understanding of animal subjectivity even in normal states, it is unsurprising that we should lack conclusive evidence as to whether it can be split.

Apart from the peculiar case of Paul S., the other split-brain patients all showed a single manifestation of rational subjectivity. However, their right-brained sensitive appetitions were able to act in opposition to conscious desires.

One patient found his left hand struggling against his right hand when trying to pull up his pants in the morning. While the right hand tried to pull them up, the left was trying to pull them down. On another occasion, he was angry with his wife and attacked her with his left hand while simultaneously trying to protect her with his right!

Here the conscious self is at odds with the sensitive appetites of the right brain. Note that in the second incident, it is the rational self who tries to protect his wife, even though he is angry, against the unconscious impulse of the left hand. The rational soul is able to override or suppress the suggestions of appetite and emotion. Recall that the limbic system is shared by both hemispheres, so both sides of the brain manifested the emotion of anger, yet only the conscious self was able to exercise restraint, while the unconscious right hemisphere acted on impulse. This is not too strange, considering that we commonly experience a tension between our conscious desires and our sensitive appetites and emotions. The only difference is that ordinarily we do not see this conflict manifested bodily, where one hand struggles against the other. This is because, under normal circumstances, both hemispheres interact so a common course of action is decided upon. In the split-brain patients, the right hemisphere is "on its own," without any rational feedback, so it acts as would a lesser animal.

The existence of a rational intellect in humans does not imply that all sensitive and vegetative functions must be directly subjected to it. We often cannot control our emotions any more than we can consciously regulate our heartbeat. The novelty of the split-brain patients is that fewer sensitive operations than usual are subject to the government of rational consciousness. As the sensitive faculties are indeed mediated by neural processes, any disruption in these processes can create a schism among the faculties. Only those faculties linked to a verbalizing consciousness can be governed by rationality, since human intellect depends on linguistic objects to give access to concepts.

One non-rational faculty that manifests most strongly in the right hemisphere, namely imagination, is especially useful to the intellect. As the name suggests, imagination deals with visual images (as confirmed by the linkage of imaginative activity to the visual cortex), manipulating them in creative, non-linear ways that do not necessarily follow any logical or syntactic structure. It is this very absence of rhyme or reason that makes imagination a potent creative force, suggesting things to the intellect that would not have occurred to it had it plodded along through rigorous ratiocination. We can see how impoverished our intellect becomes by examining the effect of hemispheric separation on dreams. The psychoanalyst Klaus Hoppe studied twelve split-brain patients. He found:

The content of the dreams reflected reality, affect, and drives. Even in the more elaborate dreams, there was a remarkable lack of distortion of latent dream thoughts. The findings show that the left hemisphere alone is able to produce dreams... Patients after commissurotomy reveal a paucity of dreams, fantasies, and symbols. Their dreams lack the characteristics of dream work; their fantasies are unimaginative, utilitarian, and tied to reality; their symbolization is concretistic, discursive, and rigid.

Without linkage to a well-developed faculty of imagination, the verbally conscious left brain was left with a barren dream life, pedantically literal and physicalist, lacking the spontaneity and non-linearity that can lead to meaningful insights. While imagination is, strictly speaking, non-rational, reason is poorer without it.

Conversely, the sensitive faculties, when separated from the intellect, are diminished in their abilities. The left hemisphere is able to solve mathematical problems even without the vast computational power of the right brain, while the right brain, or "math" and "spatial" side, is powerless to solve such problems on its own. This suggests that our modern notion of "intelligence" being equivalent to computational ability is entirely wrongheaded. It is only when the sensitive faculties are suffused with rational consciousness that they become engaged with a knowing intellect.

We have observed that a single rational consciousness can have multiple manifestations, according to the configuration of sensitive faculties to which it is linked. Apart from the extreme split-brain scenarios, most people experience alternate versions of themselves in dreams, or under the influence of certain medications or drugs. Less common in modern society, though fairly regular in more traditional cultures, are religiously altered states, such as trances, visions, and other mystical experiences. What all of these diverse experiences have in common is that their subject may truly say of them, "I was myself, yet I was not myself." In other words, we recognize the same subjective "I" in these other experiences, yet much of our sensitive personality, namely our appetites and perceptions, is quite different from our normal waking state.

In the case of drug-induced hallucinations, some chemical substance causes the imaginative faculty to generate bizarre images, which are presented to the rational consciousness. Since these images often have the same vivid quality as direct sensation, the intellect reacts as though it were immersed in a bizarre reality, since, as far as it knows, the images presented to it represent real sensations. A similar phenomenon can be induced by electrical stimulation of the proper areas of the brain. To those who appreciate the distinction between sensitive and intellectual faculties, there is no difficulty admitting that sensitive phenomena can be induced by physical stimuli. Those who argue that such phenomena constitute "scientific proof" of psychological materialism fail to distinguish between the sensitive and intellectual psychological faculties, and assume that the only alternative to materialism is a strong Cartesianism.

An intriguing phenomenon observed among split-brain patients was a tendency by the subjects to "rationalize" their disconnected experiences. In one experiment, the command "laugh" was flashed to a female patient's right brain, and she obligingly laughed. She was perhaps able to do this due to the unity of emotional experience even in split-brain patients. Her verbal consciousness, however, had no way of knowing why she laughed. When asked, she replied, "Oh, you guys are really something." This was clearly an artificial rationalization, but the subject apparently was confident this was the real reason for her laughter. In another instance, she seemed to be aware that she was rationalizing. When the command "rub" was flashed to the right brain, she started to scratch her arm. When asked what the command was, she looked at her hands and said, "Oh... itch." It was as if she realized, "Oh, I'm scratching myself, therefore the command must have been 'itch.'"

In another experiment, a man's left brain (right field of vision) was shown a chicken claw, while his right brain (left field) was shown a snow scene. The man was then asked to point to a picture of an object that was related to the image he had been shown. He pointed to a picture of a chicken with his right hand (left brain) and a shovel with his left hand (right brain). When asked to explain his choice of the shovel, he said he picked it because "you have to clean out the chicken shed with a shovel." The real reason for his choice, of course, was the snow scene presented to his right brain, unbeknownst to his verbal consciousness. Since the man was evidently convinced of the veracity of his explanation, this "rationalizing" must have taken place at an unconscious level, and what was at best a guess was presented to the consciousness as fact. Apparently, we cannot be too confident in our subjective interpretations of why we perform certain acts. Could free will, then, be an illusion? We will examine this possibility more carefully in a later section.

Intriguingly, although such rationalizations undermine our confidence in the power of the conscious mind over the unconscious, there nonetheless is a real freedom that characterizes these artificial interpretations. It turns out that false memories are only produced in the left hemisphere, in the prefrontal cortex, an area that is considered to be a seat of executive function. The consciousness of the left brain will sometimes confidently claim to have seen things that it had not actually seen. The right brain, by contrast, never embellishes its memories. It is not that the consciousness manifested in the left brain is a liar or a deceiver, but rather it has a need to interpret experiences. Why can't it just say, "I don't know"? I submit that our very mode of consciousness is interpretive.

This interpretive operation appears to be a distinctively human characteristic. George Wolford of Dartmouth explored this propensity in experiments where a light would flash randomly at the top or bottom of a computer screen, appearing on top 80% of the time. Human subjects would try to figure out the sequence, not realizing it was random, and ended up guessing the correct location of the next flash only 68% of the time. They would have done better had they just always chosen the top. Rats and other animals, by contrast, learned to maximize their success rate by simply always pressing the top button, rather than trying to figure out any pattern or meaning. Much like the human right brain, animals do not try to interpret experience or figure out what events mean.
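The 68% figure is exactly what the arithmetic of "probability matching" predicts: if flashes appear on top with probability 0.8, and a subject matches that frequency in his guesses, his expected accuracy is 0.8 × 0.8 + 0.2 × 0.2 = 0.68, whereas always choosing the top yields 0.8. A short simulation illustrates the point (this is an illustrative sketch of the two strategies, not the actual Dartmouth protocol):

```python
import random

random.seed(0)
TRIALS = 100_000
P_TOP = 0.8  # the light flashes on top 80% of the time

flashes = ["top" if random.random() < P_TOP else "bottom" for _ in range(TRIALS)]

# Strategy 1: probability matching -- guess "top" 80% of the time,
# as the human subjects implicitly did while hunting for a pattern.
match_hits = sum(
    ("top" if random.random() < P_TOP else "bottom") == f for f in flashes
)

# Strategy 2: maximizing -- always guess "top", as the rats learned to do.
max_hits = sum(f == "top" for f in flashes)

print(f"matching:   {match_hits / TRIALS:.3f}")  # approx. 0.8*0.8 + 0.2*0.2 = 0.68
print(f"maximizing: {max_hits / TRIALS:.3f}")    # approx. 0.80
```

The simulation converges on roughly 68% for the matching strategy and 80% for the maximizing strategy, confirming that the interpretive search for a pattern is, in this task, strictly disadvantageous.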

The Dartmouth experiments were designed to trip up our interpretive nature, to see if we would interpret even if it were disadvantageous to do so. The fact that human subjects continued to interpret phenomena no matter what, much as the split-brain patients seemed compelled to make sense of their unconscious behaviors, suggests that such interpretation is essential to human rational consciousness. In some cases, this interpretive faculty can obscure external physical reality, as when the left brain performs more poorly than the right at identifying optical illusions. The right brain is ordinarily "conscious" only in the sense of being "aware" of sensory data, but it is only the left brain that manifests the interpretive operations that are a characteristic of rational human consciousness. In order to interpret, we must be capable of comprehending meaning, so only a rational soul is capable of truly interpretive acts.

If our interpretive faculty is indeed biologically useful, we should expect it to be right much more often than it is wrong. Yet it has become the fashion among some scientists to disparage the interpretive faculty as a mythmaker, and use it to explain away as fiction those aspects of human experience that do not comport with their materialist worldview. Part of the reason for the popularity of this attitude among scientists is the conceit of objectivity that has been entertained since the nineteenth century. In this view of science, a researcher merely observes the data (that which is "given") and does not impose his own subjective interpretation on them. Instead, he tries to understand reality as it really is, which is intrinsically purposeless and meaningless. This ideal of scientific objectivity can only be seriously entertained by the philosophically naive, and it is no accident that it arose around the same time that rigorous metaphysical philosophy began to disappear from university curricula. Serious philosophers understand that all human observations are theory-laden. We impose ontological categories on our sensory observations, identifying splotches of color as corresponding to various physical objects and properties. The whole notion of "understanding" anything necessarily entails conceptualization, which requires us to grasp the meanings of ideas. If the "objective" world were truly the only reality, then it would follow that all human intellectual activity, including that of science, is just so much useful fiction. If we were to take this strictly instrumentalist approach to science, consistency would require scientists to relinquish their claim to be able to teach us anything about reality. Once that is admitted, science cannot prove that "objective" reality is the only reality, or anything else about reality.

In practice, materialist scientists only selectively apply skepticism toward human subjectivity, when it yields data they wish to disregard. The diagnosis of schizophrenia or hallucination depends on whether the scientist feels that a reported imagining is "healthy" or "unhealthy," based on his culturally defined limits of acceptability. Uses of the imagination deemed "unhealthy" will be treated with antipsychotic drugs, which suppress sensitive activities, often making the patient feel listless and apathetic, sometimes even in a stupor. The person is now "healthy" because he does not experience anything that the doctor or his culture find unsettling. This boundary between healthy and unhealthy can become especially controversial when dealing with religious or paranormal experience. The tendency among materialists is to pathologize as "hallucinatory" any experience that does not correspond to an external physical object. Yet if we were to take this physicalist definition of reality seriously, we should also have to pathologize the composition of poetry and music, as well as the abstract speculations of theoretical physics, mathematics, and philosophy.

Of course, materialism itself is a particular interpretation of reality, which is sometimes imposed by psychiatrists on their patients through antipsychotic drugs. These medications tend to suppress any imaginative impulses, which might be helpful if the person is oppressed by his hallucinations, but can also have the effect of removing the poetry of life, making everything seem dull and dreary. In such cases, materialist doctors have imposed on the patient a reality as bleak and limited as that in which they choose to believe. We here strike upon a delicate issue, as doctors must decide whether or not to treat a patient based on whether they consider an experience deleterious, which is often a culturally subjective assessment.

The imaginative, almost dream-like construction of religious insights has proved to be an anthropologically universal phenomenon, seen in diverse cultural contexts throughout history. The determinate cultural conditions of a group of people help provide the symbolic imagery that forms the content of religious myth, but it would be as much a mistake to regard myth as fiction as it would be to regard poetry as fiction. Though mythology, in general, should not be taken as a physically literalist account of reality, it nonetheless can provide real insight into the meanings behind reality. Often the symbols of mythology seem bizarre or even grotesque to those outside a given cultural milieu, which is why in general it is best for a people to pursue the religion that has developed organically with their other local traditions. Nonetheless, the insights behind different myths and mystical visions often speak to universal truths of moral and cosmic significance, and the fact that so many people arrive at them independently suggests that they are truths as real and valid as those of mathematics or philosophy.

Our need to interpret is not so much a compulsive task we perform as our mode of being. As such, it is eminently natural for us to seek answers to ultimate questions, which is the religious impulse most broadly considered. Even atheists are religious in this sense, for they too claim to have a theory that answers ultimate questions. The tendency toward religion, which is unique to humans, is not simply some arbitrary biological impulse accidentally tacked onto our nature, but the logical conclusion, apex, and fulfillment of our essential nature as human beings, which is to constantly interpret our experience. Any attempt to suppress the religious impulse would show disdain for human nature itself in its interpretive aspect, as if the physical literalism of the right brain were the only locus of reality. Ironically, for all their pretensions to intellectual superiority, materialist psychologists would have us regress to the right brain's notion of reality, dismissing the left brain's imaginings as wishful thinking. It is true that the left brain is capable of error, but this is not sufficient cause for refusing to speculate altogether. We cannot be so afraid of being wrong that we will not hazard to imagine what the truth may be.

The imagery of poetry, religion, myths, and imaginative dreams need not be taken literally, but it would be a mistake to infer that there is no truth in them, as if truth were to be found only in physical reference, and not in intensional meaning. If philosophical materialism were to be taken seriously, we would be left with an impoverished life, devoid of poetry, religion, or art, those useless things that make life worth living. Indeed the idea of a "life worth living" is unintelligible in a materialistic context, except perhaps by some bland calculus of pleasure and pain. I know that atheists do not lead bleak lives, though they say bleak things, which proves to me only that atheistic materialism is merely a position to hold in an argument against a particular religion or worldview, not something that many people really believe. How could it be otherwise, since, being human, it is in everyone's nature to find meaning in all experience?

These interpretive functions and their cultural manifestations have no analog in any other animals. They are distinctively human. Interpretation on the conceptual order belongs to the rational intellect, which requires verbalization in order to give interpretations definite form. For this reason, the intellect may be considered most intimately related to the speech center, though it would be improper to say that interpretation is located there, since the act of interpretation, being conceptual, cannot be confined in space. The materialist, nonetheless, shows little concern for the categorical coherence of his ontology, blithely assuming that philosophy must conform to neurology, and therefore holds that even interpretation can be mapped onto the brain. For those of us who comprehend something of the ontology of concepts, a strictly neurological approach to the mind not only fails to provide many answers, but it does not even understand the questions.

[For further discussion, from a quasi-materialist perspective, see: Neurodynamics of Personality, by Grigsby and Stevens; Understanding Consciousness, by Max Velmans. The discerning reader should now be able to distinguish between what is empirically established and the faulty philosophical interpretations of scientists.]

Rational Will and Its Freedom

The split-brain experiments, the Dartmouth study, and other investigations have shown that our interpretive faculty is capable of inventing rationales for our actions that are unrelated to their actual unconscious motivations. In the cases of false memories and rationalizations of intent, the subjects seem thoroughly convinced that their interpretation of their experience is correct, even though we know empirically that it is false. If a person can be so thoroughly deceived about the motivation for his decisions, might not rational free will be an illusion after all?

First, we must note that the falsely interpreted processes discussed above were all extrinsic to the interpreting rational consciousness. This means we have only shown that the will does not control everything in the mind, not that it controls nothing. Further, if the rational will were truly impotent to do anything besides give post hoc rationalizations of what the brain does, it would be biologically useless, making nonsense out of any attempt to give the will an evolutionary explanation. One may argue that interpretation helps us to know what to do in future scenarios, but this knowledge is only useful if it can be translated into action, which would mean that the rational will can be an agent after all. Although the rational consciousness may not always correctly interpret the actions of the unconscious mind, it is fully competent to apprehend the free nature of its own volitional acts, and then see its will translated into action.

Understandably, the reality of free will is a subject of acute personal interest to human beings. After all, if free will is an illusion, I am effectively an illusion. In other words, if I have no real control over any of the thought processes that produce my actions, but only rationalize them post hoc, why should I (or anyone else) be taken seriously? On that view, the rational consciousness is just an impotent bystander inventing plausible interpretations of why the sensitive faculties do what they do. Instead of the chief executive of the body, the will is just a spectator. If this were really the case, we should not trust any of our ratiocinations about external reality, including our interpretations of scientific experiments suggesting that free will is an illusion. A denial of free will is a thoroughly self-stultifying position. That being said, shall we not at least consider the evidence?

The first thing we must consider is the integrative function of neurological systems and their associated psychological faculties. We have already discussed the integrating role of consciousness, which takes disparate sensations and brings them together into a unified experience. The philosopher Henri Bergson would also add that consciousness is integrated across time, not merely in the sense of integrating sensations over some finite time frame and making it seem like "the present," but rather our very sense of self is integrated over time. I see myself not as a succession of events, but as a single continuum that is constantly unfolding or unrolling. Each experience adds to me without taking away from previous experience. For this reason, the very notion of subjective consciousness entails an integration over time.

Yet integrative functions can also be found among the sensitive faculties. The somatosensory cortex, for example, needs to be able to distinguish signals from the arms, the legs, and so on, and put them together in some virtual spatial distribution. It is only when the raw sensations are linked to our spatial processing that we can begin to make sense of them. Without such integration, we could have only primitive responses to stimuli; e.g., "the signal from this part is pain; move that part (whatever it may be)." In order for an animal to have a sense of which body part is giving a sensation, it needs some spatial matrix in its mind. As we have noted earlier, this is similar to the Kantian claim that space is just the form of our sensibility.

Human beings have a subjective consciousness associated with their frontal cortex. The conscious mind integrates the results of the various sensitive faculties, though it is often blind to the lower processes themselves. Our conscious minds do not see how the optic nerve or visual cortex process sense data; we are only presented with a spatially integrated result. For sensitive faculties, consciousness is often blind to process, seeing only results.

Several experimental investigations in neuropsychology have suggested that this might also be the case for voluntary action! In other words, we only see the outcome (what has been decided), but the real decision-making process is entirely unconscious. This is a dubious analogy with the sensitive faculties, for while I admit that I cannot directly perceive how my eye works, I do directly apprehend my will in action, if I can apprehend anything at all.

In the case of the sensitive faculty of vision, I may subjectively perceive that light goes directly through my eye to my mind, as if there were no intermediary, because I do not perceive the intermediary. Thus I think that I see when I open my eyes, as if the images came directly into my mind. I am blind to the processes of the retina, optic nerve, visual cortex and visual association cortex, and only see the end result of these processes. Perhaps I am also blind to most of my volitional faculty as well, and see only the end result.

Such an analogy is flawed, however, because sensation is passive, in the sense of being something I receive, while volition is an active faculty emanating from my subjective consciousness. While it is possible for many intermediaries to intervene between external stimuli and my subjective experience of sensation, no such possibility exists for volition, since my subjective consciousness is its source, and I clearly perceive the initiation of my own act.

If "conscious me" is actually the slave of some hidden "me" deep within my brain, this would imply that "conscious me" has no responsibility for any deeds or thoughts, and is somewhat superfluous except as a sort of circuit breaker to override what was decided behind the scenes. This is precisely the role of conscious volition suggested by experiments that have been interpreted as disproving the agency of conscious volition. The results suggest that we only have a 100-150 millisecond window to make such an override. This flies in the face of experience, however, as we can resist urges for extended periods of time. It would seem that these discoveries only apply to certain classes of volitional acts, or perhaps to acts that are not truly volitional at all.

Most of the supposed experimental tests of free will do not examine rational volition, but a sensitive appetitus elicitus. Subjects are asked to perform tasks such as pushing a button or moving their hand whenever it pleases them, or whichever hand pleases them, without any rational basis for preferring one over the other. This is supposed to be the quintessential measure of a free act, since it is absent of any extrinsic constraint. This notion of freedom is informed by modern liberal notions of liberty emphasizing the lack of external constraint, but has nothing to do with the freedom of the rational will, which is free because it is posterior to understanding. The "free will" of the free will experiments is nothing more than sensitive appetition, which is not free, but is determined by endogenous conditions in the animal mind. The great philosophical and religious traditions have recognized that man is least free when he does what he pleases, for then he is a slave to his appetites or animal nature. The free will experiments, as we shall see, prove only that the sensitive appetite is not free, and when we have no rational basis for our decisions, we become slaves to our sensibility.

The German researchers Kornhuber and Deecke (1965) were the first to observe neurophysiological potentials preceding voluntary action. Subjects were told to move their wrist at a time of their own choosing. Using EEG, the researchers found that a negative electrical potential consistently appeared 800 milliseconds before a subject moved his wrist. It remained uncertain, however, how much time elapsed between the subject's awareness of his decision and the motion of the wrist. A more precise method of measuring the time of conscious decision was needed to test whether this "readiness potential," as it would come to be called, unconsciously predetermined the outcomes of apparently voluntary acts.

In 1977, the neurophysiologist Benjamin Libet (1916-2007) conducted a more sophisticated version of Kornhuber and Deecke's experiment, in what would be the beginning of a long career exploring the nature of free will. Libet was able to measure the time of conscious decision-making more precisely by having subjects sight a revolving oscilloscope tracer and identify its position on a clock face. This method removed any delay caused by the time needed to communicate verbally or execute a motor response. Using it, Libet determined that the readiness potential (RP) preceded conscious decision-making by 500 milliseconds, or half a second.

It should be noted that Libet's experiment actually puts a lower bound on the time between the readiness potential and conscious decision-making. If some time is required for the subject to process visual information from the oscilloscope, then his consciousness is actually seeing the tracer as it was in the recent past (say, 50 milliseconds ago), which would mean the actual time of conscious decision-making was slightly later than what is reported (in this case, 550 milliseconds after RP). Thus, if anything, Libet's experiment may slightly underestimate the delay between RP and conscious decision-making, so there is no getting around the reality of at least a 500 millisecond delay.
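The lower-bound reasoning can be made explicit with a toy calculation. The 50 millisecond perceptual lag here is an illustrative assumption, not a figure Libet measured; the point is only that any positive lag pushes the true decision time later than the reported one:

```python
# Illustrative timeline, in milliseconds, relative to readiness potential onset (t = 0).
RP_TIME = 0               # readiness potential begins
REPORTED_DECISION = 500   # decision time as read off the oscilloscope clock face
PERCEPTUAL_LAG = 50       # assumed time to visually process the tracer's position

# If the subject sees the tracer as it was PERCEPTUAL_LAG ms in the past, the
# clock position he reports corresponds to a moment earlier than his actual
# decision, so the actual decision came later than reported:
actual_decision = REPORTED_DECISION + PERCEPTUAL_LAG

print(actual_decision - RP_TIME)  # 550: the 500 ms figure is a lower bound
```

Whatever the true perceptual lag, it can only increase the delay between RP and conscious decision, never decrease it, which is why the 500 millisecond figure stands as a minimum.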

Yet there was more to the story. In a later experiment, Libet asked subjects to consciously veto any urge to move their hand at the last instant. When subjects did this, the readiness potential dissipated. It seemed that subjects had a window of only 100 milliseconds in which to exercise this veto; any longer delay and the movement would occur.

Libet interpreted these results to mean that conscious volition is limited to vetoing actions proposed by the unconscious mind. Such an interpretation is philosophically problematic, since there is not always a clear distinction between doing and not-doing. Sometimes "not-doing" something entails "doing" the contrary. Libet also failed to consider that his experiments might have highly limited applicability, only to certain classes of decisions. Certainly, the "decision-making" involved in this experiment was fairly mindless. The rational decision to participate in the experiment according to the rules proposed had already been made. Now, the subject was relegated to imposing an arbitrary time delay on his actions, with no basis for choosing other than his sensitive appetite. It is hardly surprising, then, that the conscious mind in this scenario merely reacts to urges proposed to it.

These and later experiments invited a host of diverse interpretations. At one extreme, deterministic materialists argued that the experiments disproved the existence of free will, or at least relegated it to a rubber stamp of unconscious decision-making. At the other end, many challenged Libet's timing method (a criticism we have already addressed), or suggested that readiness potential corresponded to a quickening of attention, rather than an actual determination of outcome.

Others suggested that the 500 millisecond delay might be explained as the time required to "report the awareness" to oneself linguistically. This appeal to metacognition, here interpreted as "awareness of being aware," is logically problematic. If I am not aware that I am aware, how can I be said to be aware at all? One could say that I am "aware" of sensory input, in the way the irrational sensory cortex of an animal may be said to be "aware," but this is just another way of describing what is ordinarily called the "unconscious" or "subconscious" mind, so such a position hardly differs from Libet's except in nomenclature. Yet such "unconscious awareness," if we accept such a malapropism, can only relate to extrinsic stimuli, not to intrinsic action arising from subjective consciousness itself. If I "consciously" make a decision before I am aware that I have made a decision, I can hardly be said to have made the decision consciously. This notion of metacognition as "awareness of awareness" is logically incoherent and unhelpful.

This unworkable notion of metacognition should not be confused with the more coherent notion of "thinking about thinking," that is, regarding a thought qua thought as an object of thought. It is conceivable that there could be a time delay between making a conscious decision and then thinking to oneself verbally, "I have made a decision." Yet this does not deny that we were aware of the decision at the time it was made, even before we reflected on what we had done. Further, Libet's experiment did not require any such metacognitive self-reflection; the subject only had to recognize the tracer's position at the time of being aware of the decision. If we deny that we can be aware of a decision at the exact instant we make it, we effectively adopt Libet's position that the moment of decision precedes the moment of consciousness.

Libet himself was unwilling to abandon a belief in free will. He recognized that the freedom of our conscious will to perform acts is the underpinning of moral responsibility. If it is not a free, conscious will that causes our apparently voluntary acts, then it is senseless to punish criminals for acting badly, or to reward people for acting well. Deterministic materialists, we have noted, ignore the self-stultifying consequences of denying free will. Foolishly spitting into the wind, they deny the spiritual aspect of the human being even though this entails that they themselves are little more than dumb beasts, with no more right to life or liberty than a nematode.

As Isaiah Berlin observed, in the absence of free will, our consciousness would be a prisoner, helplessly observing events it is powerless to control or direct. One textbook in the philosophy of mind, written by a functionalist, responds to this criticism with the example of a woman sunning herself at the beach. A chain of unconscious processes prompts her to become aware of her pleasure, and to make her desire to turn over to the other side. Another chain of unconscious processes gratifies this desire, and she receives a further experience of pleasure. The author concludes rhetorically: "And she is supposed to be in a prison?" This line of argument illustrates the bestial level of freedom understood by functionalists and others who see no need for free will. To them, the experience of animal pleasures is sufficient recompense for real freedom. Like the fleshly-minded people disdained by the great philosophical and religious teachers of history, they would enslave themselves to their sensibility rather than live according to volition that follows understanding. The pursuit of pleasure is what Aristotle called "the life fit for cattle," and even the utilitarian John Stuart Mill admitted that it is better to be a dissatisfied human than a satisfied pig, yet the summit of wisdom for deterministic materialists is to admit that no higher life for human beings is possible or desirable.

Since Libet appreciated the absurdity of denying free will, he sought an account of free will that would be consistent with his experimental findings. His interest in the subject had been aroused by his observation that an electrical stimulus in the brain needs to persist for at least 500 milliseconds in order for the subject to become conscious of it. This period coincided with the delay he would observe between the readiness potential and conscious decision-making. These results suggested that perhaps conscious experience somehow lags actual decision-making by half a second. Libet postulated that we subjectively backdate our consciousness, so that it seems to us that we are deciding half a second earlier.

Yet such a supposition entails serious temporal and causal paradoxes. How can I subjectively go back in time? It is one thing to say that I experience sensations that were received earlier, but nothing short of time travel, and all the paradoxes it entails, would allow me to retroject my subjectivity back in time. For example, suppose, during a match of tennis, an approaching ball prompts my subconscious to build a readiness potential that "decides" to return the volley. It takes, say, 50 milliseconds to process the visual information, so the readiness potential is actually "deciding" about something that happened in the recent past; no paradox there. Now, we further suppose that I become consciously aware of the readiness potential's decision another 500 milliseconds later, consistent with Libet's experiments. According to Libet's postulate, I will subjectively experience that I had made the decision 500 milliseconds earlier. Yet this postulate is contradicted by the fact that the subjects identified the moment of decision-making half a second after RP, according to their sighting of the tracer. Thus they are conscious in real time, as indeed it could not be otherwise, since the whole concept of "now" is defined by subjective conscious experience. If we are conscious in real time, we cannot subjectively experience living half a second ago any more than a year ago, except as a memory. I might somehow project the sensory experiences of the recent past into the present, but I cannot retroject my present consciousness into the sensory matrix of the past; that would be time travel, pure and simple.

Further, Libet's assessments of the time required for consciousness certainly do not have general applicability, as they are contradicted by a host of ordinary voluntary actions that require a much faster response time. In the example above, a half-second delay would make for incompetent tennis play, and it is hardly credible that voluntary decision-making is not involved there. Even something as ordinary as driving requires constant vigilance and split-second reactions and judgments in real time. Libet's contention that an electrical stimulus must persist for half a second in order for the subject to become conscious of it is routinely contradicted by simple actions such as rapidly tapping on a table and sensing each tap, or, to take a more voluntary action, typing text, which involves both voluntary and subconscious processes. I decide what I want to write, my learned sensorimotor skill of typing kicks in, and I see what I am typing, in comparison to what I am thinking, much faster than half a second. I can catch and correct typos just as quickly. Libet was wise not to interpret his experiment as denying free will, for the evidence of rapid voluntary action is superabundant, and it would be absurd to dismiss a world of evidence on the basis of a putative experimental result. Yet his attempts to harmonize free will with his results were unhappy, forcing him to make ever stranger suggestions.

Later in his life, Libet proposed a conscious mental field (CMF) in order to reconcile the reality of free will with his experimental findings. He found it paradoxical that human beings can rapidly make free decisions even when they evidently do not have enough time to respond to neural phenomena. This seemed to imply that consciousness can act even before it receives neural signals, which would require the supposition of a non-neural aspect to consciousness. This was a big leap for Libet, who in his youth had believed in deterministic materialism. Ironically, his studies, which were used by many to support psychological determinism, led Libet himself to turn to an account of the mind that was not entirely reducible to neural activity. Neural activity may be necessary, but it is certainly not sufficient for human consciousness, so Libet postulated a mental field that transcends neural activity and can become consciously aware of phenomena that are neurally disconnected. Neural connections were needed for sensory and motor activity, as well as information processing, memory, emotion, and other sensitive faculties, but Libet held that subjectivity came from the CMF. He still retained the modern prejudice against any spiritual notion of the soul, erroneously equating it with the straw man of "dualism." Instead, he saw the CMF as a physical phenomenon emerging from brain activity, yet greater than the sum of all neural activity. We will later examine the notion of "emergent phenomena," as used in the philosophy of mind, more critically.

Strangely enough, Libet was able to propose an empirical test for his hypothesis of an ethereal conscious field. A slab of living cortex could be surgically isolated from the rest of the brain and given its own independent blood supply. One could then test whether the patient had subjective consciousness when the isolated cortex was electrically stimulated. A positive result would show that consciousness does not depend utterly on neural connectivity, corroborating the CMF hypothesis. The ethical legitimacy of such an experiment is dubious, to say the least. Libet suggested that it could be conducted ethically on epileptic patients, who sometimes require such major surgery in order to prevent seizures. Nonetheless, he never made much effort to attract a neurosurgeon to participate in such a study, and the experiment to this day remains unrealized.

It is perhaps just as well that the experiment has not been conducted, for the existence of the rational soul does not require that it be completely independent of neural activity. On the contrary, we have already specified that our rationality has an extrinsic dependence on the sensitive soul, whose activities are manifested neurally. Libet's rash hypothesis of a mental field that transcends disconnected regions of the brain was made necessary only by his own overestimation of the implications of his experiments. We should examine these findings more critically before constructing drastic solutions to illusory problems.

The experiments described thus far do not abolish the reality of conscious free will, but prove that urges to act can be initiated at the unconscious level, by instinct, habit, conditioned response, sensory stimulus, or whatever else is the basis of the "readiness potential." This finding agrees with our subjective experience, where we find that we do not always initiate actions "in a vacuum," but first feel urges to which we give our assent or denial. Using classical terminology, volition acts upon what is proposed by the appetites ("readiness potential"). A distinction between rational volition and sensitive appetition helps resolve some of the paradox. Modern scientists tend to define "decision-making" in terms of determining an outcome, but outcomes can be determined in different modalities: freely or unfreely, consciously or unconsciously, rationally or irrationally. There is no evidence that the readiness potential's modality of determining outcomes is rational, that is, acting posterior to understanding. Indeed, the nature of the experiments precludes such a possibility, since no rational basis is given for the subject to act at one time rather than another. It would seem we are testing appetition rather than volition.

In a true test of rational free will, we could easily circumvent the supposedly small window of time we have to respond to our unconscious mind's decision. I could decide to count to twenty (or any arbitrary number) before pressing the button, so the decision would be made long before the RP manifests itself in the final half-second. Alternatively, I could decide to count to some number after feeling the urge, artificially lengthening the delay between RP and action. More critically, if I am asked to solve a complex problem or to express my understanding of a topic, the whole notion of a readiness potential would be inapplicable, as my activity starts from within, and I clearly apprehend every step of my reasoning on the conceptual level. If ratiocination were determined strictly by physiological processes, independently of the logical relations among the terms represented, we could have no guarantee of logical validity for any argument. Intellectual activity would be worthless. It is our freedom to act in accordance with genuine understanding that guarantees our ability to make judgments of logical soundness. The most perfect manifestation of free volition is that which is informed by the rational intellect.

Libet and other modern researchers, by contrast, have assumed the modern liberal concept of "freedom," which is to do as one pleases without external constraint. To act in such a way, however, is to make oneself a slave of sensitive appetites, which are not free. Experiments on arbitrary appetitive action provide important evidence that indeterminacy or randomness is not the same as truly free self-determination. Even frogs are capable of indeterminate appetition, arbitrarily or randomly choosing whether to jump left or right, but they are not free on this account. Sensitive appetition is indeterminate in the sense of not being determined by extrinsic phenomena, yet it is not truly free, since it is determined by the endogenous conditions of the animal mind.

Over a century of psychological research has shown that the unconscious mind is capable of much more sophisticated activity than was thought possible for most of history. The fact that the readiness potential and its auxiliary processes operate unconsciously need not imply that they are "mindless," nor even that they are separate from the self. The unconscious mind, in a way, has as much a claim to be an aspect of "me" as the conscious mind. If the unconscious mind has a share of selfhood, perhaps people should be held responsible for their unconscious actions. This is not so unjust as it seems, since our unconscious habits may be shaped by our conscious behaviors, and we are responsible for failing to override these habits. For example, we are rightly held culpable for actions performed in a state of drunkenness, indeed more so, since we are responsible for developing the habits that led to such a state. Still, if consciousness were utterly impotent with regard to unconscious decisions, it would make no sense to appeal to the conscious mind to hold behavior in check. Thus any claim to selfhood for the unconscious mind only makes sense insofar as it is under the oversight of a conscious mind.

While the philosophical objections against identifying the reactive unconscious with volition are sufficiently substantive, in recent years there has also arisen experimental evidence that the readiness potential and related constructs are measuring something other than ordinary volition. This first became clear, ironically enough, with an fMRI study that revealed a delay much longer than any Libet had ever measured.

In 2008, Chun Siong Soon, Marcel Brass, Hans-Jochen Heinze and John-Dylan Haynes published the startling results of their investigations into conscious decision-making. They found that they could predict the outcomes of decisions with good reliability up to ten seconds before a subject was aware of deciding. Since they used fMRI rather than EEG, they were able to examine other areas of the brain, which exhibited activity even earlier.

The subjects of the study were instructed to press either the right or left button at a moment of their choosing. To measure the time of conscious decision-making, they named the letter that was displayed on screen at the moment they became aware of their decision. As expected, the primary motor cortex and the supplementary motor area (SMA), the source of Libet's RP, showed activity prior to conscious decision-making. Intriguingly, the SMA showed activity that statistically predicted decision outcomes (left or right button pressed) five seconds in advance. Use of fMRI permitted examination of the frontopolar cortex and precuneus, which exhibited activity seven seconds in advance of conscious decision awareness. Taking the fMRI delay into account, this meant the activity actually occurred ten seconds prior to conscious decision-making. The unconscious activity predicted decision outcomes with an accuracy of 71%.
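To give a rough quantitative sense of what a 71% prediction rate means against the 50% chance level of a two-button choice, the following sketch computes an exact binomial tail probability. The trial count of 100 is a hypothetical stand-in for illustration only, not a figure taken from the study.

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of predicting at
    least k of n two-way outcomes correctly by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical: 100 two-button trials, 71 of them predicted correctly.
n, correct = 100, 71
print(f"chance of >= {correct}/{n} by guessing: {binom_tail(n, correct):.2e}")
```

Even on this modest hypothetical sample, 71% accuracy is far beyond what guessing would plausibly produce, which is why the result demands interpretation, whatever that interpretation turns out to be.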

Unlike the Libet experiments, this study had no constraints whatsoever on subject timing. The subjects were free to tarry as long as they liked before pressing a button. This allowed more time for buildup of attention or whatever else is the basis of readiness potential, and may account for the longer lead times observed.

The sheer length of the lead time forces us to re-examine conventional interpretations of the Libet experiments. While it might seem plausible to some that real decision-making occurs a half second before consciousness, it is not even remotely believable that we can ordinarily function with a ten-second delay between actual decision-making and conscious awareness of our decision. We should critically examine the belief that the readiness potential signifies an act of decision. After all, it only predicts specific outcomes a fraction of the time; it is only fully reliable for predicting that an act will occur (unless it is repressed at the last instant), not what the act will be. It seems that those who characterized the readiness potential as a sharpening of attention (in the broad modern sense of focusing on certain sensations or motor processes, not necessarily rational consciousness) were close to the mark. RP seems to be highly reliable at determining when a decision will occur, but is only moderately reliable at predicting the outcome.

We have reason to expect that RP will only precede certain kinds of decisions. After all, it takes place in the motor region, so we should expect it to be linked to preparation of motor activity. In order for an action to immediately follow a conscious decision, unconscious processes must prepare the signals necessary to execute the motor action, so that everything is ready for the act of volition to "give the order," either an approval or a veto. We should note that in all cases, the motor action itself always comes after conscious volition, which suggests that volition does indeed play a real causal role, much as we experience.

The fact that RP and other unconscious phenomena precede a moment of conscious decision by a consistent time interval suggests that our unconscious mind plays a significant role in determining the timing of our decisions. This is not so strange as it may seem, since, after all, our sense of time duration is a consequence of our sensibility, not our rationality. The rational realm of pure concepts is timeless, but it is because we are bodily creatures immersed in time that we must consider concepts sequentially, and a certain amount of time must pass in order for us to manipulate the symbols we use to represent concepts. Since time pertains to the form of our sensitive intuition, it is only appropriate that the sensitive faculties should determine the timing of our actions.

This does not mean the rational consciousness can play no role in timing. On the contrary, it may give instructions to the sensitive faculties to delay a certain amount of time, such as by counting off seconds or waiting for some external signal. The experiments discussed so far, however, all share the design feature that there should be no rational basis for timing, but only random delay. With rationality out of the picture, it is perhaps unsurprising that the delaying action should be completely relegated to the lower faculties.

Notwithstanding the criticisms noted above, the 2008 study by Soon et al. was regarded by some of the authors and many others to constitute proof of strong determinism. Of course, if we are to take them seriously, they had no choice but to believe this, so we should not fault them for their philosophical incoherence any more than we should blame a dog for barking. Their utterances carry about as much weight as those of a parrot. If it should be objected that I am merely jesting or being rhetorical, I should remind the reader that I am considering the actual beliefs of psychological determinists at face value. This is an argumentum ad hominem in the classical, and logically valid, sense: arguing based on assumptions that I do not necessarily hold, but the person I am arguing against holds.

If it is said that most psychological determinists do not believe that they are irrational parrots, so much the better. Perhaps the most potent argument against the denial of free will is that even the advocates of this hypothesis do not truly believe it. If conscious volition is nothing but an innocent spectator, helplessly ratifying what was decided unconsciously, it makes no more sense to punish a human being for committing crimes than to punish a dog for barking or a fish for swimming. Much less should haughty intellectuals disparage others for being "intolerant" or "fanatical" or committing any other arbitrarily defined sin against the spirit of the age. If a person truly cannot help what he does or what he is, it makes no sense to condemn him for it. Of course, materialist intellectuals are just as partisan as anyone else, and just as unforgiving of crime. Most tellingly, they expect their own thoughts and actions to be taken seriously, as if free will were a fiction only for people other than themselves.

In any event, the entire empirical edifice upon which the denial of free will is precariously situated has been nearly brought to ruin by the recent experiments conducted by Judy Trevena and Jeff Miller of New Zealand (2009). The results of these experiments proved, among other things, that motor preparations such as those observed by Libet and others did not determine the choice of whether or not to move. In order to fairly assess the implications of their findings, we should closely examine the method and results of the Trevena and Miller study.

The experimental setup was similar to that of Libet's famous investigations, but with some important differences. Subjects watched a circling dot on a clock face to reproduce the conditions of the Libet study, but they did not choose the time of action. Instead, a randomly timed tone was played: in one experimental condition, subjects were told to strike a key whenever the tone sounded, while those in another condition were told to do so about half the time, as they saw fit. This removed the time-delay aspect of decision-making, leaving only a pure decision of whether or not to act. The study also examined what previous researchers called the "lateralized readiness potential" (LRP), a neurological phenomenon preceding decisions to use the left or right hand. The study was divided into two experiments: in the first, subjects in both conditions were told which hand to use on each trial, while in the second they were free to choose a hand each time they pressed a key.

The exact instructions to subjects in Experiment 1, in the second condition (push a key only half the time), were as follows:

At the start of each trial you will see an L or R, indicating the hand to be used on that trial. However, you should only make a key-press about half the time. Please try not to decide in advance what you will do, but when you hear the tone either tap the key with the required hand as quickly as possible, or make no movement at all.

The subjects in this condition actually did a fairly good job of pushing a button about half the time, though they were notified if they deviated from 50% by a wide margin. There was no evidence that subjects in either condition consciously made their decisions in advance, so the degree of compliance was high.

EEG detected a readiness potential before the tone sounded in all subjects. Those in the first condition made no decisions at all, since they were told when to push a button and which button to push, yet they still had RP before the tone. The magnitude of the RP negativity measured by EEG was the same whether the subject was in the "always move" or "sometimes move" group, and it was the same whether a subject in the latter group pushed a key or not. In short, the same RP was present before a tone, regardless of whether a decision was about to be made, and regardless of the content of that decision. This result indicates that RP is not a decision-making phenomenon. It might instead be a sharpening of awareness or general motor preparation, since the subject obviously had no way of knowing in advance exactly when the tone would sound.

The researchers also measured subject reactions with an electromyogram (EMG), to detect vertical eye movements and arm movements (with electrodes between the wrist and elbow). In Experiment 1, the EMG was negligible before the tone in all conditions, as well as after the tone when there was no movement. EMG was observed only after the tone with movement, and it was of greater magnitude in "always move" subjects than in "sometimes move" subjects who made a movement.

The lateralized readiness potential (LRP), which has a different electrical signature and arises in a different brain region than ordinary RP, began only about 100 milliseconds after the tone. There was never any LRP before the tone, confirming that subjects were compliant and did not make their decisions in advance. The LRP was measurably different between trials with and without movements. This experiment could not assess the role of LRP in choosing between the left and right hands, since the subjects had no choice in the matter.

In Experiment 2, Trevena and Miller explored whether LRP predicts which hand a person will choose, as previous researchers seemed to have found. Subjects in this experiment were given the following instructions:

When you hear the tone, please quickly tap with whichever hand you feel like moving. Please try not to decide in advance which hand you will use, just wait for the tone and then decide.

Again subjects were divided into two conditions, with one group always making a hand movement after each tone, and the other group doing so only about half the time, at their discretion.

The subjects in Experiment 2 also showed a high degree of compliance, and a good ability to balance their decisions evenly.

In the sometimes-move trials with a tone, participants used their left hand in 33% of trials, their right hand in 33% of trials, and made no movement in 34% of trials. In the always-move trials, 49% of movements were made with the left hand. Analyzing the frequency of consecutive decisions to move again suggested that participants were able to make these decisions more or less at random.

I would qualify the last observation, for "at random" here simply means that outcomes are erratically distributed, following no discernible pattern. It is beyond the scope of this type of experiment to determine if human decisions are truly random (absolutely indeterminate), and not just pseudo-random. Random and free are not the same thing; an electron, for example, appears to exhibit "random" behavior, but this is not the same as being a free, self-determining subject. For the purposes of this study, it suffices to show that decision outcomes do not follow a numerical pattern, so that each decision is more or less statistically independent of the others and each trial can be treated as an independent data point.
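The sense in which such outcomes are "erratically distributed" can be made concrete with a toy computation. The sketch below is an illustration only, not the study's actual analysis, and the trial sequence is simulated: it counts how often each outcome follows each other outcome, and checks that no transition is dramatically over-represented.

```python
import random
from collections import Counter

# Simulated stand-in for a subject's trial outcomes: left hand ('L'),
# right hand ('R'), or no movement ('-'), drawn uniformly at random.
random.seed(0)
trials = [random.choice("LR-") for _ in range(3000)]

# Count consecutive pairs. Under statistical independence, each of the
# nine possible transitions should occur about equally often.
pairs = Counter(zip(trials, trials[1:]))
expected = (len(trials) - 1) / 9
chi_sq = sum((n - expected) ** 2 / expected for n in pairs.values())

# A chi-square value near 8 (the degrees of freedom) is consistent with
# "no discernible pattern"; a large value would betray sequential dependence.
print(f"chi-square against independence: {chi_sq:.1f} (8 degrees of freedom)")
```

Such a test, of course, can only fail to find a pattern; it can never establish absolute indeterminacy, which is precisely the distinction drawn above between the statistically random and the truly free.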

An interesting general finding was that reaction times were on average 52 milliseconds slower in Experiment 2 than in Experiment 1, perhaps reflecting the time needed to decide which hand to use. In Experiment 1, the subject received a visual cue determining which hand to use, so less decision-making was involved there.

EEG detected readiness potential negativity more than a full second before each tone, independent of whether the subject decided to move. EMG detected motor activity only after the tone, and only in trials where a movement was made, about 150 milliseconds later. The EMG reading was the same for subjects moving in the "sometimes move" and "always move" conditions. These results suggest that RP is independent of decision outcomes, being instead a sharpening of attention or motor preparation, and that movement was not initiated until after conscious decision-making.

The lateralized readiness potential could not be defined for trials without movements, since the subjects did not choose which hand to use unless they had decided to make a movement. LRP was always absent before the tone, and present after the tone in trials with movement. The magnitude of the LRP was the same in the "always move" and "sometimes move" conditions. The LRP started slightly later than in Experiment 1, consistent with the longer reaction times observed.

According to Trevena and Miller, the absence of LRP before the tone implies that conscious decisions about which hand to use involve more than just going along with the brain's unconscious preparation. However, it should be noted that LRP in fact remains a good predictor of lateral decision outcomes. The lack of LRP before the tone only proves that lateral decision processes, be they conscious or unconscious, were not made until after a tone informed the compliant subject that a lateral decision now ought to be made. Granted, the prompt action of the subjects, a mere 50 milliseconds slower than in Experiment 1, leaves practically no time for unconscious processes to precede conscious decision-making.

Taken altogether, the Trevena and Miller experiments seem to completely overturn Libet's findings. Once the timing variable was eliminated by use of an audible tone, they found no unconscious determination of whether to respond or which hand to use. Why, then, we may ask, had previous researchers found substantial delays between readiness potentials and conscious decision-making?

In the case of ordinary RP, Libet may have found a mere sharpening of attention. This would account for why subjects in his later experiments were able to override the urge to move, dissipating the RP. Libet's readiness potential did not causally determine outcomes or even whether a decision would be made. At best, it was an unconscious random or pseudo-random delaying mechanism for determining when to feel an urge to move. Yet it was only conscious decision-making that determined whether this urge was actualized or rejected. In Trevena and Miller's experiments, RP existed before the tone, even though the subject was given no choice in the timing of a decision. This suggests that RP is some sort of anticipatory motor preparation, yet one that can be overridden.

The findings regarding LRP are perhaps more surprising, as Trevena and Miller did not detect anything until after the tone, even in Experiment 1, when the hand to use was already known in advance. Thus LRP, unlike RP, is not a sharpening of attention or general motor preparation, but is closely tied to the immediate preparation and actual execution of a motor response. This would account for why it is such a good predictor of lateral outcomes. However, when one takes away the random timing delay, it predicts outcomes only trivially, since it is simultaneous with or posterior to conscious decision-making.

Intriguingly, in Experiment 1 subjects took 33 milliseconds longer to respond in the "sometimes move" condition than in the "always move" condition, suggesting that this was the amount of time it took to consciously decide whether or not to move. This interpretation is strikingly consistent with the finding that Experiment 2 responses were 34 milliseconds slower than Experiment 1, suggesting that this was the amount of time needed to decide which hand to move. All in all, we have some remarkable corroboration that the subjects, as instructed, made their decisions consciously, voluntarily and spontaneously, in a thirtieth of a second.

Before we dismiss any notion that unconscious processes determine outcomes, it should be remembered that Soon et al. were able to anticipate conscious outcomes by several seconds with 71% accuracy. This is not enough to show that decisions are made unconsciously; rather, it suggests that unconscious processes put forward suggestions that may influence outcomes. This is nothing other than what we call appetite, and psychologists have been aware of the powerful influence of unconscious desires since Freud. However, these desires are of a different character than conscious volition, which may approve or override them at its own discretion. The will is certainly not a mere rubber stamp on what the unconscious has decided.

We should not, perhaps, impose too sharp a dichotomy between conscious and unconscious. The unconscious processes observed by Soon et al. may be seen as part of the material structure involved in shaping conscious decisions. There is nothing too remarkable in such a supposition; after all, we must be informed in order to make a decision, and we rely on unconscious processes to shape the thought-images that inform our decisions. None of this, however, is to deny the sovereignty of the conscious will, which is clearly demonstrated in the Trevena and Miller experiments.

It is still conceivable that Libet's experiments really did detect unconscious decision-making, and somehow the introduction of an imperative tone annihilated this process or disconnected it from conscious decision-making. On such a supposition, however, it would have to be admitted that Libet's results only apply to limited classes of decisions, and that there are truly spontaneous conscious decisions such as those observed by Trevena and Miller. The 2009 study definitively showed no correlation between pre-tone RP and decision outcomes (nor even whether a decision was to be made), and no pre-tone LRP at all. It is still plausible that the irrational "decision" of how long to delay a movement is made unconsciously, but the decisions of which we are consciously aware are indeed what they seem to be.

Another possible objection to Trevena and Miller's interpretation of their findings is that the use of a tone broke the link between response preparation and EEG negativity, so the negativity they measured was not truly RP. Previous investigations into the readiness potential had found that RP is present in experiments with temporally spontaneous voluntary movements, but when there is an "explicit imperative stimulus" telling the subject when to move, the anticipatory EEG negativity is called "contingent negative variation" (CNV), believed to be distinct from response preparation. Whereas RP reflects preparations for a motor response, CNV simply anticipates the stimulus, in this case the tone.

Trevena and Miller responded to this objection by citing evidence that CNV, like RP, is in fact sensitive to response preparations.

For example, CNV amplitude increases with the number of key presses that will be required for an upcoming response (Schröter & Leuthold, 2008) and with the degree to which response characteristics (force, direction, etc.) have been specified in advance (e.g., [MacKay and Bonnet, 1990] and [Ulrich et al., 1998]). It also increases if the response is to be executed immediately rather than after a 1-s delay (Krijns, Gaillard, Van Heck, & Brunia, 1994) and if participants must actually execute the response rather than merely imagining its execution (Bonnet, Chiambretto, Decety, & Vidal, 1998). The dependence of CNV amplitude on response characteristics clearly shows that it is not driven entirely by stimulus anticipation. Finally, and perhaps most relevant in the present context, CNV amplitude is much larger preceding a stimulus that will require a motor response than preceding an equally informative stimulus that requires no overt response (e.g., Van Boxtel & Brunia, 1994). In the present experiment, then, CNV should clearly have been larger if the brain were subconsciously preparing to respond rather than to withhold the response.

CNV, like RP, does involve response preparation, so the lack of change in negativity across conditions in the Trevena and Miller experiments shows that these preparations are not correlated to conscious decision outcomes. The will truly does decide whether to respond or to withhold the response. Trevena and Miller cite Brunia (2003) as arguing that "the CNV is also a movement-preceding negativity (MPN), just as the Readiness Potential (RP). The RP reflects processes involved in the preparation of voluntary movements, and the CNV reflects processes involved in the preparation of signalled movements."

Brunia's nomenclature of "voluntary" and "signalled" movements, like much modern psychological terminology, is philosophically misleading. "Signalled" movements can be properly voluntary, as is the case in the Trevena and Miller experiments, or indeed any ordinary action where we choose to respond to an agreed upon signal. The "voluntary" movements anticipated by RP might more properly be called "temporally spontaneous" or "unsignalled." There may be other kinds of signals besides time indicators that can specify the parameters of a decision, by which we might distinguish other kinds of anticipatory potential. It is possible that there is no strong natural distinction among RP, CNV, and other kinds of anticipatory potential, but rather we classify these different manifestations of the same thing according to the type of behaviors they anticipate.

Despite their finding that conscious free will is in fact a real causal agent, Trevena and Miller were careful to pay proper homage to materialist orthodoxy. They declared their belief that conscious decisions were products of neural activity, but denied that this activity was Libet's readiness potential or the lateralized readiness potential, or indeed any other unconscious process. Still, they retained a materialist's confidence that conscious free will was a product of some other neural process, yet to be understood.

Neural processes operate on a scale much too large for quantum indeterminacy to play a role, so they are strictly deterministic. How then, can a truly free will be the product of a thoroughly deterministic, mechanistic process, however complex? No amount of determinism will yield true indeterminacy. A complex deterministic system may yield pseudo-random distribution of outcomes, but this is no substitute for a self-determining autonomous agent, such as we experience ourselves to be. A deterministic consciousness is no less superfluous than a consciousness that is enslaved to unconscious processes. Trevena and Miller, like other materialists who claim to believe in free will, seem to want to have things both ways.

Even if neural activity could be the basis of rational consciousness, this would not be the same thing as saying neural activity is rational consciousness, or that rational consciousness is nothing more than neural activity. We would still be faced with the unavoidable fact that rational consciousness operates in the realm of ideas abstracted from material determinations. To suggest that ideas themselves actually are material determinations is an absurdity of the highest order; it is a statement that can be said, but it is impossible to think it while understanding the terms. If, per impossibile, such a statement were valid, the entire edifice of logic would crumble, and with it, the mathematical basis of the physical sciences that might claim to support it. Again, it is a thoroughly self-stultifying position, as if its a priori absurdity were not enough.

While I should like to think that few scientists are foolish enough to equate abstract ideas with their representations, there are surprisingly many who will, on one level or another, speak of thoughts and their material representations as though they were one and the same thing, perhaps on the supposition that thoughts contain no abstract ideas. For this reason, I will briefly elaborate on the distinction between thoughts and their representations.

Thoughts and Their Representations

Suppose that a neural signal, or set of signals, really was a thought, and not just a representation of a thought. Let us say, to give a crude example, that two pulses of certain magnitude and timing from Neuron A to Neuron B mean "cat". The pulses themselves cannot be the abstract idea of "cat", nor even the thought of a determinate cat, for there is no intrinsic necessity that such a thought should be expressed as two pulses. It could just as easily have been three pulses, for example. Once we admit that there is more than one possible "code" for neural signals, we effectively admit a distinction between the signals and the thoughts they represent. Consequently, the electrical signals are not an idea or even a thought or phantasm; they can only represent a thought-object, just as patterns of ink on paper are not ideas or words or thoughts, but representations of these. For example, the notion of a cat can be represented by the written words 'cat', 'chat', 'gato', or 'gatto', showing that the idea of a cat is not confined to any single linguistic representation. Similarly, the ideas and sense-images of which we think are not confined to any particular neural signal representation.
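The same point has a familiar computing analogue. In the sketch below, offered as an illustration only and not as a model of neural signalling, one and the same word is serialized into three entirely different byte patterns; none of the patterns is the word, for each represents it only under a convention.

```python
word = "cat"

# Three conventions ("codes") for serializing the very same string.
codecs = ["utf-8", "utf-16-le", "utf-32-le"]
patterns = {c: word.encode(c) for c in codecs}

for c, raw in patterns.items():
    print(f"{c:10s} -> {raw.hex()}")  # three entirely different byte patterns

# Yet each pattern decodes to the identical word: the content is fixed by
# the convention pairing pattern with meaning, not by the pattern itself.
assert len(set(patterns.values())) == len(codecs)
assert all(raw.decode(c) == word for c, raw in patterns.items())
```

Just as no one would say that the bytes 63 61 74 are the word 'cat' rather than one encoding of it, so the argument above denies that any particular pulse pattern is the thought it conveys.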

The only way to circumvent this argument would be to contend that there is only one possible correspondence between a set of electrical pulses and a given thought. Even then, we would be faced with the perplexity that the thought has fundamentally different qualities from those possessed by electrical pulses. The qualia of color and sound that we find in thought are not properties of electrical pulses, unless we wish to revert to the medieval belief that these qualia truly do inhere in their objects, not just in the mind, and that somehow they literally enter the mind through our sensory apparatus. If it is argued that these qualia are large-scale properties that "emerge" from many pulses, this is nonetheless effectively admitting a distinction between the qualia and their material basis, which is all we wish to assert at this point.

At any rate, the notion that there is only one set of signals for a given thought is almost certainly false. Different species of animals use similar structures for very different operations, and individuals with brain damage are able to recruit other parts of the brain for nonstandard purposes in order to recover function. Further, the arbitrary wiring of dendritic spines strongly suggests that individual synaptic connections are seldom indispensable, so different signal patterns may achieve the same result. Moreover, there is so much continuous variation in pulse size and shape that pulses will almost certainly have the same informational effect as long as they fall within a certain tolerance. These facts all suggest that pulses are codes representing something, not thoughts themselves.

Going further, we may note that there is no strong physical necessity requiring a mind to use electrical signals to convey information instead of some other mechanism, such as variable pressures in a vascular system. Human technological analogs prove that many possible types of hardware are possible, using various conventions defining the informational content of signals. Since the same thought can be equivalently represented by various media, there is little reason to single out electrical signals as being thought, as opposed to representing thought.

It is not only rational or conceptual thoughts that transcend their representations. Even mere sensory images or phantasms cannot be reduced solely to the electrical signals of the brain. I see the actual image of what I imagine in my mind's eye, even when my eyes are closed and there are no images on my retinas. The images in my mind have no physical presence anywhere in my brain tissue. At most we can find electrical signals that represent these images, but we won't find the images themselves under an electron microscope. Yet I see the images, so they exist somewhere, but not in the three-dimensional space of my brain. In the brain there can only be found signals representing these images.

Some scientists confuse causality with ontology, and think that the fact they can induce certain thought-images by electrical stimulation proves that the thoughts are the electrical signals. By this standard, the fact that my written words can induce people to think certain thoughts proves that the alphabetic characters are the thoughts they represent. This is an absurdity exposed at length by Plato in the Cratylus dialogue.

At any rate, our ability to observe or induce thought via electrical signals remains quite crude. For example, researchers will ask a subject to think of playing tennis, and they will "confirm" this by seeing activation in the cortex used for spatial reasoning. They cannot actually tell he is thinking about tennis specifically; such is the primitive state of this science. Similarly, "induced" thoughts usually involve some crude sensory stimulus or hallucinogenic effect, but the subject remains free to interpret these images as he wishes. Recently, experiments have been conducted where subjects learn to generate crude output by focusing on certain characters or images, and the resulting brain activation is detected by EEG and translated into an output such as typing a message on social network software. Again, this does not demonstrate an identity between thoughts and electrical signals, but only a causal link.

Still, we may ask, if thought is more than electrical signals, why is it always associated with electrical signals, even when a person is just thinking to himself? We have already noted, when discussing psychological faculties on their own terms, that the rational intellect relies on the perceptibles generated by the sensitive faculties in order to represent ideas. This accounts for why there can be no rational activity without activity of the sensitive soul. As for the sensitive faculties themselves, they have even more manifest need of a corporeal medium, since they produce images of the corporeal world from the data of the sense organs. Animal subjectivity, though it is often not rational, is nonetheless integrative, and we have every reason to believe that many non-rational animals experience their consciousness as a unity. Yet the data of the senses is diverse: for example, a split-second image captured by the retina contains a wealth of data. Some apparatus is needed to integrate this data into a single image, and then to perform instantaneous object recognition, and then to allow focusing on one set of objects rather than another, so that the subjective consciousness can deal with the data with relative ease. All of these preliminary processes are integrative, so a signal processing system would be an eminently useful mode of realizing the sensitive faculties. There is no a priori reason why the sensitive faculties must be mediated solely by electrical signals (or some other material representation), but this method is certainly convenient and certainly competent to the task.

An identity between thought and neural signals is problematic even for lower animals, but the paradox is especially pronounced for human beings, who can think in a true language. That is, we can arbitrarily assign abstract concepts to our perceptible thought-objects. This adds another order of representation; electrical signals represent perceptibles, and the perceptibles represent concepts.

Most scientists in the fields of biology and psychology, being methodologically materialist, naively assume that human language must somehow be reducible to the perceptibles in which other animals are conversant, or at the very least that its origin is to be explained in materialistic terms. The origin of human language remains an enigma from an evolutionary perspective, though there is no shortage of post hoc rationalizations, a staple of behavioral evolutionary theorizing.

Evolutionary theory explains physiological traits in terms of natural selection of random variations. Natural selection is basically a utilitarian mechanism, where traits that enhance an organism's survival abilities are more likely to be propagated. It is no accident that Darwin's theory was influenced by classical economics, which models societies in terms of individuals acting in their own self-interest. However, many social behaviors of animals seem to defy this utilitarian calculus, as individuals act in various cooperative, even self-sacrificing ways. It would seem that something more than the desire for individual survival must account for such behaviors. (We cannot expect any help from the other proposed agent of evolution, so-called genetic drift, as this phenomenon cannot explain adaptations, being independent of biological utility.)

In 1896, the psychologist James Mark Baldwin proposed a mechanism that was both consistent with Darwinian natural selection and capable of accounting for various innate behaviors. An animal could learn a useful skill on its own or from other animals. Over time, as many individuals learned this behavior, this would create an environment where biological traits enhancing one's ability to learn this behavior would be favored. Eventually, in some cases, these traits would develop into an innate ability to perform the behavior that was once acquired only by learning. This would mean that behavioral choices made by humans and other animals could dramatically influence the course of evolution, albeit in a non-Lamarckian way.

There are important limitations to the applicability of the so-called "Baldwin effect." The environment has to be sufficiently stable so that the behavior in question remains useful for an extended period of time. Natural selection is a slow process, so much time is needed for variations favorable to a behavior to arise and to propagate. If changes in the behavior, due to environmental or social changes, occur more quickly than natural selection can effect genetic changes favoring the original behavior, the selective pressures will constantly shift in different directions, and there is no way natural selection of physiological traits can keep pace with behavioral changes in order to make them become innate. Furthermore, if the environmental attributes favoring a behavior are not stable and long lasting, it would not be advantageous to make this behavior become innate.

Also, the Baldwin effect can only occur as a Darwinian process if making a behavior innate gives an animal a definite utilitarian advantage over those who must learn the behavior. When we compare innate behavior to the cost of learning by individual trial and error, the advantage seems self-evident. However, with social behaviors, there is much less "cost" to learning, since animals are raised among others of their kind, and tend to learn quickly in their infancy. In fact, some behaviors, such as communication, cannot be performed except in a social context, so it is hard to see much additional cost to learning such processes socially rather than innately.

Still, the Baldwin effect might be seen in social behaviors, insofar as selective pressures will favor genes that make an organism better at socially acquiring or developing that behavior. It is doubtful that such a process could eliminate the necessity of social learning entirely, especially in cases, such as communication, that are intrinsically social in their execution.

Attempts to explain the origin of human speech as a social adaptation run into several serious problems. First, human language is far more powerful than anything required by the biological need to survive. As Noam Chomsky has observed, language has the potential for an infinite number of statements, and even this observation does not do justice to the richness of language. It hardly seems credible, as some evolutionary theorists have suggested, that such an all-powerful tool would be needed in order to adapt to successive rapid changes in climatic conditions.

Much of our speech learning ability, though not the content of a determinate vocabulary or grammar, appears to be innate. Yet the Baldwin effect is not likely to help us much here, since language evolution rapidly outpaces biological evolution, as is proved by the history of language of the last few millennia, even of the last thousand years alone. There is no way that natural selection could select traits favoring the acquisition of even the simplest terms and grammatical rules, since these change so rapidly in human history.

It is perhaps conceivable that speech arose among humans as a social adaptation. However, there are many other highly social animals, and none of them, not even chimpanzees, have a conceptual language. It is possible to sustain fairly sophisticated animal societies without such an innovation, and it is difficult to envision a biological necessity peculiar to humans that would require social groups to communicate with conceptual language.

Possibly, we are not able to see a biological circumstance requiring language because we are confining ourselves to an individualist utilitarian model of evolution. Since the time of Darwin, evolutionary theory has often been interpreted in terms of classical liberal economics projected onto biology, with each organism acting to maximize its own self-interest. In recent decades, Richard Dawkins famously posited the "selfish gene" as the evolutionary unit. In this view, organisms that propagate their own genes are more likely to be replicated, so it matters not so much whether the organism itself fares well or even if it reproduces, so long as its genes are somehow propagated. Thus, we would model evolution more accurately if we considered things primarily with respect to the "self-interest" (likelihood of propagation) of each gene.

Applying the selfish gene model to social insects, it would seem that individuals who raise the offspring of their brethren are acting in terms of genetic self-interest. After all, one's brothers or sisters are likely to have similar genes to oneself, so by raising their offspring, an individual is effectively propagating its own genes.

However, Edward O. Wilson, the most eminent authority on social insects, and David Sloan Wilson have recently found that the selfish gene model is the wrong explanation for such social behavior. On the contrary, experiments have shown not only that this altruistic behavior is not consequent to genetic relatedness, but that the genetic relatedness is in fact a consequence of the behavior! Therefore this "eusociality" cannot be explained in terms of an impetus to maximize the propagation of one's genes. Naturally, it cannot be explained in terms of the organism's reproductive utility, either, since the individual never gets to propagate.

An alternative explanation for altruistic social behavior is group selection, where a group of organisms is considered as an evolutionary unit. Those groups that are well integrated and organized, using social behaviors including altruism, may act as a single organic unit, effectively competing with other groups as units. Those groups that are better organized and more efficient will outcompete less efficient groups or lone individuals, and in the long run they will dominate the environment. Individual altruistic behavior maximizes the success of the group, and thus of the individual, though the latter is helped only incidentally. Indeed, in some societies, many individuals must sacrifice themselves completely for the sake of the group, simply because that mode of society is more successful at seizing resources than other groups.

Group selection might be applicable to speech, since speech is an eminently unselfish behavior that involves sharing what you know with others. It would make little sense as an individual strategy for survival, but is a powerful aid to group survival. However, we still run into the problem that speech is much more powerful than biological exigencies alone would require. Many social animals have competition between groups, yet the groups are generally able to coexist, and none of these have ever developed a need for conceptual speech. Nonetheless, social animals are characterized by various sophisticated communication systems, and the various primates certainly share such traits. Perhaps at some time in the remote past, there was an escalating "arms race" of communication ability between hominid groups, somehow resulting in the breakthrough of truly conceptual speech, and this capability enabled homo sapiens to triumph over all his competitors. This might explain the mysterious demise of all other anthropoids that might have had anything resembling conceptual language. Naturally, this is just more evolutionary or phylogenetic speculation, and it does not deal with the hard ontological questions of reducing conceptual language to sensitive faculties, or meanings to their psychological and neurological representations.

As I discussed in Logic and Language, human language is distinguished from other forms of animal communication in part because of its ability to apply arbitrary names to things, so that the name becomes a symbol or surrogate for the thing. Animals might also communicate with "names" of a sort, but these are effectively just indices; i.e., words that point to the thing. For example, I may tell a dog, "Sit!" and the dog recognizes that sound as being associated with a gesture it ought to perform. However, the dog does not recognize the term 'sit' as representing a concept; if it could, it would be able to manipulate it in logical relationships. Modern thinkers often conceive "logic" as a manipulation of symbols, but this is only a syntactic calculus, not real logic. True logic involves the perception of necessary relationships among concepts. The rules for the manipulation of symbols in a logical calculus (e.g., mathematical logic) are constrained by the conceptual relationships this calculus is supposed to represent. Symbolic calculus is subordinate to conceptual logic, though materialists often invert this priority. The same is true of the relationship between the grammar of human language and conceptual logic.
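The distinction between a syntactic calculus and real logic can be made concrete. The following toy sketch (the function name and string format are hypothetical) applies modus ponens as pure pattern matching on strings: the program derives a "conclusion" without any grasp of the concepts involved, and the rule is valid only because we wrote it to mirror a conceptual relationship we already understood.

```python
def modus_ponens(premise1, premise2):
    """If premise2 has the form 'if <premise1> then <Q>', return '<Q>'.
    Pure string manipulation: no concepts are perceived here."""
    prefix = "if " + premise1 + " then "
    if premise2.startswith(prefix):
        return premise2[len(prefix):]
    return None

# The machine "infers" mortality with no notion of men or mortality.
conclusion = modus_ponens("man(socrates)",
                          "if man(socrates) then mortal(socrates)")
assert conclusion == "mortal(socrates)"

# Mismatched premises yield nothing, again by mere pattern failure.
assert modus_ponens("man(plato)",
                    "if man(socrates) then mortal(socrates)") is None
```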

Grammatical rules are defined in part by our perceived logical relationships among concepts. The subject-predicate relationship is defined by our perception of substance and accident, for example. In Logic and Language, we more fully explore the various types of possible grammatical forms and their a priori ontological underpinnings. Given that grammar is informed by our ability to intuit logical relationships among concepts, it is perhaps unsurprising that all human languages exhibit a basic commonality of grammatical principles. Some linguists, most notably Chomsky, have proposed a "universal grammar" underlying all language, and further argue that this grammar is learned innately or hard-wired into our brains. This theory, however, falls into the error of presuming that grammatical rules, in this case innate, are more fundamental than conceptual logic. Indeed, some would suggest that our logical categories are just accidental artifacts of our neurological architecture. If this were truly the case, we should have no confidence whatsoever in human reasoning.

If, on the other hand, we really do have the ability to apprehend true logical relationships in the realm of ideas, is it not superfluous to invoke a "universal grammar" as a fundamental cause of language? On the contrary, it would seem that these common grammatical principles are artificially created, like all other aspects of language, in order to match our intellectual intuitions. Although the notion of universal grammar is frequently advanced in a materialist context, it fails even on materialist terms, as there is no credible evolutionary (that is, utilitarian) explanation that could account for it.

While any creature capable of logic should also be capable of a corresponding grammar, it is at least hypothetically conceivable for a truly conceptual language to consist only of words, but no grammatical syntax. In such a language, each word would call to mind a particular concept, but there would be no syntactic means of representing the relationships among concepts. This would be a highly impoverished language, yet still consistent with a creature capable of intellection. Here, the language would not permit logical judgments, but would deal strictly with a priori ontological concepts.

We have made a distinction between processing symbols and interpreting them as representing concepts. Materialists would contend that interpretation itself is nothing more than higher order signal processing. This leads us into another class of errors, exemplified by the delightfully misnamed field of artificial intelligence.

Pseudo-Intelligence in Computer Science

Computer science is often portrayed as a science of information, which is fair enough, but there is a persistent confusion between the signals manipulated by computers and the interpretations that human beings impose on them. Electrical signals, as such, are not information. It is only their use that makes them so. A signal can represent information in an improper or proper sense. In an improper sense, the sense widely used in computer science, a change in the signal state can result in a change in some outcome. If I push the "up" button for the elevator, the electronics will behave differently than if I had pushed the "down" button or no button at all. Thus one might improperly say that the state of the "up" button (lit or unlit) contains a bit of information. All this really means is that the physical state of one component affects the physical state of another component; we could easily dispense with the metaphor of information.

Information in the proper sense is that which is intelligible to an intellect; it literally informs the mind, as the intelligible is the form of the intellect, or the known is the form of the knowing faculty. Electronic signals can represent information only when the resulting physical state (e.g., characters or images on a screen) represents an intelligible concept or judgment to an intelligent being. This requires an act of interpretation on the part of the intelligent being, just as when dealing with any other kind of symbols. The introduction of electronics (or any equivalent mechanism) does nothing to explain information on a conceptual interpretive level; the machine only manipulates representations.

Computer scientists often speak of electronic states or "bits" as units of information. Bits are not information in the proper sense; they are merely electrical states that may determine changes in the state of some other component, and in some cases, result in outputs that we choose to interpret as representing information. Intrinsically, however, bits are just electrical states (or pairs of possible electrical states), while their informative content depends on interpretations that we impose on the machine.

To better appreciate the distinction between an electronic bit and conceptual information, let us consider what is arguably the most basic component of a computer, the one-bit adder. I contend that the one-bit adder, considered purely as a physical entity, does not add bits, much less does it know that it is adding.

A one-bit adder is a circuit component that may take a low voltage or a high voltage input from each of two different channels, for a total of four possible input combinations. We can wire the circuit in such a way that if it gets two high voltage inputs, it will give a high voltage output; if it gets two low voltage inputs, it will also give a high voltage output; and if it gets one of each kind of input, it will give a low voltage output. The circuit does what it does as a matter of physical necessity, just like any other simple circuit. It does not add voltages, since the output voltage need not be the sum of the input voltages. The input currents add up to the output current, but this feature is common to all electrical nodes (via Kirchhoff's current law), regardless of circuit setup, and is unrelated to the logical interpretation of the adder.

To regard this circuit as a one-bit adder involves an interpretation that we humans impose on the inputs and outputs. We can interpret a high-voltage input or output to be "zero" and a low voltage input or output to be "one." We further interpret the inputs to be addends and the output to be the sum. Thus we get zero plus zero equals zero, and one plus one equals zero (we could represent two by making a second bit change to "one"), while zero plus one gives one. The circuit is not doing anything differently; we have simply imposed the interpretation of addition upon it, since we know that the outputs will correspond to the inputs in a way that gives the "right" results. However, we could take the exact same circuit and interpret it differently. We could take the low voltage to be zero and the high voltage to be one, or even two. Alternatively, we could interpret the low voltage and high voltage to be one and zero respectively for input, and the reverse for output, and we would get something other than addition. We could even take any of these novel definitions and rewire our circuit so it would again get the right results for a one-bit adder. Thus we could have two completely different circuits representing one-bit adders, since in each case we interpret the bits differently. We choose binary only because it is simplest, but we could do things differently if we chose.
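The circuit just described can be sketched in a few lines (a toy model of the voltages, not real electronics): the same physical behavior computes "addition modulo 2" under one imposed reading and something else entirely under another.

```python
def circuit(in1, in2):
    """The physical behavior: two like inputs -> high output;
    unlike inputs -> low output. The circuit knows nothing of numbers."""
    return "hi" if in1 == in2 else "lo"

# Our first interpretation: high voltage = 0, low voltage = 1.
bit = {"hi": 0, "lo": 1}
volt = {0: "hi", 1: "lo"}

# Under this reading the circuit "adds" bits modulo 2 ...
for a in (0, 1):
    for b in (0, 1):
        assert bit[circuit(volt[a], volt[b])] == (a + b) % 2

# ... but under the opposite reading (high = 1, low = 0), the very same
# circuit computes logical equivalence (XNOR), not addition:
bit2 = {"hi": 1, "lo": 0}
volt2 = {1: "hi", 0: "lo"}
assert bit2[circuit(volt2[1], volt2[1])] == 1   # "1 + 1 = 1" here
```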

These examples show that the one-bit adder is not a type of circuit, but an interpretation that we may impose on many radically diverse types of circuits. The circuit interpreted as a one-bit adder simply does what it must, like any ordinary circuit following the laws of electromagnetism. There is no intelligence or even rudiments of intelligence in anything it does, since it does not deal with information, but only with unintelligent signals that we may interpret as representing information. The computer does not do arithmetic, much less know that it is doing arithmetic or what arithmetic is. The imitation of the output of our intelligent acts of doing arithmetic is made possible not by any rudimentary intelligence of the computer, but by our cleverness in designing circuits that will invariably create outputs relative to inputs that can be interpreted as arithmetic sums. It is a simple matter to create broader arrays of these adders to perform higher arithmetic.
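That closing remark can be illustrated: chaining one-bit adders with carries (the standard ripple-carry construction) yields multi-bit addition, though each stage remains exactly as mindless as the single circuit. A sketch, with hypothetical function names:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(bits_a, bits_b):
    """Add two equal-length little-endian bit lists by chaining
    full adders, passing each stage's carry to the next."""
    carry, out = 0, []
    for a, b in zip(bits_a, bits_b):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 6 + 7 = 13, under the interpretation "little-endian binary":
# 6 = [0,1,1], 7 = [1,1,1]  ->  13 = [1,0,1,1]
assert ripple_add([0, 1, 1], [1, 1, 1]) == [1, 0, 1, 1]
```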

There is no information in the circuit itself, save in the same improper sense that there is information in written text. The ink itself has no information, but rather human beings impose meanings on these symbols, and other humans know the meanings that are intended by these symbols and thus meaning is conveyed. For a one-bit adder or more complicated calculator to do any adding, a human being must impart some input that he understands as representing numbers, and then the circuit will yield an output that is interpreted by the user numerically. This works only because of our interpretation that we impose on the inputs and outputs, and our deliberate wiring of the circuit so that the relationship between input and output reflects the logical relationships we know to exist between the addends and the sum.

Even sophisticated computers are little more than complicated arrays of one-bit adders, subtractors, switchers, and other components subject to the same analysis discussed above. We can use computers to do many other things besides arithmetic because we impose further interpretations onto simple binary bits. Some strings of bits, for example, we interpret as ASCII character codes, and instruct the computer to respond to these bits by rendering the graphical alphanumeric characters with which we are familiar. The computer does no thinking whatsoever, not even on a rudimentary level. It simply does what it must, like any other circuit, regardless of whether we interpret its output arithmetically or lexically. In fact, we count on the computer not to think, or else we could not predict its results. We need it to be a complete slave that merely implements our algorithm, upon which we impose an interpretive scheme that makes the circuit useful to us.
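The dependence of "meaning" on interpretation is plain even in a single byte: the same physical state of eight two-state components can be read as a number or as a character, and neither reading inheres in the bits.

```python
# One byte -- one physical state of eight two-state components.
byte = 0b01000001

as_integer = byte          # arithmetic interpretation
as_character = chr(byte)   # ASCII interpretation, imposed by convention

assert as_integer == 65
assert as_character == "A"
```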

So-called quantum computing does not really take us beyond implementation of algorithms. Quantum superposition allows a "qubit" to occupy not merely one of two discrete states but a continuum of superposed states, so that a register of n qubits spans a state space of 2^n amplitudes, permitting exponential speedups for certain classes of problems. Such a tremendous increase in capability will certainly allow us to implement far more complicated and resource-heavy algorithms, but it does not take us into the realm of self-determining autonomous artificial intelligence, nor does it take us an inch closer to this chimera.

Just as a one-bit adder does not know that it is adding, even the most sophisticated chess program does not know it is playing chess. This is an interpretation that we impose on electrical activities. We have written the program so that the output will be interpreted as "King's pawn to K4." The ability of the program to simulate intelligent outputs lies not in the cleverness of the hardware or software itself, but in our ability to reduce complex activities to relatively simple algorithms, which a computer can then mindlessly apply iteratively or recursively, much more quickly and consistently than we could. It is consistent not because it is more intelligent, but because it is completely unintelligent and will never deviate from the algorithm.
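The mindless recursion described here can be sketched on a game far simpler than chess (toy code; the game and function name are illustrative, not any real engine's method). The program evaluates every line of play without any notion of "winning" or "stones"; we interpret its boolean output as game knowledge.

```python
def current_player_wins(pile):
    """Exhaustive game-tree search for a simple Nim variant: players
    alternately take 1 or 2 stones; whoever takes the last stone wins.
    The recursion blindly tries every move to the bottom of the tree."""
    if pile == 0:
        return False  # no move left: the previous player took the last stone
    # The mover wins if *some* move leaves the opponent in a losing position.
    return any(not current_player_wins(pile - take)
               for take in (1, 2) if take <= pile)

# Known result for this game: the mover loses exactly when the pile
# size is a multiple of 3.
assert current_player_wins(4) is True
assert current_player_wins(3) is False
```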

Consider the famous IBM chess computer, Deep Blue. When Deep Blue beat Garry Kasparov, what was amazing to me was not that a computer beat a human, for indeed, I have lost at chess to much simpler programs (less than 1 MB!) without supposing that my IBM 386 was on the road toward achieving sentience. On the contrary, it was far more impressive that Kasparov could win some games against the machine, which was going through millions of different permutations before making a move. Obviously, no human can exhaustively go through millions of possibilities applying some complicated algorithm in the time allotted between moves, yet Kasparov was somehow still able to keep the match competitive. This means there was something in his insight, experience and intuition that enabled him to perform almost as well as if he had explored all these possibilities, without actually doing so. That is the mark of true genius, and far more astounding than solving a problem by the "brute force" method of plowing through millions of possibilities. To me, it is much more impressive when someone can glance at some complicated problem and confidently say "this is wrong" or "this is the solution," than when they have to pedantically grind out the solution mechanically. For example, someone of superior intelligence can intuit the binomial factorization of a polynomial on inspection, rather than mechanically apply the quadratic formula or some algorithm for higher order polynomials.

Deep Blue did not know it was playing chess any more than Battle Chess does. Its program was a synthesis of multiple algorithms mindlessly applied. Indeed, it relied more overtly on human assistance than most programs. Its handlers had the option of choosing from among different strategic algorithms, and then they read the output and made the corresponding move on the board. Programs like Battle Chess, by contrast, execute algorithms without human intervention, and make their own moves on screen, so that the human player only needs to interpret the graphical output as positions of chess pieces. These self-executing programs are not performing interpretive acts, but rather such acts are made unnecessary by the algorithmization of certain functions (selection of strategy and translation into spatial representation). The potency of computing lies in our ability to simulate the outcome of intelligent ratiocination with repeatable algorithms. This works best when simulating rational activity where the logical relations among terms can be unambiguously defined, as in discrete mathematics. Then we have only to encode an algorithm that syntactically reflects these logical relations.

It is an enormous conceptual error to equate the physical states of electronic media with bits of information. When we say that a physical disk has so many bits or megabytes of storage space, this really means that it has so many possible physical states that our reader could distinguish as bits. Truth be told, a disk has infinitely many possible physical states, since its properties have continuous measure, but of course we only mean states that are enumerably distinct vis-à-vis some process imposed on them that yields distinct outputs. When scientists claim that the universe is a big information processor, that is simply a philosophically inept way of saying that the universe goes through many different physical states, each of which may yield different outcomes. The metaphor of information is unnecessary and misleading.

Consider the question of how many bits there are in a storage medium. The answer we give is relative to our ability to distinguish outputs. A "bit" is an interpretative scheme we impose on physical states of electronic media, insofar as we instruct our hardware and software to yield different outputs according to the physical state. We may use 8 bits (1 byte) to code an ASCII character, because we instruct the computer to display a different character depending on which of the 256 possible physical states that particular piece of medium is in. If we wished, we could make it so that only 128 different characters were represented by these 256 possible states, with two states representing the same character. This would involve some redundancy in our coding scheme, so we would effectively represent only 7 bits of information with these same 256 states. This shows that there is some subjectivity in the concept of a "bit," which cannot be defined simply by counting possible physical states.
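The arithmetic here can be checked directly: under a hypothetical redundant code in which pairs of states display the same character, 256 physical states convey only log2(128) = 7 bits of distinguishable output, not 8.

```python
from math import log2

physical_states = 256

# Hypothetical redundant code: states 2k and 2k+1 display the same
# character, so only 128 outputs are distinguishable.
distinct_outputs = len({state // 2 for state in range(physical_states)})

assert distinct_outputs == 128
assert log2(physical_states) == 8.0   # bits of physical state
assert log2(distinct_outputs) == 7.0  # bits of distinguishable output
```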

Let us take things further. How much information (in the improper sense) is in a 1 megabyte image? If a "bit" were an objective unit of information, we should be able to say that all 1 megabyte image files have the same amount of information. Yet there are ways to streamline our code, so that, instead of encoding the color for each individual pixel (as in a bitmap file), we use a shorthand that tells the program to repeat a color over an area, which is useful for images that have the same color over sizeable areas. This is why a GIF file can be much smaller in size than a bitmap, yet still have the same image quality. (Another approach, used by the JPEG format, is lossy transform coding, which discards visually insignificant detail and can likewise produce images of comparable apparent quality at considerable savings.) In the end, both the bitmap and the GIF represent the same image with equal quality, so they have the same information content even though they are different in size measured in bits of electronic medium. Here it is not even the case that the bits in a bitmap are outright redundant, for each encodes a distinct pixel, and there is no useless code. It is simply a matter of a better code being available, that, under certain conditions, allows us to convey the same information with fewer bits. However, if each pixel were of a different color, there would be no savings in storage space.
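The "repeat a color over an area" shorthand is essentially run-length encoding. (The GIF format actually uses LZW dictionary compression, but the principle of exploiting repetition is the same.) A minimal sketch:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as [value, count] runs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Recover the original pixel sequence exactly; no information is lost."""
    return [p for p, n in runs for _ in range(n)]

# A row with large uniform areas compresses dramatically...
uniform = ["blue"] * 90 + ["white"] * 10
encoded = rle_encode(uniform)
assert rle_decode(encoded) == uniform   # same image, same information
print(len(uniform), len(encoded))       # 100 pixels vs. 2 runs

# ...but a row where every pixel differs yields no savings at all.
varied = [f"color{i}" for i in range(100)]
assert len(rle_encode(varied)) == len(varied)
```

Both representations decode to the identical image, illustrating the point: equal information content, unequal counts of physical "bits."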

When different states represent the same object, they are effectively indistinguishable as far as information is concerned. Such states are no more distinct bits of information than a CD at 15 degrees Celsius is in a different information state from one at 16 degrees Celsius. We select which physical states correspond to information according to which ones produce different outputs.

The above analysis of "information" distinguishes information states by different outputs, yet there is more to information than output. After all, any process in nature has an effect or output, but it is not thereby informative. Following the proper sense of 'information,' a process is informative if it produces representations that convey an intelligible concept to some intellect. True information involves more than plurality of possible outputs, but consists primarily in the conceptual meanings that we understand the signals to represent. The signals have no intrinsic meaning, but we assign meanings to signals as we see fit, with as little or as much redundancy as we choose.

"Intelligence" (literally, "to read interiorly") is an essentially inward-looking faculty, involving the apprehension of concepts, images of the external world, real or perceived, in one's own mind. By contrast, when modern thinkers speak of "artificial intelligence," they pretend to define the "intelligence" of an entity by its output. Yet we should not consider something intelligent if it simply mimics humans exteriorly. By calling it intelligent, we suggest that it really understands what is going on. A one-bit adder does not know it is adding, and it is hard to see by what alchemy an entire array of adders should know what it is doing. Since a computer merely processes signals, there is nothing it does that could not in principle be accomplished by people sending signals to each other (e.g., by semaphore). Even the most strenuous advocates of artificial intelligence would be hard pressed to contend that such a collective could constitute a self-subsisting "I" that knows itself.
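The one-bit adder mentioned here can be sketched in a few lines: each stage merely applies Boolean identities to signals, and the chain "adds" only under our interpretation of those signals as binary numerals.

```python
def full_adder(a, b, carry_in):
    # A one-bit full adder: pure signal transformation by Boolean
    # identities, with no "knowledge" that arithmetic is occurring.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x_bits, y_bits):
    # Chain one-bit adders, least significant bit first; the carry
    # "ripples" from each stage to the next.
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 5 (101) + 3 (011), least significant bit first:
print(ripple_add([1, 0, 1], [1, 1, 0]))  # [0, 0, 0, 1], i.e., 8
```

That the output list means "eight" is our reading of the signals; nothing in the cascade of Boolean operations refers to numbers at all.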

Modern science is ill-equipped to deal with a subjective, self-subsisting "I" because of its self-imposed epistemological shackles, restricting itself to empirical phenomena. In neuroscience, this means limiting reality to the corporeal representations of thought (e.g., neural signals), rather than acknowledging the intensional meaning of thought as real. It is one thing to say that you do not know anything or wish to affirm anything about the incorporeal, but it is intellectually dishonest to pretend to have disproved the reality of the incorporeal, when you have willfully blinkered yourself to the incorporeal a priori. Many scientists do this sort of thing all the time; when dealing with natural theology or intelligent design, for example, they want to say both that we cannot know anything about God and that we can know that God does not exist or act in nature. The epistemic blindness of naturalistic science is a topic unto itself, best reserved for another work. Suffice it to note that it is like a deaf man saying there is no such thing as sound, and the proof of this is that he has got along just fine perceiving things by sight. He will never appreciate music, and those of us who do appreciate it must deny that it exists in order for the deaf to consider us rational.

Those of us who do not care to be esteemed by the deaf, and are willing to declare that thought, as distinct from its neural representation, is real, have only to discern how its existence is to be accounted for, in light of what is known about other aspects of physical reality. There can be no slicing of the Gordian knot; we must take reality as we find it, all of reality, including that which does not neatly fit into an Epicurean materialist philosophy.

Continue to Part III

© 2010 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org
