Part I
1. Dogmatic Skepticism
2. Early History of the Scientific Method
2.1 Thales of Miletus
2.2 Plato’s Mathematical Physics
2.3 Aristotle’s Rationalist Physics
2.4 Medieval Natural Philosophy
2.5 Scientific Methodology Before Galileo
3. Galileo and the New Science
4. Baconian Empiricism
5. The Victorian Myth of Objectivity
6. Scientific Method in Modern Practice
Part II
7. Falsifiability
8. Scientism in Analytic Philosophy
8.1 Physical Materialism
8.2 Biological Materialism
8.3 Ontology in General
8.4 Limitations of Materialist Anthropology
9. The Poverty of Empiricism
10. Paranormal and Supernatural Claims
11. Non-Repeatable Experience
Since the nineteenth century, increasing numbers of Western academics have gravitated toward the twin philosophical errors of scientism and positivism. By ‘scientism,’ I mean the general notion that the natural sciences have a monopoly on truth, so that all claims, including the philosophical, theological, and historical, must be in conformance with the truth established by scientific inquiry. Positivism is a more formal philosophical doctrine, formulated by Auguste Comte (1798-1857), which holds that all fields of human inquiry ought to adopt the methodology of the natural sciences, so that all knowledge is rigorously scientific. Positivism presumes scientism, since it is only because we consider scientific knowledge to be uniquely verifiable that we should demand all other human knowledge be subjected to scientific methodology. Both positivism and scientism assume that the methods of the natural sciences are uniquely equipped to help us obtain reliable knowledge.
The methodology of the natural sciences is generally characterized as ‘empirical,’ meaning it is grounded in physically observable phenomena. The idea that only empirical methods should be used in natural science, known as ‘empiricism,’ is widely assumed by nearly all scientists, as well as most other academics. In an empiricist academic culture, scientism necessarily implies that empiricism is the only methodology that leads to reliable knowledge, and positivism necessarily implies that empirical methods ought to be the exclusive methodology of all fields of inquiry. Both of these assertions are patently false, as I show in this and other essays, and as should be evident to any careful thinker. Further, it is not even true that empiricism is the sole methodology of modern science. Any attempt to limit all theories of knowledge to empiricist criteria is unjustified, and needlessly restricts the scope of human inquiry.
The effect of this self-imposed empiricist straitjacket is to promote materialistic ideologies in academic discourse, and to dismiss contrary concepts out of hand without argument, as ‘unscientific’ and therefore unverifiable. This is logically invalid circular reasoning, since empiricism, by definition, can only deal with the physically observable, that is, the material. It is begging the question to dismiss discussions of metaphysics, theology, or any other endeavor that deals with the immaterial as unverifiable simply because they cannot be verified by empirical means. One might as well complain that sound cannot be seen, or that light cannot be heard. Uncritical acceptance of empiricism as the sole valid epistemology can have a noxious effect on intellectual discourse, as entire bodies of knowledge can be dismissed without rational refutation, on the inadequate grounds that they do not admit of empirical validation.
The intellectual life of our culture is poorer for this, as contrasted with that of ages past, when men of science did not fear to philosophize, theologize, or explore the paranormal and the spiritual. This broader scope of intellectual endeavor cannot be attributed to scientific ignorance, for it included Newton, Descartes, and practically all the great luminaries of the Age of Reason. On the contrary, it is in the academic culture from the late nineteenth century onward that we find a deficiency, namely a profound philosophical ignorance and metaphysical naïveté, manifested in the uncritical assumption that empiricism is the only path to knowledge.
It is common for the scientifically educated (or those who admire the sciences) to posture as ‘skeptics’ or ‘critical thinkers,’ who are loath to believe in anything that cannot be validated by empirical methods. This attitude can be found in numerous publications and websites that include some variation of the word ‘skeptic’ in their title. Truth be told, this sort of skepticism is really dogmatic materialism, as should be clear from an explanation of the classical meanings of dogmatism and skepticism.
The notion of skepticism as a philosophical stance dates back to the ancient Greeks, where we find a distinction between ‘dogmatic’ and ‘skeptic’ philosophers. These terms did not have their modern connotations, where the former entails uncritical assertions while the latter refers to critical thinking. Rather, a ‘dogmatic’ philosopher was one who asserted a positive opinion or doctrine, and held that certain philosophical truths could actually be known. A ‘skeptic,’ by contrast, held that no philosophical truths were knowable, and asserted no positive theses. The philosophical activity of skeptics consisted mainly in showing the weaknesses of various assertions made by dogmatic philosophers, hence their name, which is derived from skepsis, ‘an examining, consideration, or observation.’
According to this classical distinction, practically all modern thinkers, and certainly all scientists, are dogmatists. We do not hold the position that nothing is knowable, but believe that the natural sciences can attain knowledge of the way things really are. Every science asserts a set of theses as its body of existing knowledge, upon which further development in our knowledge is to be based. While opinions may change regarding the truth of this or that thesis, all scholars in a field at a given time do believe that their research can obtain real knowledge. This is a striking departure from the ancient skeptics, who saw physical theories as idle speculation, and could discern no way to determine which of the competing theories was true. This was a reasonable position at the time, since Greek physics was highly speculative, having only a limited scope of observations at its disposal. Now, with centuries of substantive achievements in the natural sciences, this kind of skepticism is no longer sustainable.
Modern ‘skeptics’ are only selectively skeptical, scrutinizing claims that contradict their positive doctrines (Gk. dogma = opinion, decree, resolution, doctrine) about how the physical world works. These doctrines include the so-called ‘laws of nature’ inferred from empirical research. The modern skeptic is a metaphysical naturalist, meaning he assumes that everything that happens must be explainable in terms of some natural principle. This presumes that all that exists is governed by physical laws that admit no exception. This is a philosophy of nature peculiar to the modern era. Aristotle saw natural principles as describing what usually happens, and even the early modern scientists spoke of the tendencies of bodies. It is only in the eighteenth and nineteenth centuries that scientific thinkers generally began to speak of natural laws as ironclad necessities, admitting no exceptions whatsoever. Without this philosophical assumption, there is no basis for the modern skeptic’s offhand dismissal of any testimony or evidence of an event that contradicts the so-called laws of nature.
Ironically, the secular skeptic’s assumptions that everything in nature must happen for a reason, and that such reasons come in the form of universal laws, are profoundly theological in nature, as they presume a unified rationality governing the universe. In the absence of a God—whose existence most skeptics either declare unknowable or deny outright—there is no reason to expect us to be so fortunate as to live in a universe where everything happens according to laws with universal applicability, or for any rational cause at all. Secular thinkers must accept this boon as a lucky chance, without which we would not be able to analyze anything that happens in nature, as the universe would consist of a hodgepodge of disparate events, without any unifying rational principles. They neglect to point out that our scientific theories consisting of universal laws arose historically because of an a priori conviction in an overarching rational Being governing the universe. Though skeptics deny or ignore theism, they uncritically retain an important relic of monotheism, the belief that a single rational order must govern the entire universe.
Concomitant with this belief is the other theistic thesis, namely that everything must happen for a reason. Skeptics make this assumption when they declare that even currently unexplained phenomena must be subordinate to some natural principle, known or unknown. They express this by saying, “There must be a rational (or scientific) explanation for this.” This confidence that there is a reason for everything would be utterly misplaced in the absence of a single rational Being governing everything. The incongruity of this belief with atheism is evinced when the atheist declares, absurdly, that no reason or cause is needed to account for the universe or the natural order as a whole. Thus we are to believe that the entire universe or Nature can come into being (and be sustained in being) without any rational cause, yet we cannot admit that even the slightest phenomenon within the universe can happen without a rational explanation. This atheistic blind spot is but one of many exhibited by modern skeptics, which is why I hold they are actually dogmatists with a determinate set of doctrines.
Metaphysical naturalism is not the only doctrine characteristic of modern skeptics; they also tend to defend whatever is the perceived prevailing orthodoxy among academics or other intellectuals at a given time. For example, they will disdain any discussion of extraterrestrial visitations, without examining the quality of individual testimonies, even though there is nothing contrary to scientific law in such encounters. It simply strikes them as ridiculous or improbable, or at any rate insufficiently substantiated, yet they do not hesitate to affirm as certain that there is life on other worlds, though there is no positive empirical evidence of this. They insist without evidence that extraterrestrial life must exist, due to their bias against traditional religions which depict man as a being of singular importance, yet they refuse to take seriously the idea that aliens are visiting the earth, though this at least is supported by thousands of witnesses, many of whom are competent enough to be unlikely to err. I am not saying that alien encounters are real, only that so-called skeptics are inconsistent for rejecting these claims for insufficient evidence, when they accept the reality of extraterrestrial life on zero positive evidence. In both cases, they are informed by their a priori assumptions about the way the world must always work, as well as their peculiar notions about what is fit for academic discussion.
The internal contradictions of modern skepticism reflect underlying problems with a strictly empiricist epistemology. The reality is that science is not driven by empiricism alone, but also by a priori theories about how the world works, as well as highly subjective judgments about what is plausible. We can see this by scrutinizing how scientific knowledge and methods actually developed historically.
It is something of an anachronism to speak of a ‘scientific method’ before the nineteenth century. For most of Western history, natural science was considered part of philosophy, and there was no single methodology that held an exclusive claim to validity. Even today, the term ‘scientific method’ is used equivocally. Sometimes it is improperly used to mean methodological naturalism, namely the assumption that we should seek natural causes for observed phenomena. More properly, it refers to the process of establishing physical theses through (1) observation, (2) hypothesis, and (3) experiment. While there was no ‘scientific method’ as such before the modern era, there was certainly much substantial scientific achievement, and we can find elements of the modern scientific method among ancient philosophers.
The first two steps, observation and hypothesis, are frequently attributed to Thales of Miletus (early 6th cent. BC), but none of the extant fragments about Thales give us much insight into the methodology behind his conjectures. Aristotle offers some observations in support of Thales’ famous thesis that the primary substance is water, but he is not certain whether Thales himself grounded his opinion in such observations. All that can conclusively be attributed to Thales, methodologically speaking, is that he formulated hypotheses or conjectures attempting to explain natural phenomena in terms of fundamental natural principles.
Even here we must be cautious. Many modern textbooks try to make Thales the founder of science in the modern sense of methodological naturalism, always seeking natural causes rather than supernatural or paranormal explanations. The ancients did not have a sharp distinction between the natural and supernatural orders; indeed, these concepts were developed much later, by medieval Christian philosophers. The Egyptian ‘wise men’ gave physical explanations of things that today would fall outside of a naturalistic ideology: e.g., events on earth were affected by how the fire in the stars was mixed; the soul was naturally immortal and passed into other bodies. The Pythagorean school was, from a modern perspective, an odd blend of mathematics and mysticism, but this distinction was not recognized by the ancients themselves. As late as the Roman era, Philo of Alexandria lamented that his brother converted to ‘philosophy’ from Judaism, as if philosophy were an alternative religious sect. Indeed, many philosophical schools of classical antiquity—most notably the Pythagoreans, the Academics, and the Stoics—presented doctrines that were recognizably theological or religious, both then and now.
In Thales himself we see evidence of this seamless mingling of the natural and supernatural, as well as the material and immaterial. Aristotle quotes Thales as saying that a lodestone has a soul because it causes movement to iron. According to Diogenes Laertius, Hippias had also asserted that Thales attributed soul to inanimate objects. It is in this sense that we should understand Thales’ enigmatic statement, “All things are full of gods,” i.e., they contain eternal principles of motion. The fact that these ‘gods’ are in the universe does not imply that they are not truly divine. It was common to regard gods as being in the universe, whether they were the gods of Olympus or philosophical conceptions such as the Logos of the Stoics. Thales ignored poetic myths about the Olympian gods, seeking to explain a cosmos that was far greater than their domain, encompassing the highest heavens and antedating the Titans. Thus he used the generic term theos, which can mean ‘god’ or ‘divinity,’ as did later philosophers such as Plato and Aristotle. In this vein, we may understand the apophthegms of Thales cited by Diogenes Laertius: “Of all things that are, the most ancient is God, for he is uncreated,” and, “The most beautiful is the cosmos, for it is God’s workmanship.” This assertion of a divine creator is not inconsistent with Thales’ belief that all things come into being from water. The Stoics, for example, similarly believed that fire was the primary substantial principle, from which everything came into being and to which everything will return, yet they also believed in a divine order or Logos.
As a mark of the modern reader’s eagerness to find his own thoughts in ancient authors, many scholars have strangely interpreted Thales’ statement, “All things are full of gods,” to mean the diametric opposite: “There are no gods in anything.” This is eisegesis of the worst sort, evincing the sort of willful ignorance that prevents modern man from learning anything from the past. The notion that Thales was a philosophical materialist is contradicted by historical testimony, and if he were such a manifest denier of all divine activity, he would hardly have enjoyed the acclaim he did as the greatest of sages, given the cultural circumstances of the time. It is far more probable that Thales really meant what he said, that nature is permeated with divinity, as this is consistent with his inclination to see soul or life in various objects. We must recall, as Etienne Gilson points out in God and Philosophy, that the ancient Greek notion of divinity was much broader than our own, applicable to any immortal living being.
Throughout the pre-Socratic era, physical theories by various philosophical schools consisted of little more than speculative inferences from casual observations of nature. They may have differed with Thales over whether the primary substance was water or fire, one or several, but the methodology remained predominantly conjectural. There was as yet no systematic means of verifying which of the many theories, if any, was true. Accordingly, not a few philosophers, most notably Socrates, held that physics was useless and unknowable.
The Pythagoreans were arguably the earliest philosophers to recognize the importance of mathematics in nature, yet they applied arithmetic and geometry quite crudely to the task of physical explanation. Although their development of the abstract mathematical arts was quite sound, their attempts to establish a numerical basis for nature amounted to a sort of mysticism, assigning correspondences between numbers and physical beings, much like Hebrew Kabbalah. Though this is strange to us, it is consistent with the origins of Pythagorean mathematics, which came to the Greeks via Egyptian priests. The most ancient mathematical inquiries were infused with religious meaning, and any attempt to separate this influence is ahistorical.
Though Plato hardly ever mentions the Pythagoreans, it is hard to avoid seeing similarities with them in his philosophizing, particularly when he resorts to geometric accounts of the world. Like the Pythagoreans, Plato grounded reality in perfect abstractions, though he did not confine his Ideas to the mathematical. Nonetheless, we see a heavy emphasis on numerological and geometrical correspondences in the Timaeus, where he gives an elaborate account of the creation of the universe and its contents. Plato’s natural philosophy was heavily theological and metaphysical, to the point that physical objects served only to exemplify these deeper Ideas.
Most scientific thinkers today would pronounce against Platonism with their mouths, as it is manifestly anti-empirical, yet in practice theoreticians often espouse a sort of Platonic thinking. Mathematical constructs, algebraic or geometrical, are treated as if they have a life of their own, and our universe and its contents are but particular manifestations of them. Strangely, we posit ethereal mathematics as the solid foundation of physical theory.
The language of empiricism is so prevalent in our society, however, that it is common to speak about ‘discoveries’ in pure mathematics as we would speak of an empirical scientific discovery, notwithstanding the fact that we are now dealing with non-physical abstractions. In mathematics, there are no sensory observations of the physical world from which we draw our hypotheses, nor is there confirmation by physical experiment. Mathematics has its own self-contained method of validation by logical demonstration, rendering empiricism unnecessary. Even methodological naturalism is useless here, since we are not dealing with natural (i.e., physical) objects. Psychologically speaking, it may be the case that our mathematical abstractions (e.g., lines, planes, cubes) are derived from our observations of physical objects, but the formal relations among mathematical objects have no logical dependence on the state of affairs in the physical world.
Although mathematics has no need of empiricism, by the same token it cannot suffice to give us knowledge of the physical world, precisely because it is independent of the physical world. We need to know which mathematical formalisms can be applied where, and this can only be known by observing the physical world. Yet we do not seek merely to construct a catalogue of all the physical data in the cosmos, but to fit it into a rationally intelligible structure, i.e., a theory. It was really Aristotle who would first put the study of nature on a solid rational foundation, positing all the basic elements of empiricism, yet emphasizing the role of logical reasoning in the development of physical theory.
All three elements of the scientific method—observation, hypothesis and experiment—can be found in Aristotle, but he did not make this method his primary mode of analysis. Rather, the deductive demonstration, or syllogistic argument, was the primary means of acquiring truly ‘philosophical’ (i.e., scientific) understanding. This is because Aristotle considered knowledge to be genuinely scientific only when it can explain how determinate phenomena result from more fundamental principles; there is nothing scientific in just giving descriptions of facts without explanation. This belief characterizes modern science as well: for example, scientists will not regard a claim to have achieved cold fusion as scientific unless the claimants can give a plausible explanation of the mechanism or process by which this is achieved, in terms of known physical theory. It will not suffice to say, “We don’t know how it works, but it works.” Note that this epistemology is patently non-empirical, since we are rejecting an experimental result on the grounds that it is not explainable in terms of accepted physical principles. Indeed, logical and mathematical demonstration from first principles remains an important aspect of the development of physical theory; this is what gives our understanding of nature a causal or rational structure.
Still, as Aristotle acknowledged in the Posterior Analytics, the premises from which we make our demonstrations often are arrived at through induction, and an induction can only begin with sensory observation. By observing many particular objects, we may gain an intuition of a universal species or genus, that is, a type of substance abstracted from accidental determinations.
The mental construction of a universal is not a demonstration, but an abstraction. I see many different creatures with similar characteristics, and I define them all to be ‘feline.’ I may even decide upon a set of features that would identify any future creature I encounter as feline or not. While I am free to make this definition, it becomes problematic to make inferences about all felines on the basis of those few felines I have observed. No matter how many creatures I observe, this will be an infinitesimal quantity compared with the infinite number of potential felines. This is the problem of induction, namely that knowledge of many individuals need not give knowledge of the universal, yet only the universal is the subject of rational demonstration and physical theorizing.
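The point may be put schematically in modern notation, where F is any predicate (such as ‘feline’) and a₁, …, aₙ are the observed individuals. No finite stock of singular premises deductively entails the universal conclusion:

    F(a₁), F(a₂), …, F(aₙ)  ⊬  ∀x F(x)

however large n may be, since the quantifier ranges over indefinitely many unobserved individuals. The universal is reached by abstraction or conjecture, not by demonstration.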
Recognizing the limitations of inductive reasoning, Aristotle understood that hypotheses about universals derived from finite observations were not proven, so he required that such hypotheses be tested against future observations. This testing is what we would call experimentation, yet it did not figure prominently in the Aristotelian approach to scientific development. Although Aristotle himself was a diligent and meticulous observer, especially in biology, and he held that all knowledge ultimately comes through the senses, nonetheless in practice most of his scientific corpus relies heavily on abstract ratiocination. This approach merits some explanation.
What we take for granted as intuitive logic or rationality—so much so that it is invisible to us, and we omit it from our definition of scientific method—was developed in large part by Aristotle, arguing in a climate where the rules of logical demonstration were not universally agreed upon or held as self-evident. Indeed, he devotes a few dozen lines of the Posterior Analytics to arguing against those who believed in the validity of circular demonstration. In an intellectual culture where there were countless competing theories and assertions with no uniform basis of criticism, a systematic formal logic must have seemed a godsend, bringing order and structure to scientific inquiry. It is no small thing to distinguish valid inferences from bad ones, even if we cannot yet agree on which premises are true. What is more, the very structure of logical argument, with premises linked in relationships of dependence, is a perfect image of what Aristotle held philosophical knowledge should be: explaining nature in terms of more fundamental or primary principles.
Aristotle distinguished a philosopher’s understanding from the ‘accidental’ knowledge of a sophist by maintaining that a philosopher knows “the cause on which the fact depends, as the cause of that fact and of no other, and further, that the fact could not be other than it is.” In other words, scientific understanding depends on knowing the more fundamental principle on which a fact depends as a matter of necessity. Most strikingly, he insists that the object of philosophical knowledge “cannot be other than what it is,” imposing a requirement of logical necessity on scientific knowledge. Lest it be thought I am erroneously applying a thesis from one of Aristotle’s logical works (the Posterior Analytics) to his physics, I should point out that he repeatedly uses examples from physics and mathematics in his arguments, and that his Physics, as is well known, relies heavily on abstract logical demonstrations.
Surely Aristotle is demanding too much of a physical theory to require logical necessity of it. The Philosopher freely confesses that not all knowledge is so demonstrable; rather, each science demonstrates its theses based on the indemonstrable first principles (i.e., axioms) particular to it. For Aristotle, the correct methodology of science is to begin with sound first principles, and then to work your way down to increasingly specific applications through demonstration. This works well for an a priori science like mathematics or ontology, but it is the exact opposite of our presently preferred method in the physical sciences. The modern empiricist holds, reasonably enough, that we should begin with observations of determinate phenomena, and then work our way upward to more general causal explanations, as we construct a larger body of measurements.
Aristotle posits two methods by which a science can expand, both of which involve adding extreme terms to a rational demonstration. The first method has the form: “A is predicated of B, B of C, C of D,” etc. For example, the thesis, “An animal (B) is living (A),” is followed by, “A biped (C) is an animal (B),” and then, “Man (D) is a biped (C).” This approach, though perfectly logical, is the exact opposite of the procedure of a science grounded in determinate observations. We should begin with observations of particular things, and then classify them into useful categories, and then organize these into higher categories, insofar as we can find common dynamic principles in them. Aristotle's approach, by contrast, has the fatal weakness that it relies heavily on the correct choice of first principles from the beginning, rather than working our way cautiously toward a knowledge of first principles from what is more directly intelligible to us through observation.
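Set out schematically, this downward expansion is a chain of syllogisms, each new minor term extending the series:

    All B are A.    (An animal is living.)
    All C are B.    (A biped is an animal.)    ∴ All C are A.
    All D are C.    (Man is a biped.)          ∴ All D are A.

Every conclusion in the chain depends on all the premises above it, so an error in the first principle (All B are A) propagates through every subsequent thesis.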
A demonstrative science can also expand laterally instead of top-down, as when a major term (A) is applicable to two minor terms (C and E). The example given (Posterior Analytics, I, 12) is that the indeterminate term ‘number’ (A) may be shown to be predicable of a determinate odd number (B), and then of any particular odd number (C). In a lateral expansion, we can show that ‘number’ (A) is predicable of a determinate even number (D), and then of any particular even number (E). Interestingly, in the example of lateral expansion, Aristotle includes some bottom-up inferences, when we go from proving something about a determinate odd (or even) number, and expand its applicability to any particular odd (or even) number. Even demonstrative science allows for bottom-up exploration, yet Aristotle did not emphasize this, because he was fixated on the importance of establishing sciences on sound first principles and rational demonstrations therefrom, in order to end the chaos of pre-Socratic philosophy.
The Stagirite naturally recognized that there were dangers of ‘demonstrating’ falsely, due to the falsity of one’s first principles, or various errors in one’s assessment of the categorical relationships among entities. Such errors would eventually result in the empirical contradiction of Aristotle’s physics, to be discussed more extensively in another work. Here I will note only that the empirical falsification of certain elements of Aristotelian physics does not excuse us from the responsibility to ground our theories in an ontologically coherent structure.
A well-founded science, Aristotle claims, must be grounded in necessary premises; that is, premises which are true always and everywhere. This is a gratuitous requirement, informed by the ancient pagan philosophical conviction that everything in nature occurs by necessity. For a non-fatalist, it is far from obvious that the actual natural order is the only possible natural order, in which case the fundamental principles of physics are not necessary. It is possible that, for an a posteriori science such as physics, Aristotle intended necessity only in a weaker sense, namely that the first principles of physics in fact do apply everywhere and always in the cosmos, but it did not have to be that way. However, Aristotle did not distinguish a priori and a posteriori as we do, before and after observing existence as it actually is. He indiscriminately regards both mathematics and physics as speculative disciplines that work through dialectical reasoning. It is likely, then, that he really intended logically necessary principles to be the ideal basis of physics.
This is not to say that Aristotle understood no nuance in the different kinds of dependence possible among physical entities. One distinction of his that is especially useful in our analysis of modern science is that between essential and accidental dependence. The examples he gives (Posterior Analytics, I, 4) are: (1) A beast dies when its throat is being cut; and (2) while a man was walking, lightning struck. In the first case, the predication is essential because the connection is consequential; that is, the beast died because of or by virtue of the cutting of its throat. In the second case, the predication is accidental, since the lightning was not due to the man’s walking or vice versa; the connection is one of coincidence only.
We may amplify this distinction with further nuance. For example, consider a man walking while there is daylight. The daylight as such does not cause him to walk, but nonetheless the connection between his walking and the daylight is not purely coincidental. A man is more likely to choose to walk when it is light, because he has judged it is safer and more fruitful to do so then, instead of at night. Thus there can be some limited causality even in non-essential relationships of dependence.
The distinction between essential and accidental dependence is today articulated by the scientific aphorism: “Correlation does not prove causality.” Aristotle gives an especially strong form of this statement, saying that even if a predication is true in every instance, this does not establish an ‘essential’ relationship. Science, for him, is concerned only with essential relationships, so it is not enough to show that a certain attribute always inheres in a certain kind of subject. It must do so necessarily or essentially. Aristotle’s understanding of causal, necessary, or essential dependence differs markedly from that of modern symbolic logic, like that of Russell. In modern logic, it suffices to show that a predicate is attributable to every subject in order to establish necessity. This is why symbolic logic, with Russell’s ‘material implication,’ is of little help in establishing physical causation, and why Russell, like Hume, thought it might be best to dispense with the notion of causality in science. Yet modern scientists, almost to a man, believe that their research brings genuine knowledge of real causes.
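The contrast can be made explicit in modern notation, taking S and P as schematic predicates. A material universal,

    ∀x (S(x) → P(x)),

is true so long as no S lacking P happens to exist; it records a fact of extension, not a reason. Essential predication in Aristotle’s sense demands something closer to the modal thesis,

    □ ∀x (S(x) → P(x)),

that the connection could not be otherwise, and no catalogue of instances, however complete, suffices to establish the latter from the former.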
Scientific explanation, for the Peripatetic, consists in explaining natural phenomena in terms of more primary principles. A ‘nature’ (physis) is a principle of motion, so the study of nature—physics—examines the principles by which things change. As noted, for a demonstrative proof to be well founded, it must be grounded in necessary premises, that is, premises which are true always and everywhere. Notwithstanding this emphasis on logical necessity, theses about physical relationships need not have universal applicability. Aristotle repeatedly characterizes what is ‘natural’ as that which happens ‘usually or always.’ Natural operations may admit of exceptions; nature may err or fail to achieve its end in particular instances. We may recognize such errors in biology, for example, yet nearly all modern scientists would agree that fundamental laws of physics admit no exceptions whatsoever. This is the aforementioned belief characteristic of metaphysical naturalism, which is generally assumed without philosophical justification.
Not only does Aristotle allow that natural causation may admit exceptions, but he even permits causes that are not physical. Behind nature is a set of ‘first principles’ which governs all reality, not just physics. This philosophy of first principles, discussed in Aristotle's books ‘after the Physics’ (ta meta ta physika), is what we call metaphysics, a term we now interpret as meaning ‘beyond physics’ or transcending it. When we appreciate that the natural order is not a matter of absolute necessity, but could conceivably have been otherwise, it is obviously meaningful to search for principles that are prior to even the most fundamental principles within the existing natural order. Intriguingly, Aristotle sought such principles even though he considered the natural order to be necessary, since he realized that there were more generic principles than those of physics. It is the object of ‘first philosophy’ to explore these most generic principles, which have applicability to all of the various sciences, each of which deals with one or another genus of entities. The most fundamental science deals with being as such. Today we might call this ontology or metaphysics, but Aristotle calls it theology, since “it is obvious that if the divine is present anywhere, it is present in things of this sort.” (Metaphysics, Bk. VI, c. 1)
Aristotle identified three theoretical sciences: physics, mathematics, and theology. If his discussion of physics seems overly dependent on abstract reasoning, we must keep in mind that he is considering only the deep theoretical basis of natural phenomena, not a merely practical or functionalistic description of nature. Even modern theoretical physicists do little empirical work, but devise elaborate theories mainly using the internal logic of mathematical constructs, without repeated recourse to experimentation at every step. Frequently, theoretical developments outpace experimentation, but at the end of the day we feel that we must verify a theory empirically to make sure we have not gone astray in our abstract reasoning. Aristotle effectively attempted to dispense with this need for experimentation by grounding physics in logically necessary principles.
The Aristotelian intellectual ethos of explaining natural phenomena in terms of increasingly fundamental, universal principles leads inexorably to a sort of monotheism, as ultimately everything must be referred to the most fundamental, perfectly universal principle. This is why Aristotle did not fear to call the first philosophy ‘theology’, and why other logically minded philosophers—notably the Academics and the Stoics—also pointed toward a single Deity or Divinity that transcended the Olympian pantheon. This philosophical theology did not abolish conventional religion, for it was traditionally acknowledged that the Olympians were children of the Titans, who in turn were born of Heaven and Earth (Uranus and Gaea). The God of first philosophy transcended even Heaven and Earth (which together formed the cosmos), as is clear from the sayings of various philosophers from Thales onward.
It is a persistent myth that the development of the sciences ceased or stagnated in Christendom after the fall of Rome. It is true that much of Western Europe languished in barbarism for several centuries, and it was not until the eleventh century that it would see a return to high culture. Still, classical learning was continually preserved and cultivated in the Eastern Roman Empire, which was devoutly Christian, contrary to the falsified post-Enlightenment histories that portrayed Christianity as opposed to higher learning. It was from the Byzantines that Greek learning would spread to Islamic culture, and from there the Aristotelian corpus would enter the West in the twelfth century.
Contrary to modern characterizations, medieval thinkers did not slavishly worship the ancient theories of Plato, Aristotle, and Galen, but made important modifications and improvements, openly contradicting ancient Greco-Roman science at times. Byzantine medicine improved upon Galenic medicine, contradicting several of Galen’s views, and Muslim doctors made further improvements that were incorporated into European medicine during the Renaissance. Archimedean statics, which was used in the engineering of simple machines, often contradicted the physics of Aristotle. Sophisticated mathematics was needed to construct the Hagia Sophia, which was a significant advance over classical architecture. These advances in learning culminated in the Byzantine humanist (i.e., study of the humanities) movement of the thirteenth and fourteenth centuries. If much of ancient learning was still revered, it was because it still held up on rational grounds. It is not the case that Greek learning died or stagnated in the medieval period, though it became concentrated in the East after the collapse of the Western Roman Empire.
Lest it should be said that the Latin Christians were dullards, they did more with the classical-Byzantine-Islamic tradition than anyone else, and in a span of three centuries (13th-15th) they equaled and surpassed their competitors. The existing structure of Western medieval society provided suitable conditions for such cultural flowering, once it was given the seeds of classical learning. The supposedly obscurantist Catholic Church was in fact the primary patron of the arts, sciences, and higher education, a role it would continue to hold until the nineteenth century.
In philosophy, the Parisian scholastics, most notably St. Thomas Aquinas, corrected the Averroist interpretations of Aristotle, and developed important metaphysical and physical distinctions allowing for a more exact analysis of philosophical problems. Bl. John Duns Scotus was the most nuanced in this regard. The richness and technical precision of Scholastic philosophy makes much of modern philosophy seem crude and amateurish by comparison.
The practical arts developed so rapidly in late medieval Europe that we may accurately speak of an ‘industrial revolution’ in the thirteenth century. This revolution encompassed advances not only in industrial and agricultural crafts, but also in artistic, literary, and philosophical endeavors, making it a much broader flourishing than the modern industrial revolutions of the late eighteenth and late nineteenth centuries, which were mainly technological in scope.
From the thirteenth century onward, Latin philosophers attempted to implement Aristotle’s intellectual program of grounding all human knowledge in rational demonstration from first principles. They practiced a sort of Aristotelian scientism, trying to make everything, including philosophy and theology, scientific in the rationalist Aristotelian sense. Far from being ‘anti-science,’ they may have been excessive in their ambition to make all knowledge scientific (or ‘philosophical,’ as would have been said), so that the Christian humanist movement of the fourteenth to sixteenth centuries was in part an open reaction against this endeavor.
Nonetheless, Aristotle was not blindly worshiped by the Latin philosophers. At the supremely influential University of Paris, Siger of Brabant (c. 1240-1284) instructed: “It should be noted by those who undertake to comment upon the books of the Philosopher that his opinion is not to be concealed, even though it be contrary to the truth.” Archimedean and Ptolemaic theories, which contradicted Aristotle on many points, were taught side by side with the Aristotelian corpus, and the countless medieval commentaries on the classics evince significant departures from their sources.
The Latins made enormous progress in the mathematical and physical sciences from the thirteenth to the fifteenth century, following a generally Aristotelian paradigm. Jean Buridan (c. 1295-1358) developed his own physics independent of Aristotle, and Nicole Oresme (c. 1320-1382) combined this with Mertonian mathematics. By the fifteenth century these developments had become incorporated into the Scholastic tradition, so the ‘Aristotelians’ of Galileo’s day were far from being slavish followers of Aristotle in natural science.
Before the Galilean revolution in science, theories of natural philosophy were developed primarily by abstract ratiocination. Mathematical modeling played an important role in astronomy and mechanics, but these mathematical or ‘mixed’ sciences were believed to give only a computationally useful description of nature, not a true explanation of physical causes.
For example, the Ptolemaic model of astronomy contradicted Aristotle’s theory, since Ptolemy used eccentrics (circles with different centers), while Aristotle had homocentric spheres. Aristotle’s theory was deducible from a set of intuitive physical first principles, so it was considered to be the truly scientific or ‘philosophical’ account of the cosmos, while Ptolemy’s system was just a useful computational tool. The same was true of the ‘epicycles’ used in Ptolemy’s theory. Scholastics who used this computational device did not really believe that the planets moved in epicycles, for that would require breaking through the crystalline spheres of the Aristotelian system. It was just a calculating tool, as was the use of trigonometric tables to determine planetary positions. In this intellectual environment, it was only natural for most Scholastics to regard the Copernican model as just another mathematically elegant tool that was not a truly physical account of the cosmos. They knew from experience that a mathematical theory could ‘save the appearances’ (i.e., account for all observed measurements) without explaining physical reality.
This distinction between ‘mere mathematics’ and a truly philosophical (i.e., scientific) understanding of physics was grounded in sound reasoning. It is not enough to give an accurate quantitative model of a phenomenon in order to demonstrate that this is a true account of physical reality, since you do not necessarily know that this is the only possible model that would ‘save the appearances.’ The Scholastics knew from their own practice that it was possible for highly distinct, geometrically contradictory mathematical models to account for the same physical facts. Copernicanism was not appreciably more accurate in its predictions than the Ptolemaic devices in use, and even if it were, it would still lack a cogent physical explanation of heliocentrism.
Experimentation did not play a prominent role in natural science prior to Galileo, for a similar epistemological reason. The confirmation of a hypothesis (suppositio) by an experimental result or observation of nature would not suffice to demonstrate the truth of that hypothesis. The various mathematical models used in astronomy could all be said to be ‘confirmed’ by observations, yet they could not all be true physical accounts of the cosmos. (We may add that the ancient Egyptians were able to perform accurate astronomical calculations assuming a flat earth.) In order for a hypothesis to be accepted as true, it must be demonstrated from first principles, not merely confirmed by experiment or observation.
This is not to say that Scholastic philosophers of nature completely disdained observation in favor of ratiocination. On the contrary, the Scholastics followed Aristotle in grounding physical knowledge in observation, giving it precedence over abstract argument. It is in this sense that we should understand the Scholastic aphorism: contra factum argumentum non est; against a fact there is no argument.
Moreover, there were fertile grounds for experimentation in the engineering sciences, which enjoyed rapid development from the thirteenth century onward. The theory of simple machines, known as Archimedean statics, sometimes contradicted Aristotelian natural philosophy, but these mathematical sciences were permitted to develop unimpeded, though they were held to have only practical rather than theoretical significance. In the fourteenth century, the Mertonian Scholastics developed a highly mathematical analysis of natural philosophy, including the concepts of velocity and acceleration that would be essential to Galilean dynamics.
We can appreciate why Galileo’s heliocentric theses were rejected by Scholastic philosophers, even prior to his trial before the Roman Inquisition (indeed, the ecclesiastical sanction was pursued by his academic enemies). Galileo could not demonstrate his theory from accepted first principles, but on the contrary he sought to overthrow the established principles of Aristotelian natural philosophy.
Even today, it is not clear how strong an experimental contradiction has to be before we will overturn a generally accepted theory. If only one data point contradicts the dominant theory, we call that point an outlier or a measurement error. If it is one experiment, we seek independent confirmation. If there are multiple studies contradicting an established theory, then we try to incorporate these new results into the existing theoretical structure. If this is not possible, we modify our existing theoretical structure as little as possible, trying to leave the most fundamental principles untouched.
This academic conservatism makes the development of scientific theory highly dependent on the peculiar developments of intellectual history. Since we try to fit new discoveries into existing theories with minimal alteration of fundamental principles, our modified theories depend on the historical accident that the existing theory, now known to be inadequate, happened to have been proposed first. Conversely, the acceptance of a new theory is frequently the result of the older scientists dying off. This sort of development is not scientific, but sociological, and gives us little confidence that currently popular theories are more correct simply because they are more recent.
Aristotelian science was especially resistant to change because of its theoretical methodology, not simply because of general academic inertia. While Aristotle recognized that primary existence belongs to individual objects and not to Platonic essences, he nonetheless seemed to think that once we have formally defined the essence of a thing, it is fully explained and there is nothing more to say. For this reason, Aristotelian science in practice was highly resistant to new observations that might cause one to re-examine the classification of natural objects. The world would have to await Galileo’s nuova scienza, which analyzed the physical world dynamically rather than in terms of static essences, in order for the full potential of the experimental method to be realized.
As a master logician, Aristotle clearly apprehended that induction from particulars is insufficient to establish universal truths, an evident fact that is routinely neglected by modern science. Scientists today simply make a philosophical assumption that nature must be governed by universal laws, so any large sample of observations suffices to establish a law. Recognizing the insufficiency of inductive proof in physics, Aristotle sought to establish physical theory on deductions from first principles. This was not out of contempt for induction as such—indeed, the whole ‘induction versus deduction’ debate is a purely modern bugaboo—but a logical result of Aristotle’s conception of the essence of scientific knowledge, which is explanatory rather than descriptive. Even today, the development of physical theory relies heavily on mathematical deductions from first principles; the difference is that our first principles are inferred from experimental induction, which we also use to confirm or refute our theory’s logical consequences.
The medieval Scholastics, from St. Albertus Magnus onward, understood that physical necessity is not absolute, but conditional. We must make some underlying metaphysical or physical hypothesis or supposition (suppositio, conditio) in order for a physical effect to occur necessarily. To speak in modern terms, the laws of physics are not logical or mathematical tautologies, so certain conditions must hold in order for them to strictly determine outcomes. In other words, the natural order could be other than it actually is, so it is not absolutely necessary. When we say something happens out of physical necessity (determinism), this is really a suppositional necessity.
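To express this in modern modal notation, with S a supposition and E an effect: suppositional necessity asserts

    □ (S → E),

that the effect cannot fail to follow given the supposition, without asserting □E, that the effect obtains of absolute necessity. The necessity attaches to the consequence, while the antecedent conditions themselves remain contingent; the natural order as a whole could have been otherwise.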
Albertus Magnus was the first Scholastic to draw this conclusion out of Aristotelian physics, and he used this principle to explain how there could be exceptions to rules in nature. Since there is no absolute necessity in physical causation, we cannot absolutely generalize, “All crows are black.” This would constitute David Hume’s causal ‘problem of induction,’ often discussed even today. Yet for Albertus Magnus, no such problem existed, since he did not expect absolute necessity in nature, only suppositional necessity. He understood perfectly that crows could occasionally be other colors. Rather, the physical thesis, “All crows are black,” should be understood as carrying the supposition that certain characteristics of the species would be propagated in an act of generation. He then tried to explain in what circumstances an exception would occur, that is, when the supposition would not hold. [W.A. Wallace, “Albertus Magnus on Suppositional Necessity in the Natural Sciences,” in Albertus Magnus and the Sciences: Commemorative Essays 1980, v. 49, pp. 103-128.] We see some semblance of this idea already in Aristotle, who said that what nature intends can sometimes be frustrated by an opposing circumstance. Scholastic Aristotelianism did not face the causal ‘problem of induction’ created by the ideological strong determinism of modern science, because the Scholastics correctly perceived that physical necessity is not absolute.
Since physical necessity is suppositional in nature, an appropriate form of argument for physical theorizing is the argument ex suppositione, first articulated by St. Thomas Aquinas. Such an argument is of the form: “Given the supposition P, then Q is impossible (since Q directly contradicts P).” For example, “If I am standing, then it is impossible for me to be sitting.” This does not mean it is absolutely impossible for me to sit, only suppositionally so. Once it is observed that I am standing, no further investigation needs to be made as to whether I am sitting, as it is known for certain that I am not (since that would contradict the observed fact of me standing). The argument ex suppositione may also be given positively: “Given the supposition P, then Q is necessarily true (since P implies Q),” where ‘necessarily’ refers to suppositional necessity, not absolute necessity. As an example, “Given that I am speaking, I must be alive,” for it would be impossible for me to speak if I were dead. There may be non-living things that speak, but as long as it is impossible for me to speak if not alive, then the observed fact of my speaking proves that I am alive, without need for further investigation.
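The two forms may be set out schematically:

    P; P → ¬Q; ∴ ¬Q    (negative form: I am standing; standing excludes sitting; I am not sitting)
    P; P → Q;  ∴ Q     (positive form: I am speaking; speaking requires life; I am alive)

In each case the conditional premise carries the suppositional necessity, and the conclusion is certain only on the observed supposition P.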
As ex suppositione reasoning was refined in the late sixteenth century for use in natural philosophy, the contingent proposition P was an observed phenomenon or appearance (as in the examples above), while Q was the necessary condition to the causation of P. As a simple example, “Given that a flower grows here, there must have once been a seed in the soil.” This is a valid inference if the original presence of a seed is a necessary condition for the growth of a flower. If so, observing the flower is proof that the seed was once there, even if the seed was never observed.
Arguing ex suppositione could lead to certain knowledge if Q were the only possible way of obtaining P. In that case, the argument ex suppositione is expressible in the form of modus ponendo ponens: “P. If P, then Q. Therefore Q.” We can achieve this certainty only if we eliminate all other possible causes, but in practice this is rarely possible through unaided speculative reasoning. Notwithstanding this problem, ex suppositione argumentation was considered a valid way to obtain certain scientific knowledge in the Aristotelian sense.
The Scholastic argumentum ex suppositione was similar in logical form to modern ex hypothesi reasoning (indeed, suppositio translates Aristotle’s hypothesis), but the modern version reverses the content of the terms. In the late Scholastic version, P is an observed phenomenon and Q is its necessary cause, but in the modern version, P is a hypothetical cause and Q is an observed phenomenon that could be explained by P. In most descriptions of the modern scientific method, it is claimed that we confirm or verify a hypothesis by making some observation consistent with our proposed causal explanation. Such an argument can give at best probable knowledge, for as a strict deduction it gives us the fallacy of affirming the consequent: “If P, then Q. Q. Therefore P.” For example, “If the earth rotates daily, then we should perceive the sun rising and setting each day. We do see the sun rise and set each day; therefore the earth rotates daily.” This argument is fallacious insofar as it falsely supposes that no other causes could account for the same phenomenon. It is subject, then, to the same limitations as classical ex suppositione reasoning.
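The valid and fallacious forms are worth displaying side by side:

    P → Q; P; ∴ Q    (modus ponens: valid)
    P → Q; Q; ∴ P    (affirming the consequent: invalid)

The second fails because Q may follow from causes other than P: the geocentrist may argue with equal force, “If the sun circles a stationary earth, we should see it rise and set each day; we do; therefore the sun circles the earth.” The same observation confirms both hypotheses, and so demonstrates neither.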
The mathematician and astronomer Christopher Clavius (d. 1612), head of the Jesuit Collegio Romano, invoked a special type of ex suppositione argument called ex sufficienti partium enumeratione when treating the problem of the physical reality of the Ptolemaic system using the vast amount of data gathered in the sixteenth century. He argued that there were only three ways to account for astronomical observations: (1) Ptolemaic eccentrics and epicycles; (2) Aristotelian homocentrics; and (3) a fluid heaven (where the bodies move freely like fishes in the sea). By showing that the latter two possibilities were false, it followed that the Ptolemaic system not only saved the appearances, but its mathematical constructs described physical reality. This kind of ex suppositione reasoning purports to divide the range of logically possible explanations into parts, and then to eliminate all but one explanation. In this way, a theory that best fits observations may also be shown to be suppositionally necessary. [See Thomas A.S. Haddad, “Christoph Clavius, S.J. on the reality of Ptolemaic cosmology: Ex suppositione reasoning and the problem of (dis)continuity of early modern natural philosophy,” Organon (2009) 41:195-204.]
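Formally, the argument from sufficient enumeration of parts is a disjunctive syllogism:

    P₁ ∨ P₂ ∨ P₃; ¬P₂; ¬P₃; ∴ P₁

Its force depends entirely on the first premise, the claim that the enumeration of possibilities is exhaustive; if a genuine alternative has been excluded from the enumeration beforehand, the elimination proves nothing.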
However, even ex suppositione reasoning as elaborate as Clavius’ was subject to theoretical assumptions. Clavius acknowledged that the Copernican system saved the appearances just as well as the Ptolemaic system, yet he did not count it as a theoretical possibility, since it blatantly contradicted theoretical physics of the time. He noted that philosophers held that a simple body could have only one motion, while a Copernican earth would need to have three motions. Further, the idea that the sun rather than the earth was at the center of the firmament was contradicted by the common consent of philosophers and astronomers, as well as Sacred Scripture. Even the most clever ex suppositione reasoning is only as strong as its theoretical assumptions, which define the range of admissible possibilities.
Early modern scientists were distinct from their predecessors in that they did not simply observe nature as they found it, but designed experiments that would isolate the phenomena, or aspects of phenomena, they wished to measure. The controlled experiment, which contrasts the behavior of an object under the change of a single variable, keeping all other properties fixed in the comparison or control case, was a pivotal development that enabled the expansion of physical science. For centuries, man was limited to observing whatever events nature happened to produce, but now he could create his own events (experiments) that would give new insights into physical operations. Medieval technological developments in optics and metallurgy made possible new observations, and as physical science became better understood, better instruments could be developed, allowing new classes of observations. It is not that modern scientists were more astute or intelligent than their predecessors, but they now had many more observations to work with, as well as a means of analyzing observations to yield general physical principles.
In order for these new observations to bear fruit, however, it was necessary to set aside the calcified Aristotelian concept of physical science. In the Latin West, the writings of Aristotle were treated as demonstrations of physical truths, rather than as a metaphysical analysis of putative physical truths that were subject to empirical verification. This misconstruction is probably attributable to the Arab philosopher Averroes (Ibn Rushd), through whose commentaries the Aristotelian corpus was rediscovered by Western Europeans. Under this mentality, any new physical observations had to be interpreted in a manner consistent with the demonstrated truths of Aristotelian physics. Although Aristotle’s assertions were not all accepted uncritically, late medieval natural philosophy retained the conviction that scientific truth must be obtained through abstract demonstration. It was this attitude of natural philosophers, more so than any theological objections, that generated the most resistance against Galileo’s mechanical and astronomical theories.
Galileo was not the first astronomer to rely on meticulous, systematic observations to derive mathematical models. In this he was preceded by Copernicus, Brahe, Kepler, and many others, including the Jesuit astronomers at the Collegio Romano who verified his observations. Where Galileo departed from his predecessors was in his insistence that mathematical models contradicting Aristotelian physics were not merely calculating tools, but gave demonstrably true knowledge of physical reality.
Galileo’s scientific epistemology was not one of pure empiricism, though he certainly placed much more emphasis on sensory observations than did the Scholastic natural philosophers of his day. His explanations of observed phenomena made extensive use of thought experiments and a priori mathematical reasoning. Like Kepler, he seemed to think a theory was preferable on account of its geometrical or mathematical elegance. He believed in his physico-mathematical explanations to a fault, not giving due weight to the possibility that other mathematical models could account for the same phenomena, and that quantitative description alone does not establish physical causation. This philosophical imprecision would have fateful consequences in later Western science, especially in the twentieth century, where the line between mathematical modeling and physical theorizing is frequently erased.
Galileo, like his contemporaries, believed that scientific knowledge should have the force of demonstration, and that such demonstration could be had through ex suppositione reasoning. Unlike Clavius, he dared to subject Aristotelian physics itself to a test against astronomical observation, and found that it failed to save the appearances. He hoped that by eliminating the other possibilities in a formal ex suppositione argument, he would convince others that Copernicanism was not a mere mathematical device.
In an important innovation for scientific argument, Galileo used mathematical propositions and reversed the order of premises in the ex suppositione argument, giving us what we would now call ex hypothesi reasoning. P is a mathematical model that would account for some quantitative phenomenon Q. The measurement of Q proves P according to the standards of the time, which accepted ex suppositione reasoning as demonstrative.
Yet Galileo’s refusal to accept any premises of natural philosophy undermined the demonstrative power of his own ex suppositione (or rather, ex hypothesi) arguments, since such reasoning requires some a priori suppositions about the range of physical possibilities. Absent any such supposition, the argument is logically fallacious, as we have no way of knowing that there are no other theories that might explain the phenomena just as well.
Most scientists in the Age of Reason, and indeed all of the great theoreticians of the modern era, have, like Galileo, relied on a combination of empirical research, intuitive guesses, thought experiments, appeals to mathematical elegance, and parsimony of physical principles in the development of scientific theories. We will examine these methods more closely later, but first we should scrutinize the position of strict empiricism, as it was ably articulated by Sir Francis Bacon.
The idea that empiricism is the only epistemology fit for science can be found in its purest form in the thought of Sir Francis Bacon. By examining this extreme position, which is not held by any modern scientist, since it would make the development of scientific theory impracticable, we can perceive more clearly the epistemological limitations of empiricism.
Bacon regarded the Scholastic learning of his day as contentious quibbling over trifles, aimed more at winning arguments than learning anything of substance. He thought intellectual talent was wasted in such fruitless endeavor. The proper way to expand knowledge was to observe the world.
Bacon formulated what would become the popular Enlightenment idea of progress. In this view, human history consistently progresses onward and upward, in a virtuous cycle where new knowledge yields technical discovery, which in turn enables the acquisition of more knowledge. Bacon was undoubtedly correct in this assessment as far as technology is concerned, but in moral matters it is far more difficult to discern progress, whether we are speaking of politics, religion, or private morality. These other topics are beyond the scope of this essay, so it suffices to note that Bacon’s assertion is true with regard to the relationship between physical science and technology.
Bacon’s intellectual endeavor, however, was much wider ranging than that of physical science. His New Organon was intended to replace the entire Aristotelian corpus. Instead of using logical argumentation, he would aim for a different kind of knowledge, one that would yield more practical benefits for humanity. In Bacon’s scheme, natural philosophy would be an elevated field of knowledge, encompassing everything that is possible, while history would be a mere subdomain of philosophy, consisting of everything that actually happened. (Broadest of all is poesy: everything that is conceivable.) Baconianism is not merely a scientific epistemology, but is arguably the earliest attempt at scientism, i.e., making everything subordinate to natural science. Empiricism and scientism were practically wedded at birth. Only with time could a more sober, moderate empiricism develop, though even today we find scientists making overly grandiose claims about the applicability of empiricism.
It should be understood that Baconian empiricism is a properly philosophical (i.e., epistemological and metaphysical) endeavor, not scientific. It was inspired in part by Epicurean philosophy, and Bacon even uses the Epicurean term ‘idol’ to refer to a source of misunderstanding. Bacon famously identified several groups of idols.
First, there are the ‘idols of the tribe,’ which are common to all humans. These include (1) the senses, which can be deceived, so Bacon prescribes using instruments and methods to correct for this, though this does not eliminate our ultimate dependence on the senses for knowledge. Also, humans tend to (2) discern more order than is actually there. We find similitude where there is only singularity, regularity where there is only randomness. Here, what Bacon describes as a defect is the very nature of our intellect, which is to discern order. This idea that the realities of our understanding are somehow less real than the objective reality ‘out there’ is question-begging. If we perceive an intelligible similarity, is not that similarity thereby real on some level? Bacon, before Locke and long before Kant, would seem to locate reality in the ineffable thing-in-itself, as contrasted with our mental constructions of reality. Yet there can be no knowledge without mind, so any epistemology that severs the mind from reality makes knowledge of truth impossible. Again, even the Baconian empiricist must make use of the flawed mind, just as he must depend on flawed senses.
Another idol of the tribe is (3) wishful thinking, which is the tendency to believe what we would prefer to be true. However, Baconians are susceptible to an opposing fallacy: believing that because something is good or preferred, it is less likely, or ‘too good to be true.’ Lastly, Bacon admonishes against (4) premature judgments. Instead, we should gradually accumulate evidence before pronouncing on a matter. This would favor the method of induction rather than cavalier confidence in a priori deduction.
Next, Bacon identifies the ‘idols of the cave,’ which vary with one’s particular formation and culture. These include allegiance to a discipline or theory, or esteem for certain authorities, such as Aristotle. These idols persuade us to interpret phenomena in terms of our own narrow training or discipline. We can see such behavior not only among the Aristotelian natural philosophers, but also in the Scholastic-Humanist debate during the Renaissance, which was in many respects a dispute about disciplinary domains. However, this critique may apply even to modern scientists, who tend to see the entire world in terms of their own particular discipline. This could be an even more acute problem today, due to specialization. As an innocuous example, biologists tend to see all cosmological development under the metaphor of evolution, though very diverse processes are at work. Computer scientists may see everything as signal processing, while physicists may see everything as particle or field mechanics.
Bacon also identifies ‘idols of the marketplace,’ which arise from men associating with each other. These include the language, technical jargon, and discourse of certain disciplines. Another supposed idol is names of things that do not exist, such as the crystalline spheres. However, it is far from evident that naming is truly the problem, rather than the theory behind the name.
Bacon also finds fault with misleading names for things that do exist, such as abstract qualities and value terms (e.g., ‘moist,’ ‘useful’). He evidently considers it misleading to name things phenomenologically, but this is a valid criticism only if it is possible to know things as they really are. Baconian skepticism, like many forms of modern skepticism, is at once distrustful and naïve. On the one hand, it attempts to exhibit intellectual superiority by distrusting all our faculties and even our categories of thought, yet on the other hand it holds that we can obtain certain knowledge of reality through no less fallible faculties and linguistic objects. If we are to take Baconian skepticism seriously, we should at least be candid enough to admit that all human science is tentative and doubtful, with an unprovable correspondence to truth. Yet in fact we find that so-called skeptics are extraordinarily boastful about the supposed truths of science, though it is merely human wisdom.
Lastly, Bacon identifies the ‘idols of the theatre,’ by way of disparaging those systems of philosophy he disfavors. There are three kinds of philosophy that come under his criticism. First, there is ‘sophistical philosophy,’ which is based on only a few casual observations, while the rest is constructed from abstract argument and speculation. This may have been a valid criticism of Scholastic natural philosophy, but it is hardly applicable to metaphysics, which necessarily deals with principles behind observed reality. ‘Sophistical’ really means playing with words, but the abstract philosophy Bacon resented often had real intelligible concepts behind its terms, so it is wrong to dismiss it as mere wordplay.
Bacon also disdains what he calls ‘empirical philosophy,’ as distinct from empiricism. Here a philosophical system is built from one key insight or observation that is used to explain everything. We may see this in the pre-Socratic attempts at natural cosmology, while Bacon offered the example of Gilbert using magnetism to explain everything.
Lastly, Bacon condemns ‘superstitious philosophy,’ by which he means the mixing of theology and philosophy. Here he includes Pythagoras and Plato, as well as those who invoked Scripture against heliocentrism. It is not clear if Bacon is saying that all theology is intrinsically superstitious, though the insinuation is a precursor of Enlightenment attitudes.
Given Bacon’s negative view of the bulk of Western intellectual history, which he appears to dismiss on the basis of a priori aesthetic preferences, it is remarkable that he does not collapse into total skepticism. On the contrary, he advances a positive program for the development of human knowledge in a way that presumably avoids the various ‘idols’ he has identified. The central tool of his epistemology is the method of induction.
Baconian induction differs not only from syllogistic, deductive logic, but also from the classic induction of logicians. Classic induction goes from sense and particulars up to the most general propositions, and then works backwards to intermediate propositions. For example, from a few observations, I infer that ‘all stars are shiny,’ and then deduce that all southern stars are shiny. Bacon characterizes logicians using classic induction as invoking very few observations in order to arrive at a very broad generalization.
However, this is not classical induction as defined in the Prior Analytics. Aristotle emphasized that it is necessary to enumerate all particular cases in order to obtain an inductive proof. Suppose that some property A belongs to every particular object in a set C. Further suppose that another property B inheres in every particular object in C, and that B has no wider extension than C. That is, there are no objects outside of C that exemplify B. Then it is logically necessary that A belongs to B, meaning that every object that exemplifies B must also exemplify A. The logical force of this argument cannot be denied, but in practice it is usually difficult or impossible to enumerate all particular objects of a type.
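In quantifier notation (anachronistic for Aristotle, but a faithful rendering of the schema just described):

$$\forall x\,(Cx \rightarrow Ax),\quad \forall x\,(Cx \rightarrow Bx),\quad \forall x\,(Bx \rightarrow Cx)\ \vdash\ \forall x\,(Bx \rightarrow Ax)$$

The conclusion follows by simple transitivity from the first and third premises; the complete enumeration of particulars is what entitles us to assert those universal premises in the first place.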
Bacon recognizes this difficulty, realizing that any generalization (A is B) produced by induction is vulnerable to falsification if there is even one particular observation rendering the intermediate premise (B is C) false. To minimize this danger, Bacon suggests building up from specific axioms to more general ones, rather than deducing specific axioms from the more general. In this way, we should say only that objects on earth can fall at constant acceleration, rather than assume this is universal. Similarly, Aristotelian physics assumed everything fell toward the center of the earth, because that is what was observed locally. To then use this universal principle to deduce intermediate principles propagates error.
Bacon’s scrupulousness places impossibly severe limits on scientific generalizing. We would have to examine absolutely every single type of star in the heavens before we could say that all stars are shiny. Theoretical claims are much more modest in Baconian science, but they tend to be more stable. Newton, by contrast, would overreach, claiming a universality for his principles that would not hold on galactic scales. The lesson of Newtonianism is that even centuries of verification on a broad scale are no guarantee of universal applicability.
The problem with Baconian induction is that this method could never lead to any general propositions, much less to universal statements. When is it logically valid to leap from finitely many observed particulars to abstract generalizations? How many observations shall suffice? A thousand? A million? Or every single possible instance? It seems to be a matter of subjective judgment how much data we should amass before we dare to make a generalization. Perhaps we could do a probabilistic analysis, yet statistical tests only eliminate the likelihood that our results are the product of chance. They do not tell us how probable it is that our theory is a correct physical explanation of reality (as opposed to a mathematical description). In order to assess that probability, we would need some assumptions about what is the range of a priori physical possibility, just as Clavius limited discussion to those astronomical theories that were consistent with the accepted physical principles of his time.
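The point may be stated in the language of modern probability theory. By Bayes’ theorem, the probability of a hypothesis H given data D is

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{\sum_i P(D \mid H_i)\,P(H_i)},$$

where the sum in the denominator ranges over all rival hypotheses. The prior P(H) and the enumeration of rivals are exactly the assumptions about a priori physical possibility that no amount of data can supply by itself.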
It is far from evident that finitely many observations can give us any insight into universal natural laws. Our past failures should give us pause before assuming that our current ‘universal’ theories (e.g., relativity and quantum mechanics) can admit absolutely no exception, and are unlimited in scope. On the assumption that there are finitely many objects in the universe, we may perhaps obtain a strong likelihood of universality if we can show that our data is a large and representative sampling of the whole. The idea of random sampling presumes a degree of homogeneity in the natural order, so it is a weaker version of the assumption that there must be a universal cosmic order. Instead of assuming outright that there are universal laws, we assume that natural objects are more or less evenly distributed on a cosmic scale, so that consistency across many measurements of distant objects is indicative of a law that applies to all objects of that kind.
Harvey famously observed that Bacon wrote of natural philosophy as a legislator rather than a practitioner, for it is not realistic to practice science according to the Baconian ideal. In fact, Kepler, Galileo, and Harvey used a more improvisational method. Only Brahe, with his encyclopedic, detailed observations, seems to have been a true Baconian. Even Darwin, who claimed to be a Baconian, is best known for his theorizing about a general principle of natural selection, rather than for his exhaustive catalogues of observations as a naturalist.
Science in fact often proceeds by intuitive and imaginative leaps. Take Kepler’s heartfelt intuition about harmony in celestial bodies, or Einstein’s conviction that the laws of electrodynamics, and therefore the speed of light, must be the same in every inertial reference frame. If we are not to be trapped forever in our current theoretical frameworks, we must be willing to take leaps of fancy. If we limit ourselves to analytic thinking, we can only verify or falsify existing theory, but never arrive at anything new. We must be willing to hazard a guess, and then see if our guess is verifiable. Scientists have a more sober-sounding term for this: ‘hypothesis-driven science.’ The only thing that makes it superior to flights of fancy is that a scientific guess (hypothesis) must be verifiable by observation, at least in principle, if not with the actual state of our measurement instruments.
In the early nineteenth century, most English scientists took Baconian ideology very seriously, and they often took care to present their findings as formal inductive proofs. Anyone who has read these ponderous arguments can see that the argument structure is no less artificial than that of a Scholastic pedant who insists on framing all deductions in syllogistic form. One can hardly escape the impression that even the English Baconians actually arrived at their findings by more intuitive, improvisational methods, and then, in Procrustean fashion, force-fitted everything into an inductive proof.
Baconian proofs were certainly not valid in a strict logical sense, since they did not enumerate all particular cases. Nor did they have any probabilistic validity, as analysis based on random sampling had not yet been developed. It was a matter of subjective judgment to determine how much data sufficed to establish a universal principle. Yet there was a further problem with Baconianism. Even granting that it could give us universal descriptive principles with a high degree of probability, how could it possibly establish causation, or yield general causative principles?
As every modern science student knows, no degree of correlation, no matter how strong, suffices of itself to establish causation. The latter requires some theoretical principles about how the world works. We must deal with universals qua universals if we are to give an explanatory rather than a merely descriptive account of the natural world. This is in fact how every major theoretical development in science has occurred. Galileo used mathematical reasoning and deductive thought-experiment to arrive at his principle of uniform acceleration of falling objects. Harvey likewise combined mathematics and deductive logic to determine a priori that blood must circulate; empirical evidence establishing the method of circulation came only afterward.
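The inadequacy of bare correlation is easily exhibited. In the short simulation below (a minimal sketch of my own; the function names are invented for illustration), two random walks are generated entirely independently, so that neither can possibly cause the other, yet their sample correlation is routinely far from zero:

```python
import random

def random_walk(steps, seed):
    """Generate an independent random walk of the given length."""
    rng = random.Random(seed)
    walk, position = [], 0.0
    for _ in range(steps):
        position += rng.choice([-1.0, 1.0])
        walk.append(position)
    return walk

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

a = random_walk(1000, seed=1)
b = random_walk(1000, seed=2)
print(correlation(a, b))  # frequently far from zero, despite no causal link
```

No principle connects the two series; any correlation is an artifact of their shared trending character. Only a causal theory could tell us whether such a statistic means anything.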
Scientists throughout history have relied on hints and guesses (hypotheses) that were later corroborated or falsified. Bacon underestimated the role of hypothesis and overestimated the value of meticulous observation. While real scientists appreciated both these aspects of scientific method, science education was seriously hampered by Baconian ideas into the early twentieth century. Students were given tedious measurement tasks (e.g., to find the density of a block of wood, they were asked first to measure each dimension three times and take the average of each), so that even apt pupils came to identify science with boredom and drudgery, rather than awe and insight. Scientists themselves do little to dispel this misconception when they posture as objective, dispassionate, analytic creatures, free from poesy, which even Bacon recognized as the highest science.
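As for the schoolroom exercise just mentioned, here is the whole of it in a few lines (a sketch with invented figures):

```python
# Each dimension of the wood block is measured three times (in cm)
# and averaged, per the classroom prescription.
length = sum([15.2, 15.1, 15.3]) / 3
width  = sum([7.6, 7.5, 7.7]) / 3
height = sum([3.1, 3.0, 3.2]) / 3
mass   = 243.0                               # grams, weighed once on a balance

density = mass / (length * width * height)   # g/cm^3
print(round(density, 3))                     # 0.679
```

The averaging guards against random measurement error, but it is hard to find awe or insight in it.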
Natural philosophy was not commonly called ‘science’ in English until the nineteenth century. With the change in terminology there also came a change in the conception of the social role of the scientist. Instead of pursuing lofty wisdom expressed in elaborate theoretical systems, the English scientist doggedly sought less glamorous knowledge, based on tedious and dispassionate observations of the natural world. The nineteenth century was the high water mark of English Baconianism, and the literature of the period is reflective of this ‘just the facts’ attitude, as we find the dry analysis, studiously avoiding any attribution of deeper meaning, that is so characteristic of the period.
This Baconian emphasis on the phenomenal over the noumenal naturally led to a materialist ethos, even among those who were not properly philosophical materialists. When it is considered unscientific to speculate beyond the observed sensory data, we are left with little choice but to talk about everything in materialist terms. Whereas natural philosophers from Thales to Descartes were genuinely concerned not with the mere appearances of nature, but with what it all means, modern scientists increasingly shrugged off any semblance of philosophy, in order not to succumb to the ‘idol of the theatre.’ With this supposed philosophical neutrality, one could let the facts speak for themselves, without injecting personal or systemic bias.
Scientists of this period exhibited conscious affectations intended to demonstrate their professional impartiality. Scholarly literature became filled with overly dry language, even where ordinary language would have sufficed just as well. Anyone who has suffered through the palpable pretentiousness of nineteenth-century English monographs understands this sentiment all too well. The contrast with the more literarily vibrant Continent is remarkable. Another affectation, besides choice of language, was a deliberate tendency to speak of human beings and animals as though they were mere things, subjecting everything to the same analysis as inanimate objects. This would sometimes be taken to the point of being deliberately insensitive to other people’s sensibilities. Some even took noticeable delight in ‘toppling idols,’ that is, showing disregard for cultural norms in the name of an unfailing devotion to scientific impartiality.
This deliberately inhuman conception of the scientist was in part motivated by an admiration of the great technological developments of the last half of the nineteenth century, which ushered in the age of machines. The scientist, like a machine, should be able to act upon and organize nature dispassionately, without passing any subjective judgment. It was in the Victorian era that this ideal received its name: ‘objectivity.’ The notion that scientists can or ought to be ‘objective’ persists in some form even to this day, so we should take some time to address this pretension.
Before proceeding, I should note that this nineteenth-century caricature of the scientist is rarely more than partially emulated by actual scientists today. It is the pedant who tends to overcompensate, and so we are more likely to see imitations of the Victorian ideal in the least rigorous of the sciences, such as the social sciences.
In order to prove that they are real scientists, they will introduce needlessly dry jargon, and talk about human beings as though they were not humans themselves. The worst offenders are the ‘science enthusiasts,’ mere amateurs who pretend to ‘debunk’ ideologies they abhor or ‘scientifically prove’ their favored ideology. Those who see science only from the outside are more likely to think that arrogance is proof of intelligence, or that behaving like a soulless jerk is proof of dispassionate objectivity and love of truth. This socially dysfunctional behavior is especially common among Anglophones, where the ideal of the scientist as a dry, unfeeling machine (at least in appearance) is more widely espoused. It is no accident that the two most beloved characters on Star Trek are unemotional non-humans. This pathological separation of ‘logic’ and ‘emotion’ is actually held up as a psychological ideal. I will not address the mental health issues associated with the ideal of objectivity, but instead will examine only whether it is philosophically feasible.
The most obvious problem is that human beings, as rational persons, cannot be mere ‘objects’ when acting rationally. Indeed, intellectual and volitional activity necessarily requires us to act as subjects. On the other hand, it might be said that our passive intellect only receives whatever perception is impressed upon it, and it is the duty of the scientist not to add anything to that perception volitionally. This is to think of the human mind as a camera, simply receiving raw sensory input without discursive interpretation.
However, even sensory perception ceases to be passive the instant it is contemplated by a rational human subject. Modern experiments with ambiguous images or optical illusions have shown that our sensory observations are generally theory-laden; that is, they presume a determinate physical interpretation of the sensory data. We instantly make judgments, in the very act of contemplating a perceptible, about which splotches of color correspond to distinct objects, as well as judgments of perspective, distance and orientation. We cannot simply ‘see’ anything with our intellects unless we just mindlessly describe color patterns without pretending to identify objects. All observations are necessarily interpreted, and these interpretations involve assumptions about how the world around us works.
Furthermore, the scientist would be nothing but a cataloguer of facts if he did nothing with his perceptible after making the observation. Scientists are expected to do analysis, yet this mental analysis necessarily involves linguistic concepts, which in turn imply metaphysical judgments if they are presumed to correspond to the actual world. If it is said that the scientist’s analysis is purely mathematical, I must respond that that is impossible if he is measuring or counting physical objects. To impose mathematical analysis on the physical world, we must select physical objects or entities to be categorized for counting or measurement. Our mathematical expressions, if they are to have physical application, must be interpreted as representing physical entities. Our identification of such entities, be they substances or properties, requires a degree of conceptualization, which in turn involves implicit metaphysical theses. Data interpretation, therefore, is invariably theory-laden, even if it is quantitative rather than qualitative.
It may be contended that I am arguing against a man of straw, for scientists cannot really believe that a rational subject may become literally objective. Surely, the term ‘objective’ in ordinary scientific usage is intended only as a synonym for ‘impartial’ or ‘dispassionate.’ Properly speaking, we act subjectively when we perceive and interpret data, but we do so in a way such that we do not show preference or partiality to one outcome over another, and we do not allow our emotions to interfere with our rational judgment.
Let us treat these two meanings separately. First, for a scientist to be ‘impartial’ in his data interpretation would entail that he has no preference for one outcome of the analysis over another. Yet can we really say that a scientist does not prefer that his hypothesis should be confirmed? Anyone who has been socially involved with scientists knows how frequently they become invested in a pet hypothesis, such that if it were falsified, much of their work would be undone, or they would be less acclaimed. Yet this partiality does not necessarily prevent them from being good scientists. They will graciously withdraw their hypothesis if it has been clearly falsified, but if the evidence is ambiguous, they will interpret it in the way most favorable to their preferred outcome.
The ideal of the dispassionate or unemotional scientist is also far removed from reality. I do not refer simply to the obvious fact that all human beings have emotions, but assert further that scientists are emotional even when acting as scientists. We have already mentioned that scientists routinely show partiality in their interpretations of data, and we need only add that this often becomes an emotional investment. Of course, they believe they are just being passionate about the truth, but everyone thinks the truth is on his side (if only the irascible science enthusiast could understand this). Yet scientists acting emotionally is not some unfortunate aberration, but an unavoidable mode of human consciousness.
Now, I am here using the term ‘emotion’ in a rather broad sense, to encompass all non-rational appetites in the human psyche, not just those stronger impulses that the Stoics called ‘emotions,’ which roughly correspond to the neurochemically induced feelings of sadness, fear, hedonic desire and pleasure. It is possible and indeed commendable for the scientist to set aside strong animal emotion when conducting rational inquiry. However, he does not thereby become truly dispassionate, as there remain other non-rational appetites or preferences that may direct his thoughts. Such appetites help determine which questions we will ask, and how we will investigate them, and how we prefer to interpret the data. The role of preference in the interpretation of data is unavoidable, except in cases where only one interpretation is possible.
Even in a case where there is only one logically possible interpretation of the data, the scientist still enjoys the use of discretionary judgment. He could choose not to follow the path of logic, and simply ignore that chain of reasoning and not see the conclusion. The only thing he cannot do is think a contradiction. Non-rational preference is an essential component of all human investigations.
The presence of non-rational preference in scientific activity does not invalidate scientific conclusions. In fact, it is a practically necessary instrument in the conduct of human investigations, at once enabling us to explore beyond merely logical consequences and serving as a seal of the freedom of our intellect. It is precisely because we are not compelled by the data, or anything else, to think one way rather than another that our judgments can have the force of considered and independent thought. It is a good thing that we are guided by non-rational preference, or else we could hardly arrive at any knowledge beyond analytic judgments.
Why should we go to the bother of proving something as obvious as the non-objectivity of scientists? Surely, those with even slight philosophical literacy will not stumble into the error of believing objectivity to be attainable. Still, even those who know better sometimes seem to forget what they know, and hold up objectivity as an ideal. It is my contention that objectivity is not only unattainable, but undesirable, as it can be achieved only by renouncing our free will, thereby undermining the legitimacy of our findings.
Scientific method is valued only because it regularly leads to true results. If it did not, it would not be widely used and we would choose some other method. Some may argue that it leads only to useful results, as if science were simply concerned with engineering and technology, but anyone who has interacted with actual scientists knows that they are mostly concerned with knowledge, even when the practical value of that knowledge is not evident, or in some cases non-existent. Modern philosophers may agonize over the existence of truth, but scientists assume it as a matter of fact, and it is the primary motive of their enterprise, whether they use the term ‘truth’ or ‘knowledge.’
Since the scientific method is valued because of its ability to lead to truth or knowledge, it would invert means and ends to deride any argument that attempts to arrive at truth without the scientific method. The criticism that something is ‘not scientific’ has no weight unless it is accompanied by evidence or argument that the particular non-scientific claim is invalid or erroneous. Restricting validity to the scientific method is logically untenable, as there are many other possible roads to knowledge, such as direct intuition, sensation, a priori reasoning, and first-hand testimony. These other ways are not infallible, but neither is the scientific method. The scientific method is a superior form of inquiry only in certain circumstances, which we will attempt to define.
The real success of experimental science was grounded not only in the method itself, but in ingenious applications of the method by scientists to learn the mechanical laws of nature, and to exploit those same laws to generate technologies that allow more precise observations. Much of technological progress does not rely on experimental science, though it requires basic knowledge of the laws of mechanics and electrodynamics. Much of it is based on simple trial and error and intuition, often without detailed understanding of the physical principles being exploited. Some of the greatest inventors and engineers have had limited education in the physical sciences.
Theoretical physics does not overtly rely on the scientific method, though it submits its hypotheses to be confirmed by observation and experiment. It uses abstract mathematical ratiocination extensively, and scientists are as confident of mathematical derivations as they are of experimental results, showing that, even within science, the so-called scientific method is not the only path to knowledge. Furthermore, studies of the actual practice of experimentalists show that the scientific method is not applied in a rigorous three-step process, but all three processes of observation, hypothesizing, and testing can occur simultaneously or in mixed order. The scientific method is more of a conceptual model than a practical rule of action.
Other refinements of the scientific method include the controlled test and the double-blind test. The controlled test runs two parallel experiments, with and without a certain factor hypothesized to be the cause of a phenomenon. If the phenomenon occurs in the ‘control’ experiment, lacking the factor supposed to cause the phenomenon, that would falsify the hypothesis. If the phenomenon occurs only with the factor, but not in the control experiment, that would validate the claim, but not prove it. This method gives rise to the association of ‘falsifiability’ with the scientific method, as we shall discuss later.
A ‘blind test’ is a refinement of a controlled test, ensuring the impartiality of the experimenter by conducting the experiment in such a way that the persons recording the results of the experiment and analyzing the data do not know which data set corresponds to the control condition or any other condition. A ‘double-blind’ test is where human subjects are used in an experiment, and the subjects are not told whether they are being subjected to the control condition or some other condition. Both the experimenters and the subjects are blind to the condition, hence the term ‘double-blind.’ In the case of clinical trials of drugs, subjects may be given white pills, only some of which contain the drug, the rest being inert placebos. This way the subjects have no way of knowing whether or not they received the drug. Blinding the subjects to the condition eliminates any change in their behavior or health that might result from knowledge of the condition.
Additionally, a trial is said to be ‘randomized’ when the subjects are assigned to the various conditions or control groups at random, eliminating selection bias on the part of the experimenter. This is especially important in clinical studies where there may be genetic effects on a subject’s reaction to a drug.
More generally, we note that the scientific method is necessarily grounded in observations of the natural world. These are generally held to be sensory observations, so the scientific method restricts itself to the world of sensible objects. This methodological restriction is a self-imposed limitation on a field of study, so it may not be invoked as a condemnation of the validity of methods of inquiry that do not follow this restriction. To do so would be to transform the scientific method into a philosophical assertion about the theory of knowledge. As we have noted, a priori reasoning can be an effective road to knowledge, providing its own intrinsic form of validation (the rules of logic), rather than requiring validation through sensory observation. If a priori reasoning is applied to the natural world, and is contradicted by observation, that means either the reasoning was faulty or it was based on a false premise. If that premise was grounded in the observation of some fact, the reasoning was really a posteriori. Although a priori reasoning, rightly employed, can stand on its own, in practice men often reason badly, especially on subtle matters, so it is useful to refer to external facts (in the sensory or non-sensory worlds) as a check. As the medieval Scholastic philosophers said, contra factum argumentum non est, attesting that they did not think they could abolish facts with arguments, contrary to later stereotype.
The method of testing a hypothesis by controlled experiment has given rise to the association of the scientific method with inductive reasoning. By inductive reasoning, we mean the inference that a hypothesis is more likely to be true as it is validated in multiple instances across an increasing variety of circumstances. Inductive reasoning can never prove a hypothesis is true, but only that it is valid and consistent with all known observations. When the volume of observation is extremely large and spans a broad range of circumstances, the truth of the hypothesis is regarded as a practical certainty, as the possibility of a contradictory observation becomes vanishingly small. In fact, when a physical fact or law becomes widely accepted, the occasional data point that appears to contradict this law is dismissed as an ‘outlier’ or as statistical or systematic error, rather than allowing that the law can be violated in freakish instances. This practice shows that scientists believe in truth and that they gradually accumulate real knowledge of the practical world. They also have a confidence in the inviolability of physical laws that exceeds what can be proven by the inductive method.
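The practice of dismissing anomalies can be made quite mechanical. The sketch below (my own illustration, not any laboratory’s standard) flags as outliers any measurements lying more than three standard deviations from the sample mean:

```python
def flag_outliers(data, threshold=3.0):
    """Return the values lying more than `threshold` standard
    deviations from the sample mean -- candidates for dismissal."""
    n = len(data)
    mean = sum(data) / n
    std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
    return [x for x in data if abs(x - mean) > threshold * std]

# Twelve concordant measurements and one freakish reading:
readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03,
            9.97, 10.01, 9.99, 10.02, 9.98, 10.00, 13.7]
print(flag_outliers(readings))  # [13.7]
```

The rule presumes what was just said: that the law is inviolable, and the anomaly must therefore be an error. Nothing in the data itself licenses that presumption.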
In fact, the natural sciences do not rely on the inductive method alone, but use deductive reasoning extensively, especially in mathematical physics. Conviction in the necessity of logical consistency persuades most physicists that the laws of physics are inviolable in their applicable domains, though quantum mechanics has made clear that many of these laws apply only in the statistical limit of large ensembles of particles, making their violability infinitesimal on a cosmological scale, though not identically zero. Even if a physical law could be mathematically proven to be absolutely inviolable, lest all theoretical physics be thrown into incoherence, it must be remembered that these mathematical laws express the relation between physical quantities mediated by physical agents. It does not follow that a non-physical agent could not interact with the system, thereby creating phenomena that seem to defy nature, if the external agent is not considered. One possible agent that is not adequately accounted for by the laws of physics is the agency of humans and other animals, which move bodies contrary to the way they would behave as inanimate matter. The idea that animal behavior is reducible to the physics of inanimate objects is a gross assumption, and arguably contrary to reason. In any event, the gap between inanimate and animate matter has not been rigorously bridged, so it is arguing beyond the facts to assert that human behavior is reducible to the laws of mechanics and electrodynamics. (Attempts to reduce consciousness to a quantum effect are problematic for reasons of scale.) As a practical matter, consciousness is effectively outside the laws of physics, since we are ignorant of their connection, so we have no way of calculating what a conscious being will do. Thus animal activity is best treated as an external agent acting on inanimate matter (including that of its own body). Whatever the real case may be regarding consciousness, the point stands that proofs of the inviolability of the laws of physics do not preclude interventions by external agents for which these laws have not accounted.
© 2012 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org