1. Physics as Philosophy
2. The Basic Problem of Change
3. Natural Principles of Change
4. Abstract versus Natural Forms
5. Explanatory Factors in Physics
6. Chance or Randomness
9. Mover and Moved
10. The Infinite
11. Place and Space
12. The Void
That the study of nature is essential to philosophy should be evident from the fact that the first philosophers, starting with Thales of Miletus, were also the first physicists. Physics is concerned with understanding and explaining the world we observe, so it is eminently philosophical. For most of its history, theoretical physics was considered part of philosophy, and was called “natural philosophy” as recently as the nineteenth century. Yet for most of the modern era, we have conceived of “physical science” as something distinct from philosophy, with the latter being more abstract and qualitative, while the former deals with quantification of the empirical. Still, the scientist’s mania for objectivity does not erase his concern for meaning, and if physics is something more than mere quantitative description, it retains philosophical aspects even today.
Keeping in mind that distinctions among the sciences are at least partly a matter of arbitrary choice, we may find useful this working definition of physics: the science that seeks explanations of changes in the sensible world.
Seeking explanations is essential to science, as mere description does not deepen our understanding. When we explain something, it must be in terms of some principle other than itself, or else we are just indulging in circular reasoning. It will not do to say “The sky is blue because it has blueness,” as the latter expression is just a verbal rearrangement of the former, adding nothing substantive to our understanding.
Physics deals with changes, as the Greek term physis suggests change or motion. Thus even Aristotelian physics is essentially non-static, being concerned primarily with accounting for change.
We confine physics to the study of changes in the sensible world. Such changes inarguably exist, if we are to give any credence whatsoever to the senses. Physics presupposes that at least some knowledge is attainable via the senses. Further, insofar as physics is confined to explaining changes in the sensible world, all its inquiries must begin with some sensory observation.
The distinction between physics and metaphysics may be drawn in at least two ways. First, physics deals with changes in the sensible world, while metaphysics deals with the suprasensible world. Second, metaphysics deals with first principles that may account not only for the sensible world we observe, but also other possible physical worlds. Neither of these distinctions is as sharp as it may seem. Explanations of sensible changes may eventually lead us into a realm that cannot be sensed directly. The study of physics might even lead us toward first principles, or at least impose constraints on what these first principles may be like. We should not, then, insist on an absolute barrier between physics and metaphysics, as one elides into the other. Still, we shall try to confine our discussion to what is necessary to account for changes in the sensible world.
In this overview of basic issues in natural philosophy, I will mostly follow the order of Aristotle’s Physics, without confining us to its content. This is because the Physics does a good job of at least addressing the fundamental problems, even if later developments lead us to different solutions. Importantly, we are informed by a different notion of science than that of the Aristotelians. We do not insist on deductions from self-evident axioms, but instead allow the results of observation and experiment to inform our suppositions. Unlike most modern scientists, however, we do not dismiss philosophical problems as irrelevant to physics, but consider them to be what makes physics most worthy of study.
The ancient Greek philosophers Zeno and Parmenides had raised some seemingly unanswerable logical paradoxes regarding sensible change, so that they regarded change as an illusion, while in reality everything remained what it already was. Heraclitus, by contrast, adopted the view that nothing persists, so that the only reality is process or change itself. Both opinions attempted to solve the problem of how something that “is X” can become that which “is not X.” On the one hand, “being X” cannot come from “not being X,” for, as all natural philosophers agree, nothing comes from nothing as such. Yet if “being X” came from “being X,” then it was already X to begin with, and did not really change. So we must either dispense with the reality of change or with the notion of substantive, persistent beings.
Aristotle brought two important elements to the discussion: (1) a firm belief in the sensible world as a source of knowledge, and (2) a recognition that “being” admits of more than one sense or mode. The first conviction is shown in his criticism that to be a follower of Parmenides is to refuse to look at the world. After all, if we cannot know anything at all from the senses, not even the reality of change, then there is no basis for inquiry into natural philosophy. We cannot reason about physical explanations if there is no credible data to explain.
Inquiries into what it means “to be X,” which we call ontology, made possible Aristotle’s solution to the problem of change. In contrast with those who assumed that “to be” is univocal, Aristotle considered that there may be multiple senses or modes of being. He even considered the possibility, held by some contemporaries, that the notion of “being” might be dispensed with, as a mere grammatical formality that adds nothing to our account of reality. It is far from true, then, that he was naively led by linguistic considerations to treat “being” as a thing.
Confusion arises because Aristotle uses the term ousia (“be-ness”) to refer to something substantial, what he calls a primary being or subject. This term may be translated “being” in the sense of “something that is/exists,” as in “a human being,” or “a living being.” It does not refer to the act of existing or being, assuming it is proper to speak of this as an act. Aristotle did not articulate a clear distinction between being (essence) and existence, and we reserve such discussion for metaphysics. Still, it is clear that his term ousia does not involve treating being or existence as something distinct from the thing-that-is. Accordingly, he is not guilty of reifying the existential verb.
Aristotle’s solution to the problem of change is to identify two aspects of a being, a persistent aspect and an aspect that changes. He called the persistent aspect hyle (lit., “wood”), to signify material in the most generic sense, and the changed aspect morphe (lit., “shape”), to signify configuration in the most generic sense. Modern philosophers call this hylomorphism, which may give the misleading impression that this is some idiosyncratic opinion of Aristotle. In fact, anyone who holds that there are both persistent and changeable aspects of a being is effectively a “hylomorphist,” even if he uses different terms. The terms used by Aristotle were intended to be figurative, and he by no means confined himself to the limits of these figures. Hyle need not be extensive corporeal matter, but could be any substantive, persistent aspect of a being. Morphe need not be restricted to the spatial arrangement of corporeal matter, but can encompass all qualitative, quantitative, and relational properties of a being.
When we say something “becomes” or “changes into” an X, we are effectively presupposing three things: the hyle or persistent aspect of a being; the form X that is acquired; and the privation, i.e., the prior absence of X. Without anything persisting, we could not truly say that anything is the subject of change; rather, one cluster of properties would simply be replaced by another cluster of properties. Then we would be stuck with the problem of how something can arise out of its negative. Either it already was X, in which case it did not become X, or it arose from not-X qua privation of X, which would be as impossible as coming out of non-being as such. This paradox is avoided only by recognizing that a form (e.g., a quality or property) arises from its absence or privation (“non-X”) not by virtue of its privation, which would be absurd, but by virtue of the potential for change in a being’s persistent aspect (hyle). In other words, the basis of mutability is in the capacity of substantial being to take different forms. By virtue of this capacity, the old form (privation) is changed into the new form.
To modern eyes, this may seem to be a purely verbal account of change that explains nothing. The apparent explanatory weakness arises from our expectation that physics should provide definite accounts of determinate physical changes. This is a highly generic account of change, so only the vaguest explanation is possible. Still, this genericness is also an asset, since it is applicable to every sort of change in the sensible world. We should therefore look for persistent and changed aspects of a being in every particular change.
Since physics is the study of changes in the sensible world, it is only consistent that a “nature” (physis) should be considered as a principle of such change. Aristotle does not naively assume that any object named by humans corresponds to a natural object. We may distinguish natural objects from artificial constructs by observing that the former contain a principle of sensible change or stability. A bed, for example, is not a natural object, since it contains no innate impulse to change qua bed. Whatever natural impetuses it may have are based on its qualities as wood, or mass, or whatever other natural substance that may constitute it.
A principle of sensible change, called a “nature,” is that which is the source of such change. When a substance contains such a source within itself, it is said to “have a nature.” This does not mean that “a nature” is some concrete substance or sensible quality of an object, nor should we assume that it occupies a determinate place. When we say that a substance’s source of change is “within itself,” we are not identifying a spatial location, but merely stating that the source of change is intrinsic to the object’s constitution.
We observe that there are many objects that “have a nature,” i.e., are able to induce sensible change by virtue of how they are constituted. This is obvious in biology, where living things grow toward determinate forms and produce others of their own kind, though these processes still allow for some accidental variation. The fact that a foal grows into a horse and not a camel, and begets other horses rather than camels, is surely dependent on the intrinsic constitution of the foal, at least in part. There may also be incidental factors in ontogeny and reproduction that result in variations, but this does not abolish the relative regularity of results which is certainly a product of some common constitution among foals. To deny that there are objects with natural principles is to take one’s eyes away from nature, and to abandon any hope of scientific explanation.
Natural principles are by no means confined to biology. Even supposedly inert materials contain principles of nature. Heavy objects fall toward the center of the earth, while chemical substances can react and produce qualitative changes only in certain ways, depending on their determinate constitution. A “change” can be quantitative increase or decrease, qualitative alteration, local motion, generation or corruption (i.e., gain or loss of some form). Further, Aristotle allows that a “nature” might even be a principle of stability, or resistance to change. The Newtonian principle of “inertia,” resistance to change in velocity, is an example of such a nature.
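The Newtonian principle just mentioned can be stated compactly. This is a sketch in the standard modern notation (the symbols F, m, v, a are the conventional ones, not drawn from this text):

```latex
% Newton's second law: the acceleration produced by an impressed force
% is inversely proportional to the mass m, which thus measures the
% body's "inertia," i.e., its resistance to change in velocity.
\vec{F} = m\,\vec{a}, \qquad \vec{a} = \frac{d\vec{v}}{dt}
% With no impressed force, the velocity persists unchanged:
\vec{F} = 0 \;\implies\; \vec{v} = \text{const.}
% Inertia is thus a "nature" in the sense of a principle of stability.
```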
A question arises as to whether “nature” comes from the material or formal aspect of a substance. Some of the ancients, much like modern reductionists, proposed that the only “natures” were those of the material elements, and that all other “natures” we seem to discern are really just affections, states or dispositions of elemental natures. In other words, the natural activities of a living organism are really reducible to the activities proper to its material elements. The organism, as a natural agent, is nothing more than a complex state of interacting molecules. We may go further and say that the “nature” of a chemical compound is nothing more than the “natures” of fundamental particles, so that the compound is just a configuration of such particles. While there is much evidence strongly suggesting the reducibility of chemistry to physics and biology to chemistry, the mathematical complexity of such systems has precluded a definitive proof of reductionism. In physical chemistry, only the hydrogen atom’s wavefunction admits an analytical solution. In biochemistry, systems are far too complex and subtle to prove that energy is perfectly conserved and that there are no holistic aspects to organic motion.
The materialist reductionism described above is not truly material, but formal. The “nature” of a fundamental particle is defined not in virtue of its matter (i.e., being some definite “this thing”), but in virtue of its form (i.e., being some kind of thing, with some definite properties). Without deciding the question of whether the macroscopic “natures” we discern are really just configurations of microscopic natural activity, we may in either case recognize that “nature” is in form, rather than matter. If modern scientists fail to recognize this, it is because what they call “matter” (or “mass-energy,” or “wave-particles”) is really substance, encompassing both its material and formal aspects. Recall that the matter-form distinction does not imply a real separability of principles. In the sensible world, there is no matter without form or form without matter. Still, the distinction in principles is necessary to account for the real permanence and mutability that accompany every natural change.
A principle of change is a principle of “becoming some X,” where X is a form, confirming the appropriateness of identifying nature with form. When we say something “becomes X,” X is the final form or state, not the mere potential or starting point. A principle of change is more completely realized in the final form X, which is why we should correlate this principle with the form toward which a change tends, rather than its relatively amorphous beginning. Here Aristotle has in mind biological development. The “nature” of an organism is more properly defined by its mature form than by its seemingly amorphous seed. We may say that the more mature form is somehow contained in the seed, namely, as a molecular program that can generate the mature form.
A common criticism of the discussion of natural forms is that this confuses formal abstraction with natural reality. Yet this criticism itself confuses Aristotelianism with Platonism. Aristotle was highly aware of the distinction between formalism and nature, a distinction which seemed to have escaped Plato in the Timaeus. A natural object does not have the same kind of reality as a mathematical abstraction, yet at the same time mathematical forms are highly relevant to determinations of natural reality. Any account of natural forms ought to clarify the relationship between physics and formal thought systems such as mathematics.
Mathematicians, though they ponder attributes that can be held by physical objects, such as shape, area, and volume, consider these properties in abstraction from any physical object. The same is done by those metaphysicians, such as the Platonists, who think of Ideas as stand-alone entities. They are mentally abstracting forms from matter, but committing the mistake of thinking that such forms are really physically separable.
It may seem strange that we can successfully analyze forms as though they did not pertain to any definite object, which can never be the case in physical reality. Mathematical forms are related to each other in a logical or formal structure of hypothetical thought. Supposing Proposition A to be true about some mathematical forms, then Proposition B would also be true. The fact that there is some relational structure among mathematical forms, independent of their existence in some determinate object, suggests that forms have some existence independent of matter, as Plato thought. Yet for Aristotle, the reality of forms did not imply an existence separate from matter. Form is separable from its matter only in thought, not in natural reality. This was especially obvious to Aristotle since, in his time, mathematics was abstracted from motion, i.e., static, and therefore removed from physical reality.
Today, we have a much more powerful mathematics, which can model motion by treating time as a parameter along some curve. Since we can now discuss dynamics in mathematical abstraction, the distinction between mathematics and physics is less obvious. It is clear that mathematics may be invoked to analyze the dynamics of natural objects, yet the same mathematics has broader application, and can be considered in abstraction from any determinate physical system. Indeed, the mathematical formalism will hold just as well whether we interpret the parameter as time or as simply another real-valued variable. Our interpretation of differential calculus as representing motion is just one of many possible models; more generally, it describes the correlative variation of two or more variable quantities. We can interpret this as a “rate of change” only if we define one variable to represent time. Although it has physical applicability, mathematics as such is concerned only with number, extension, and the variation of these, abstracted from any physical object. Those sciences which use mathematical objects are concerned with them, not qua mathematical objects, but as models of physically real properties. Since the physical application is but a special case of the generalized mathematics, the logic binding the mathematical structure will also bind the structure of the physical system, insofar as the mathematical form truly characterizes the physical system.
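The point about interpretation can be made explicit. The derivative notation itself carries no reference to time; writing the parameter as s rather than t changes nothing in the formalism:

```latex
% A curve parametrized by an arbitrary real variable s:
x = f(s), \qquad \frac{dx}{ds} = f'(s)
% The derivative expresses only the correlative variation of x with s.
% Only under the physical interpretation s = t (time) does it become
% a velocity, i.e., a "rate of change" in the kinematic sense:
v = \frac{dx}{dt}
```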
When a mathematical form characterizes some physical property or relation (or system of such), it might be said that the mathematical form and the physical form are one and the same. This does not mean, however, that the physical form is separable from its matter. The distinction between mathematical form and physical form is mental. A form is mathematical when we consider it in abstraction from any determinate object bearing it.
The essential objective in physics is to explain changes in the sensible world, by appeal to more fundamental principles of change. To define the scope of physical inquiry, we must have a sense of what it means to explain a phenomenon. Aristotle famously identified four kinds of explanation, which have come to be known as the “four causes.” This nomenclature is unfortunate, since most of these do not correspond to what we would understand by causality. Still, the Latin causa does capture something of the Greek aition, which literally means “blame.” When we explain something in terms of more fundamental principles, we are answering the question “Why?” in one of its several possible senses. We are identifying those factors which are “to blame” for a phenomenon, i.e., are responsible for its appearance.
Some might object that physics is not concerned with the “why,” but only the “how” or “what.” If physics were truly restricted to the “what,” it would be purely descriptive, not explanatory. At most we might describe certain properties as correlated to other properties, yet even this entails an inquiry into some underlying formal relation, which is at least implicitly explanatory. Scientists have proven many such formal relations, so it is senseless to affirm that physics is restricted to the “what.” Still, it might more credibly be asserted that physics is concerned with “how” rather than “why,” as the latter seems to imply purposive intent in nature. Yet in the act of describing “how” a physical process occurs, we necessarily introduce material, formal, and efficient causation as explanatory factors. That these are within the scope of the question “Why?” can hardly be disputed, once it is admitted that this does not directly require purposive intent. Indeed, we will see that even the controversial “final cause” of Aristotle does not immediately require intentionality in nature.
Aristotle’s discussion of the four “explanatory factors” (following Richard Hope’s felicitous translation of aitia) is famously “object-oriented,” but there is no reason why these factors need to be limited to accounting for a substantial object. They might just as well be applied to account for the appearance of a property, or to account for the process of change itself.
If we want to account for the presence of some substance, or some property, or some process, we need to consider some underlying material aspect. We have already seen that matter is a necessary presupposition of sensible substance, from which it follows that any property, being existentially dependent on some substance, likewise depends on matter for its manifestation. If we were to deny the need for matter, we would have to deny that there is anything persistent in the reality of natural objects. The only viable alternative to our ontology is process ontology, such as that of Heraclitus. Yet if all that exists is change itself, and not some thing that changes, it is not clear what we are affirming to exist. Further, if there is no persistent aspect to reality, all that exists is a succession of states, with nothing to bind them, as in a continuous generation out of nothing, which is impossible according to the consensus of natural philosophers. Were generation out of nothing as such (i.e., for no reason) possible, then absolutely anything could occur at any time, which is utterly contrary to what we observe.
Most natural philosophers, ancient and modern, uncontroversially accept the need for some material factor in physical explanations. In fact, the pre-Socratic Ionians all tried to explain the world exclusively in terms of material factors, without reference to form. In hindsight, we can see that their account of matter included some elemental forms, but we may still consider their explanations materialistic in some sense. The underlying substance beneath a more macroscopic form may be considered the “matter” of the macroscopic substance. For example, bronze is the matter of a statue, though bronze itself, on a chemical level, has its own proper form. Since the essential chemical characteristics of bronze are not altered in the process of sculpting the statue, both the matter and form of bronze can be said to constitute the persistent “matter” that is changed into a statue. In short, material explanations of change appeal to some underlying persistent substratum beneath the sensible change.
Material explanations are obviously incomplete, since the underlying matter must itself have form, unless we are speaking of so-called “prime matter,” abstracted from all properties. Prime matter, being utterly homogeneous and persistent, cannot suffice to explain change, especially qualitative alteration. So we must introduce form as an explanatory factor at some point.
A form consists of characteristic qualitative and relational aspects of a being, enabling us to say what kind of a being it is. Identification of form can have physical explanatory power to the extent that form defines how a being can initiate certain kinds of change. For example, when we determine whether a certain being has the form of an “electron” or a “neutron,” we can know whether it is capable of electromagnetic force interactions. Knowledge of the characteristic properties of these forms helps explain determinate physical activities, so such form deserves to be called an aition in physics.
Formal causes can be misused, however, as not all forms have physical explanatory power. Failure to recognize this truth was a principal cause of the stagnation in theoretical physics among Aristotelians. Once it is believed that a thing has been explained physically by giving a formal definition, physics is reduced to purely verbal analysis. In our examples of the electron and neutron, by contrast, knowledge of natural characteristics is gained only after painstaking observation and testing of hypotheses. Only by repeated testing can we learn, to a high degree of probability, which properties are essential and which are merely accidental (i.e., dependent on particular circumstances, and not intrinsic to the kind of thing being studied).
Since a form is typically characterized by a quality or set of qualities, the question of which forms are natural came to be expressed as a distinction between primary and secondary qualities. Primary qualities are real physical properties that explain sensible changes, while secondary qualities are merely superficial, derivative appearances that have no physical power of themselves, save that of the primary qualities that underlie them. For example, we might consider “roughness” to be a secondary quality, as it is merely an epiphenomenon resulting from the shape and spatial distribution of a substance’s constituent particles. Still, even secondary qualities might be treated as natural forms, as the roughness or smoothness of a surface can have real physical explanatory power, accounting for the strength or weakness of the force of friction.
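A minimal illustration of the claim about friction, using the standard empirical model of dry friction (the symbols μ and N are the conventional ones, not drawn from this text):

```latex
% Amontons-Coulomb model: the static friction force is bounded by a
% term proportional to the normal force N, with the coefficient \mu
% depending on the character (roughness) of the surfaces in contact.
F_{\text{friction}} \le \mu\, N
% A rougher pairing of surfaces yields a larger \mu and thus stronger
% friction; so the "secondary" quality of roughness enters the
% physical explanation directly.
```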
Some secondary qualities, however, have no physical explanatory power whatsoever. Whether a piece of clothing is classified as a cardigan or a sweater tells us nothing about physics. The distinguishing qualities of such garments are chosen arbitrarily for ease of classification, but do not correspond to a distinction in natural principles of change. We may impose such arbitrary distinctions even on natural objects. We might classify plants by how many branches they have, but this would not identify natural species. When we categorize natural objects into types, we must take care to show that these distinctions are correlated to different abilities to effect physical change.
Form may consist of relations, rather than qualities inhering in a concrete object. In modern physics and chemistry, we more commonly express form in terms of mathematical equations, which signify formal relations among various physical properties. Thus form may be an explanatory factor of properties abstracted from substances, and even of processes. This means that the use of “formal cause” need not confine us to an object-oriented physics.
The third type of explanatory factor, traditionally called “efficient cause,” agrees with what we ordinarily mean by a “cause,” as in “cause and effect.” This was true in Aristotle’s time no less than our own. Still, this common notion is not so perspicuous when we examine it closely. A cause is the primary source or origin of a physical change (or resistance to change). It is not immediately clear why this description of our intuition should not just as well apply to the material and formal “causes” discussed earlier. In fact, Aristotle allows that the formal and efficient causes may in some cases coincide, but why should not the material be regarded as the primary source of a change?
Recall that matter as such is the persistent aspect of a being, and the unchanged as such cannot account for change. Thus matter cannot be an efficient cause of physical change. Matter, however, never exists by itself, but as an aspect of some substance with definite form. Thus matter might be considered to contribute to efficient causation incidentally, by making possible the existence of the substance that effects change.
Efficient causation is something much broader than the intuition of substances affecting other substances. The efficient cause of a change might not be a substance, but rather some other change or process. This agrees with most modern physical analysis of causality, where we speak of events causing events. We consider one process or activity to be the origin or source of another process or activity. Transfers of momentum or energy are common examples of such causation.
In physics, it is commonly believed that an efficient cause is necessary to account for every natural phenomenon. When a scientist says, “There must be a rational explanation for this,” he generally means that there must be some physical efficient cause underlying an unexplained phenomenon. That efficient cause or origin of change may be a substance or process; in any case, there is a sense that we cannot get something for nothing. Physical change does not occur for no reason. While it is impossible to prove empirically that this will always be the case, so far science has been able to progress admirably on the assumption that every natural phenomenon has some underlying efficient cause.
The notion of efficient causation is relevant to our conceptualization of time, since an efficient cause must be temporally prior to its effect, or perhaps simultaneous with it, but never posterior to it. Thus the direction of efficient causation is aligned with our intuitive direction of time. This has led to a convenient shorthand in relativistic models, where “events” are represented as spatiotemporal points, which may be in causal relationship with each other depending on their relative location with respect to each other’s “light cone.” This useful representation should not be taken as proof that spatiotemporal points are themselves “events” (i.e., processes) with causal efficacy. Rather, these points indicate possible “locations” of events, which may or may not be efficacious, depending on their determinate physical activities.
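In the standard relativistic convention, the light-cone relation just described can be stated as a condition on the spatiotemporal separation of two events (the symbols c, Δt, Δx are the conventional ones, not drawn from this text):

```latex
% Two events can stand in a causal relation only if one lies within
% or on the other's light cone (c = speed of light):
c^2 (\Delta t)^2 - |\Delta \vec{x}|^2 \ge 0, \qquad \Delta t > 0
% Spacelike-separated events (outside the cone) can be neither cause
% nor effect of one another. The cone bounds possible causation; the
% spatiotemporal points themselves have no causal efficacy.
```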
At first glance, it might seem that “efficient cause” is just another name for what we have already called a “nature,” since both are described as an origin or source of sensible change. In fact, these concepts are overlapping, but not coextensive. A substance is said to “have a nature” only insofar as it contains a principle of change by virtue of its intrinsic constitution. Yet there can be causation even when there is no such intrinsic principle acting directly. We may distinguish between when an object changes or moves by virtue of its intrinsic constitution and when it is changed or moved by some extrinsic force, as in the classical distinction between “natural” and “violent” motion. An external force may result directly from the intrinsic or “natural” impetus of some other substance, or it might be merely an effect of some other external force. While it seems that all violent motion must ultimately be referable to some natural origin, it is nonetheless clear that efficient causation can occur even when there is no “nature” directly at play.
A “nature,” we have noted, may be defined by the form toward which it tends, so it is no surprise that “nature” and “efficient cause” may coincide, insofar as a form may act as an efficient cause. That is to say, the properties or intrinsic constitution of a substance may serve as the origin or source of some change. Yet there may be efficient causation even without direct reference to a substance’s form, as happens with violent motion. We may see a process, expressed as force or energy, as itself an efficient cause, without reference to anything substantial.
Still, it would seem that efficient causation ultimately depends on some kind of form, not only because all violent change is ultimately referable to some natural origin, but also because even extrinsic forces possess a kind of form. This form is expressed in terms of mathematical relations such as so-called “laws” of conservation or force equations. Here the form is not (so far as we know) the set of properties of some substance, but the structure of a relation between two or more properties or activities, considered in abstraction from their determinate subjects.
Demonstrating the presence of causation by observation can be problematic, so much so that several empiricists, notably Hume and Russell, have argued that science could do without the notion of causality. Instead, we may speak only of phenomena being more or less strongly correlated. If we observe that phenomenon B always occurs after phenomenon A, we can use A to predict the occurrence of B, but there is no way to prove that A is in any way the source or origin of B. Succession in time is no proof of origin, since it could be that both A and B originate from some unknown cause C, which generates A and B sequentially.
This critique of causality would be sound if physical analysis were restricted to mathematics. There is indeed no way to demonstrate causality from purely quantitative relationships. In Newtonian mechanics, where an initial state A strongly determines a final state B via a fixed equation, it is no less true that B determines A, but no one says that the future causes the past. Some physical relations, such as statistical mechanical phenomena, are not time-reversible, but this does not prove that each successive state is caused by the prior. If anything, the mathematics suggests that the formal relations described by equations “cause” or generate the succession of states, though each state, once determined, affects the determination of a later state. It is commonplace in science to admit that “correlation does not prove causality,” but then what does?
It is one thing to deny that we can formally demonstrate that one thing is the efficient cause of another, but it is another thing to say that there is no causality in nature. If there were no causality, but just a succession of things that “happen to happen,” our physical explanations would be reduced to mere description, no matter how quantitatively detailed they may be. Further, it would follow that it is not at all necessary for any phenomenon to have a natural origin or source. In this view, we routinely get something out of nothing, but if this were so, there would be nothing to prevent absolutely anything from happening, without respect to any formal mathematical law or structure. This is so far from what we observe that we need not consider it.
While it is practically certain that there is causality in nature, as is the common consensus of scientists, who indeed pursue science in order to understand causes, it is nonetheless problematic to demonstrate that something is a cause in a determinate instance. If we are to have any hope of showing physical causality, our analysis must admit something besides mathematics.
Causation can be apprehended by observations of a different order than quantitative description. This is often characterized as perception of a “mechanism” by which a phenomenon occurs. Consider yourself walking on a sidewalk, and stepping on a hard, cylindrical unopened pine cone. You feel your foot roll with the pine cone and lift into the air, upon which you lose your balance and fall to the pavement. It would be extraordinarily obtuse to deny that your fall was caused by stepping on the pine cone. Even though we have not made any statistical analysis of repeated trials, you are convinced of causation because you felt the resistance and rolling of the pine cone affect your step, and felt yourself lose balance as your foot rose into the air. This continuum of force interaction, by virtue of its immediacy, presents powerful evidence of causation as an origin of action. When we directly perceive one action seamlessly growing out of another, there is no longer a question of mere correlation between two discrete events, but instead we apprehend a continuous whole from start to finish. If you were to fall several steps after stepping on the pine cone, by contrast, you may be less certain as to whether the first event caused the latter. Other intervening factors may have contributed, so that the sequence of events was mere coincidence. Immediacy and continuity are necessary to convince us of causality.
This is amply borne out by how physicists have historically conceived of causality. Some of the most convincingly causal phenomena are those of mechanics, where there is direct contact between objects, transferring force, momentum, or energy from one to another. For this reason, physicists were long reluctant to admit any action-at-a-distance, and have tried to explain forces between distant objects in terms of direct contact with a local field. The seamlessness of causality is even more apparent when we consider, instead of interacting objects, the continuous change of a single object, as in local motion, growth, or alteration. If we were to deny causality here, we would have to admit an innumerable succession of states not caused by each other or sharing a common cause, though they are infinitesimally removed from each other (in location, size or quality, in the examples given). This is no less absurd than denying the reality of change, insisting that one thing pops into existence after another, with no underlying persistent being. Clearly, it is infinitely more parsimonious to admit there is a single underlying cause for the continuum of action. Whether we conceive of this as one state causing the next, or as a single cause generating the continuum of states, the reality of efficient cause is unavoidable.
Our analysis is consistent with the basic idea that efficient cause is a source or origin of action, for in the continuum described, each action is an outgrowth of what preceded. It is no accident that causality produces successive actions in time, since time itself is defined by the succession of events. In fact, it is not altogether clear if the notions of causality and time are separable.
When we are dealing with sensible change, efficient cause is an origin with respect to time, which is the measure of sensible change. Still, it is conceivable that there could be a broader notion of efficient cause, which is an “origin” of change in an atemporal sense. Such a cause would be properly metaphysical, insofar as it is not immersed in the order of time that defines sensible changes.
The so-called “final cause” or telos is by far the most controversial of Aristotle’s four explanatory factors, since it seems to invert cause and effect and bring conscious intentionality into nature. This perception results from abuses of the principle found among the late Scholastics, as well as misinterpretations of Aristotle, resulting from his frequent use of analogies between art and nature. In fact, the telos does not immediately imply intentionality or deliberation in nature, nor is it meant to be a “cause” in the sense of a source of change.
The notion of final cause entails that certain natural processes tend toward some definite end, or at least along a definite trajectory, which is a path or endpoint “preferred” over others. This “preference” is not to be understood as a conscious choice (except when a conscious being is the agent), but it simply expresses that the process is so structured as to tend toward one outcome (or set of outcomes) rather than another, and this is not by mere chance or materialistic necessity.
Those who deny that there is any telos in natural processes must explain them in terms of chance or necessity. For example, if rain falls and nourishes crops, or spoils them by falling too frequently, it should not be said that rain fell in order to nourish the crops or to spoil them, but that this was mere chance or coincidence. In other words, the causal mechanism of the water cycle is incidental or accidental to the needs of the crops. Yet even here, there is not pure chance or coincidence, since rain arises by a definite mechanism, as is proved in part by the fact that it is regularly more frequent in certain seasons, which should not be the case with a purely random phenomenon. Any natural process that regularly leads to a similar end (even if outcomes are not strongly determined or identical) cannot be explained by chance alone.
The question remains whether natural processes are determined by materialistic necessity or tendency toward some form. Ontogenesis provides a test case for these interpretations. The embryological development of an animal proceeds through definite stages, in apparent analogy to building a house. Just as each intermediate stage in house construction exists not for its own sake, but as a means to the next stage, and ultimately the completed stage or telos, so too do intermediate stages of embryological development have no function other than to make possible the completion of the mature form, which alone is capable of acting as a fully autonomous animal, finding its own food and reproducing.
This teleological interpretation need not exclude the presence of mechanistic causality in the execution of the process, just as there is efficient causation in the building of a house, but there is still a role for the telos in explaining the intermediate stages. Recall that the telos is an explanatory factor, not a “cause” in the ordinary sense. We should not have given a full physical explanation of ontogenesis if we just described the mechanism in each stage of development, while ignoring that the process as a whole is tending toward a definite form. A telos has explanatory power even in modern biology, as it is helpful to study embryonic structures in light of the function they will have in a mature form. Lungs are useless to a human fetus, yet they must develop early enough so that they will be ready to function after birth. We cannot fully account for their presence in the fetus without reference to the mature form.
Objections to teleology in nature are grounded in the belief that this directly implies conscious planning or intent in natural processes, analogous to that of an architect designing a building. No such implication is necessary; in fact, Aristotle uses the telos to account for phenomena in organisms that emphatically do not act by conscious art or deliberation. Examples include a spider spinning its web in order to catch prey and a plant growing leaves for the benefit of its fruit. No philosopher contends that the spider or the plant are capable of deliberation in these complex processes. Nonetheless, the processes cannot be explained without reference to their end. If we said that a spider spins a web only because it happens to have spinnerets with this capability, we would not be fully accounting for the web. The need to capture prey to eat is highly relevant to this process, even if the fulfillment of this need is not mediated by conscious planning. Likewise, the function of leaves in nourishing and shielding fruit cannot be ignored in our physical account of leaves, even before the fruit actually forms. It is precisely because such processes do not happen by art, but by nature, that we attribute a telos to nature.
A telos can be a useful explanatory factor even within the current neo-Darwinian paradigm in biology, which explains phylogeny in terms of natural selection of random variations. The notion of adaptation implies an end; a creature always adapts “to” something; i.e., its constitution is changed for the sake of maximizing its chances of survival and reproduction in its current environment. There need not be any conscious intent in such variation; changes in genetic code may be accidental to the end of survival, while only those changes that happen to improve chances of survival and propagation will be more widely occurrent in the long run. The telos is not a conscious goal, but the consummation of a process. When a population of similar creatures is sufficiently well adapted to sustain its numbers, the relative absence of threat diminishes the need for constitutional change, so we have a more or less stabilized type that we identify as a species. The form of the species was not a preconceived goal, but the consummation or completion of a process of adaptation, which might be renewed again as new dangers arise.
A natural process toward some definite end or trajectory might be impeded by external forces, no less than any other natural motion. We may say such a process is frustrated when an extrinsic factor causes it to be terminated in some intermediate stage, or else redirected toward some abnormal outcome. This occurs, for example, with monstrous births caused by exposure to chemicals or radiation. Yet Aristotle, not being a strong determinist, allows that even the natural process itself can “make a mistake,” occasionally producing the wrong outcome. This interpretation is compatible with modern findings that many biological and chemical processes are sufficiently complicated as to have a stochastic aspect, best modeled on the assumption of randomness within constraints. In any case, failure to produce the “preferred” outcome (i.e., the outcome toward which the process is structured) is accidental to the teleological aspect of the process, and such alternative outcomes do not constitute alternative teloi.
While a telos is not a conscious goal, it is still something more than a mere endpoint (eschatos). The temporal end (eschatos) of an animal is its death, yet death is by no means the consummation or completion of the animal, but its corruption and disintegration. Death is not the telos of the animal, but its undoing. While it is alive, the animal’s biological processes are structured to help it survive as best they can, until they can achieve this no longer. In many species of insects and fish, an individual is designed to sacrifice itself at a certain stage of development, once it has secured the generation of offspring. Even here, death is not a telos, but a means to another end.
Teleology need not be confined to biology, if we admit that a telos need not be a definite endpoint, but is instead a preferred trajectory or tendency that may continue indefinitely. In physics, this may be represented by the direction of a force or momentum vector, for example. Motion of an object characterized by such a vector may be explained as definitely aiming in some direction, so that we may use this direction to explain the motion, rather than seeing it solely as a succession of states causing each other. Admitting the reality of this definite tendency or aim enables us to project and predict future motion along the preferred trajectory, which will certainly occur unless something intervenes. Telos is a real explanatory factor, adding to our understanding, even if it does not correspond to a distinct causal agent.
In reality, the telos may sometimes be identical with the form or with the efficient cause. The example of ontogenesis suggests an identity with form, while a force vector represents both the efficient cause and the future trajectory or telos.
Even with these clarifications, the arguments for natural teleology may prove unconvincing, if we could show that all apparent finality in nature is adequately accounted by some combination of chance and material necessity. To consider such a possibility, we must first clarify what is meant by chance and necessity in nature.
The terms “randomness” and “chance” are often used in the context of physical explanations, but it is not clear if such concepts really explain anything, or if they declare the absence of an explanation. Further, it is unclear if there really is any such thing as an essentially random physical process, or if events are random only accidentally, meaning they are not directly linked to each other by causality, though each is driven by its own chain of efficient causes. Natural randomness or chance may also be confused with “luck,” which is chance considered in the context of outcomes we expect or prefer.
There is certainly such a thing as luck or chance at least in an accidental sense. To use Aristotle’s example, if someone goes to a marketplace and happens to encounter someone he wished to meet, such as a debtor, he would consider this to be lucky. This means merely that he went to the marketplace for some purpose other than collecting repayment, this latter benefit being accidental. It does not imply that he or his debtor went to the marketplace for no reason.
Sometimes natural philosophers have appealed to luck or chance as an explanation of origins, when they have exhausted the materialist causes they ascribe to terrestrial affairs. Thus the heavens are said to have been created by random motions of matter, organizing itself into definite forms. Yet it is strange, Aristotle observes, to try to reduce terrestrial affairs, where we observe luck or chance, to material necessity, and at the same time to ascribe the formation of the heavens, which behave with regularity, to chance. A modern analog of this paradox is found among materialists who try to reduce all biology, including humanity, to deterministic necessity, yet invoke randomness or chance when explaining the origin of species, or of celestial bodies, or of the cosmos and the natural order itself. Why bother demanding physical explanations for complex phenomena if chance is at the bottom of everything?
It would seem that such invocations of chance or randomness are designed to dismiss objections to gaps in one’s physical theories, or to put a limit to scientific inquiry. If it were really the case that chance is a fundamental principle of nature, it should be incumbent upon scientists to produce a theory of natural randomness. Only quantum mechanics might constitute or at least approximate such a theory. All other scientific theories that invoke randomness merely borrow from classical mathematical probability theory, supposing an underlying physical determinism.
Any theory of randomness ought to begin with a clear definition of the subject of inquiry. Aristotle attempted to define “chance” and “luck” as follows. We attribute to “chance” those events that happen to some end or result, but not with the end-result in view. “Luck” is a special case of chance, when we are dealing with possible choices made by beings capable of conscious choice. “Chance” phenomena can be driven by some efficient cause which results in some end, but not for the sake of that end. In other words, the end result is not a natural telos of the process that yielded it. This can be demonstrated by multiple iterations of the process, which only occasionally yield that result. If the process contains no intrinsic tendency toward the result in question, then we may say the outcome is due to chance.
This preliminary definition can be refined with the aid of modern probability theory, though we already see that randomness and teleology are competing modes of explanation. We attribute an outcome to chance insofar as we deny that there was any teleological link between process and result. We need not deny efficient causation, for it could be that the outcome results from two or more causal factors that are independent in their immediate origin.
Classical probability theory, developed in the ethos of seventeenth-century mechanism, presupposes an underlying determinism. Randomness is only apparent, with respect to our ignorance of determinate initial conditions. All physically possible states are assumed to be equally likely, and the probability of an outcome is computed by taking the number of states corresponding to that outcome as a fraction of the total number of possible states.
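The classical computation may be made concrete in a short sketch. In this illustrative Python fragment (the function name is our own, not a standard library facility), probability is simply the count of states realizing an outcome taken as a fraction of all equally likely states:

```python
from fractions import Fraction

# Classical (Laplacian) probability: every possible state is assumed
# equally likely, so the probability of an outcome is the number of
# states realizing it divided by the total number of states.
def classical_probability(favorable_states, all_states):
    return Fraction(len(favorable_states), len(all_states))

die_faces = [1, 2, 3, 4, 5, 6]
even_faces = [f for f in die_faces if f % 2 == 0]

p_even = classical_probability(even_faces, die_faces)
print(p_even)  # 1/2
```

Note that nothing in this computation refers to the physical process of rolling; the determinism or indeterminism of the mechanism is hidden behind the assumption of equally likely states.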
If we are dealing with physical possibility (as opposed to mere logical or conceptual possibility), then “possibility” must mean that which can be done physically, i.e., by some physical process. There are two ways in which the same physical process can produce different results. First, the process may start from various initial conditions that affect outcome. This is compatible with strong determinism, and in such cases apparent randomness is due to our ignorance of initial conditions. Second, it could be that the process itself is not strongly deterministic, so that it may produce different outcomes even from identical initial conditions. Quantum mechanics seems to describe such processes, though some have argued that there is an underlying determinism to these phenomena.
In the first case, where apparent randomness is due to ignorance of initial conditions, the classical assumption of equal probability of all possible final states presumes that the distribution of initial states is not biased toward one outcome over another. This can only be the case if the process of preparing an initial state is indifferent to the outcome of the trial in question, hence the assumption has been called the “principle of indifference.” In practice, this usually means that the process of preparation is sufficiently complex as to defy prediction, even in a deterministic system, due to the difficulty of computation or sensitivity to initial conditions. Familiar processes of preparation include shuffling a deck of cards or shaking a die, both of which, though presumably deterministic and causally linked to outcome, are sufficiently complicated as to effectively “randomize” the initial state, so that we are no more likely to prepare a state that favors some particular outcome.
The principle of indifference also entails that the process itself is indifferent to outcome. We assume that the act of a released die tumbling through the air does not favor one outcome over another. When this assumption is false, as in the case of a loaded or weighted die, we will have an uneven distribution of final states.
We can test the principle of indifference empirically by conducting repeated trials and computing the frequency of each final state. We might even choose to define probability in terms of such frequency, which would allow us to treat systems where discrete states do not have equal probability.
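Such an empirical test is easy to sketch in simulation. The following fragment (a toy model, using a pseudorandom generator with a fixed seed so the trial is reproducible) rolls a simulated fair die many times and checks that each observed frequency lies near the classical 1/6:

```python
import random
from collections import Counter

# Empirical test of the principle of indifference: roll a simulated
# fair die many times and compare observed frequencies to 1/6.
random.seed(0)  # fixed seed so the trial is reproducible
trials = 60_000
counts = Counter(random.randint(1, 6) for _ in range(trials))

for face in range(1, 7):
    freq = counts[face] / trials
    # Each observed frequency should lie close to the classical 1/6.
    assert abs(freq - 1 / 6) < 0.01
```

Of course, a simulation presupposes the very indifference it tests; for a physical die, only actual repeated trials can reveal a bias such as loading.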
This talk of “indifference” or “favoring” outcomes seems to anthropomorphize nature, raising the question of how we should interpret probability. Is it just a measure of our ignorance, with equal probabilities grounded only in a “principle of insufficient reason”? Or is it a measure of a real propensity or tendency for a process to produce one outcome rather than another? A third interpretation would be that the only physical reality to probability is the frequency of outcomes; what we call probability is just a useful computational tool. This frequentist model takes probability outside the realm of pure mathematics, for we cannot know the probability of anything without taking some real measurements.
Frequentist probability would define the probability of an outcome to be the limit of the ratio of that outcome’s frequency to the number of trials as the number of trials becomes arbitrarily large. There are some situations where we can know this limit with exact certitude, as in drawing from a deck of cards. Given that we always cycle through the same deck of cards, the probability of drawing the ace of spades is certainly 1/52 in the long run. Yet what about rolling a die? What is there to guarantee, for example, that we will eventually roll a 4? We can invoke the “law of large numbers” only if we rely on the axiomatizable probability theory of mathematics. If probabilities are grounded in frequency, on the other hand, we have no such guarantee, since we cannot assume the equal probability of states, but must observe it.
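The frequentist limit can be illustrated by simulation. In this sketch (drawing with replacement, so every draw is from the same full deck; card 0 is arbitrarily taken to stand for the ace of spades), the running frequency of the target card approaches 1/52 as trials accumulate:

```python
import random

# Frequentist probability: estimate P(ace of spades) as the observed
# frequency of that outcome over many draws with replacement.
random.seed(1)  # fixed seed for reproducibility
deck = list(range(52))   # card 0 stands for the ace of spades
ace_of_spades = 0

draws = 520_000
hits = sum(1 for _ in range(draws) if random.choice(deck) == ace_of_spades)
estimate = hits / draws  # should approach 1/52 ≈ 0.0192
```

The caveat in the text applies here too: the simulation's pseudorandom generator guarantees convergence by construction, whereas for a physical die or deck the limit is an empirical hypothesis, not a mathematical certainty.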
An apparent physical correlate of the law of large numbers is the ergodic principle, which states that all physically accessible microstates become equiprobable over a sufficiently long period of time. This is a theoretical assumption, not something proved, and it can only be approximately verified in particular systems by observation.
In probability theory, we characterize events as “independent” if one event’s outcome does not alter the probability of another event’s outcome. In this situation, we can find the combined probability of two outcomes (one in each event) by simply multiplying the probabilities. On the other hand, we have dependent or “conditional” probability when the outcome of one event affects the probability of a subsequent event. If we have drawn the ace of spades from a deck, then the probability of drawing the queen of hearts now becomes 1/51.
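Both rules can be checked with exact arithmetic. A minimal sketch, using the card example from the text (drawing without replacement) alongside an independent case (two separate fair dice):

```python
from fractions import Fraction

# Independent events: probabilities simply multiply.
# Two separate fair dice both showing 6: (1/6) * (1/6).
p_two_sixes = Fraction(1, 6) * Fraction(1, 6)  # 1/36

# Conditional (dependent) probability: drawing without replacement.
# Once the ace of spades is removed, only 51 cards remain.
p_ace = Fraction(1, 52)
p_queen_given_ace = Fraction(1, 51)
p_both = p_ace * p_queen_given_ace             # 1/2652
```

The multiplication rule is the same in both cases; dependence enters only through the conditional factor, which reflects how the first draw has altered the physical situation.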
It could be that, in reality, all probability is conditional. After all, when we ask “What is the probability of X?” as a physical problem, this is never done in a void, but with some determinate physical assumptions or conditions. Thus the range of possibilities is limited or conditioned by physical laws or states. If there were absolutely unconditional randomness, with a total lack of constraint on outcome, we would deny the first tenet of natural philosophy, namely that nothing comes out of nothing as such. Even the apparently natural randomness of quantum mechanics operates within some definite (though perhaps not absolute) constraints.
If all probability is indeed conditional, then our characterization of events as “independent” can only be an approximation. All physical events are causally related if one goes back far enough (in the order of causality, though not necessarily in time), so when we say they are “independent,” we just mean that their causal kinship is sufficiently remote that their probabilities are now so weakly correlated as to be unmeasurable. This can happen swiftly even in systems supposed to be deterministic, such as weather patterns, due to hypersensitivity to initial conditions.
Probabilistic independence, nonetheless, is an axiom underlying the theory of quantum mechanics. The probability space of a system is indirectly represented by a Hilbert space spanned by vectors representing possible states. The squared inner product of vectors in this space yields probabilities, so orthogonal vectors represent mutually exclusive states. A Hilbert space is applicable to quantum mechanics because of the theory’s superposition principle, which supposes the absence of interaction among eigenstates; i.e., their independence. Without this supposition, the probabilities of the eigenstates would not sum to unity.
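The arithmetic of this representation can be shown in a toy example. Here a state over three orthogonal eigenstates is given illustrative complex amplitudes (our own values, chosen only so the state is normalized); the squared magnitudes are the outcome probabilities, and independence of the eigenstates is what lets them sum to one:

```python
import math

# A toy quantum state over three orthogonal eigenstates: the squared
# magnitudes of the complex amplitudes give the outcome probabilities.
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j, 0.0 + 0.0j]  # illustrative values

probabilities = [abs(a) ** 2 for a in amplitudes]  # 0.36, 0.64, 0.0
total = sum(probabilities)
assert math.isclose(total, 1.0)  # mutually exclusive outcomes sum to 1
```

If the eigenstates were not mutually orthogonal, the squared magnitudes would include cross terms and could not be read directly as a probability distribution.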
For randomness to have an explanatory role in physics, it must be somehow related to physical causality. If natural randomness were genuinely acausal, it would not be an explanation, but the absence of explanation. We might more readily accept natural randomness if it is mere indeterminacy of outcome within a causal mechanism; i.e., the same cause may produce various outcomes as possible effects.
Non-deterministic systems may actually be more clearly expressive of causality than deterministic systems. Strong determinism has a time symmetry where the future determines the past no less certainly than the past determines the future. In statistical mechanics, by contrast, we find temporal asymmetry, so that an initial state clearly leads to a future state by some stochastic process which is irreversible. For example, a drop of ink in a tank of water will eventually be dispersed evenly throughout the tank, but we will never see the reverse process, though it is not absolutely impossible. The reason for this bias from heterogeneity to homogeneity is that there are innumerably many more possible states resembling a homogeneous state than there are for the ink to remain concentrated in a drop. Here we use classical probability theory, appealing to the sheer number of possibilities.
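The sheer disproportion of microstate counts can be made vivid with a crude model (our own simplification: each of N ink particles is independently in the left or right half of the tank, so configurations are counted by binomial coefficients):

```python
from math import comb

# Count microstates for N ink particles distributed between the two
# halves of a tank (each particle independently left or right).
N = 100
concentrated = comb(N, 0)   # all particles in the left half: exactly 1 way
near_even = comb(N, 50)     # an exactly even split: ~1e29 ways

print(near_even / concentrated)  # ~1e29: homogeneity overwhelmingly favored
```

Even for a mere hundred particles, the even split is favored by a factor of about 10^29, and real systems contain vastly more particles; this is why the reverse process, though not absolutely impossible, is never observed.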
Yet, in quantum mechanics, we have weighted probabilities even at the most fundamental level, without counting numbers of microstates. These unequal probabilities may be modeled by the orientation of a state vector with respect to eigenvectors, the latter corresponding to observable states. Still, we have a temporal asymmetry, due to the “collapse of the wavefunction” following a measurement (i.e., an interaction) and the time evolution of the wavefunction between measurements.
The conceptual problem of quantum randomness is that it appears to be fundamentally acausal, which is why so many physicists have fiercely resisted the Copenhagen interpretation. Although the time-dependent Schrödinger equation is strongly deterministic, and probability distributions are predictable, there is apparently no reason whatsoever why, in a given measurement, the outcome is one value rather than another. So perplexing is this problem, which seems to make a mockery of the need for physical explanation, that some have dared to postulate that all possibilities are in fact realized, in uncountably many universes. This highly unparsimonious “many-worlds” hypothesis would posit an infinity of universes to explain one, which is hardly less illogical than acausality.
If, however, we admit the possibility of physical non-determinism as consistent with causality, then there may be no reason to regard quantum randomness as acausal. After all, no quantum state comes out of nowhere, but is within the range of possibilities (or potentialities) defined by a wavefunction. The causal process is structured in such a way to admit various outcomes with differing frequencies. We cannot say “why” this outcome rather than another possibility was realized, but the outcome nonetheless has an intelligible cause. The fact that this cause had the power to effect other outcomes does not make it less efficacious in producing the observed outcome.
The admission of genuine randomness in nature, nonetheless, would entail confessing that physics cannot be a complete explanation of reality. Yet we should never have expected it to be, since from the outset we have confined its domain to the sensible world, and any physical theory relies on certain “givens” that are not to be explained by anything else.
It is no less common to appeal to “necessity” as a mode of physical explanation, and indeed this was the dominant mode during the heyday of strong determinism (the seventeenth through nineteenth centuries). In the Scholastic period, a different sort of “necessity” was invoked, that of formal rational demonstration. When we speak of “necessity” in physics, we must be careful to indicate its similarities and differences with more abstract notions.
Physical necessity, much like chance, can be an anti-teleological mode of explanation. When a process apparently acts toward some end, we may deny this by saying that it acts out of necessity. In other words, it is not with any end in view, but because it cannot act otherwise. Earlier, we noted that the vectorial tendencies of bodies may be interpreted teleologically. A necessitarian alternative is to view such tendency as a strongly determined quality of a body.
When we say something cannot act otherwise, we refer to physical impossibility. Thus physical necessity ought to be distinguished from logical, metaphysical or mathematical necessity. Something may be physically impossible while still being logically conceivable, i.e. involving no logical contradiction, or metaphysically possible, i.e., possible under some logically and metaphysically viable natural order other than that observed in our world. Mathematical necessity is akin to logical necessity, except we add certain axioms about number and extension to the axioms of logic. Physical necessity need not imply these other kinds of necessity.
It may seem that the converse should hold, i.e., that anything logically, metaphysically or mathematically necessary should also be physically necessary, since the former are stronger conditions. Yet this ignores that physical necessity differs also in kind, not just in scope. Logical, metaphysical or mathematical necessity should not be mistaken for physical causes or explanations, even though it is true that physics cannot violate these higher criteria. Aristotelian physics languished under this error of supposing that giving formal reasons was equivalent to giving physical explanations.
Physical necessity means that an effect arises under strict compulsion resulting from some natural principle, which at the same time prevents any alternative effect from being produced. Strongly deterministic systems operate under physical necessity. The “necessity” derives from the constitution of the natural principle, which is such that it invariably produces the same effect in a given condition.
Note that necessity is not the same as having a probability of 1. When we say something is necessary, we mean that there is some reason why this must always be so, not simply that it is always the case that this is so (as in so-called material implications). In the case of physical necessity, there is some physical reason or explanatory factor that accounts for the inevitable outcome.
Necessity may be contrasted with contingency, with the latter meaning that something occurs only under certain conditions. A contingent phenomenon depends on the occurrence of something else. That occurrence in turn may be contingent or necessary. If it is necessary, we have no need for recourse to further contingencies, for it is physically impossible for things to have been otherwise.
Even strongly deterministic physics may rely on contingencies, since the necessity it describes is contingent upon some determinate set of initial conditions. A strongly deterministic physics would not be fully necessitarian unless it was also affirmed that the initial conditions, or some infinite regress of conditions, were a matter of physical necessity. This would be akin to what the ancients called Fate, and indeed the Greeks used the same word ananke, which means compulsion, to describe fate or necessity.
There is another kind of necessity, called suppositional, where we impose constraints in view of some supposed end. For example, when we say that some surgery is physically necessary, we do not mean that it is strongly determined, but that it must occur on the supposition that the patient’s life is to be saved. In other words, the process is needed in order to obtain a specified result. This kind of necessity may impose constraints on the form an object may have. For example, if a tool is to be used for cutting, it can only have certain kinds of shapes and must be made of sufficiently durable material.
Suppositional necessity is compatible with teleology, though it does not require consciously intended goals. “Given that X performs a certain function, its material and formal aspects must fall within these parameters.” Such reasoning is helpful to physical investigation, and is properly in the domain of physics, since we use our knowledge of nature to determine what the constraints must be. We should not take such reasoning, however, as proof that the eventual function or result is an efficient cause or intended goal of the physical form. This mode of argument is found in modern discussions of the “anthropic principle” and other attempts to explain apparent “fine-tuning” in nature. Most physicists who use these arguments do not seem to fully appreciate that they are invoking a different kind of physical necessity from that of efficient causality.
Aristotle was anti-necessitarian in the sense of opposing material necessity. The structure of natural objects is not determined by elemental matter. Things do not organize themselves simply by their material constituents falling into place, sorted by heaviness or size. Formal principles must also be present in nature, and these direct processes toward definite ends. They do this not with absolute necessity, for they can often err, but they have a power of building. This conceptualization of natures as having powers of growth or construction, rather than acting under pure compulsion, enabled Aristotle to free himself from the fatalism of most Greek philosophy and religion. In a fatalistic world, nothing would really have the power to do anything, but all would be mere puppets of a mysterious Fate. Non-fatalism affirms that natural objects have real powers to effect change, and not just the appearance of such power.
This anti-fatalism helps explain why Aristotle opposed natural motion to violent motion. A natural power as such is not under extrinsic compulsion, but is a source of activity in the world. Necessitarianism would deny creativity and fecundity to nature, making it merely pushed about inertly. Such a view was held by medieval Arab philosophers, who taught that fire does not heat, but God creates heat whenever something is placed near fire. Early modern strong determinism is just a secular version of this physics of impotence, replacing God with mathematical laws. Newton and Descartes still ascribed these laws to God, making nature His instrument rather than a collaborator in creation. Atheists later invoked these laws as grounds for dispensing with the Deity, not realizing that they had also dispensed with nature as a creative power, succumbing to a new fatalism. The nineteenth-century vision of an eternal, steady-state universe, never adding or subtracting mass, energy or momentum, with all matter following inexorable laws of motion, was nothing less than a denial of natural power.
Despite his opposition to fatalism and strong determinism, Aristotle ultimately introduced a determinism of another sort, by pretending to decide physical questions from abstract deductions. This apparent logical necessity stifled empirical inquiry for centuries, and impeded the discovery of a physico-mathematical theory of dynamics. Medieval Christian philosophers, however, modified Aristotle to allow for God’s perfect freedom in the act of creation. This theological consideration led them to accept the radical metaphysical contingency of the natural order, a fact neglected by modern scientists. Even physical necessity would be a metaphysical contingency.
A revival of evangelical Christianity during the Renaissance emphasized freedom in creation and the immanence of Divinity in the natural world. The path was open for an affirmation of the creative power in nature, but this soon became submerged under the rationalistic theologies of the Reformation and the mechanistic determinism of the Age of Reason. Ironically, Christians now professed a sort of fatalism in physics (though still allowing for prior divine freedom), and it would take the atheist Nietzsche to reaffirm the fecundity of nature. Christians created the necessitarian physics of atheists, and an atheist discovered a conceptualization of physics fit for Christians and anyone else who sees the power of creation active in nature.
© 2014 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org