
14. Relativistic Spacetime

14.1 Special Relativity

14.1.1 Time Dilation

14.1.2 Lorentz Contraction

14.1.3 Minkowski Diagrams

14.2 General Relativity

14.2.1 Spacetime Curvature

14.2.2 Cosmic Homogeneity and Isotropy

14.2.3 Cosmological Constant

14.2.4 Metric Expansion of Space

14.3 Is Spacetime Substantive?

Modern relativity theory can be modeled with a four-dimensional construct called “spacetime,” which appears to make space and time different aspects of the same thing, rather than categorically distinct existents. This apparently contradicts our physical and philosophical understanding of the real distinctions between space (*locus internus*) and time. Still, a quantitative interdependence of space and time is consistent with the fact that each measures motion in a different respect. Place is the measure of the mobile *qua* mobile, and time measures a movement with respect to its termini.

Since the central insight of relativity is that local motion is relative, it is perhaps not too surprising that measures of time elapsed and length traversed are also relative. Yet we have seen that place and time are *not* mere extension, which is why they are distinct. The spacetime construct, however, treats only the extensive aspects of space and time, as hinted in our previous discussion. Now that we have established precisely what is meant by physical space and time, we can properly evaluate the degree to which the spacetime construct describes physical space and time, and judge what is meant by its apparent integration of these two physical existents.

The concept of “spacetime” is not essential to explaining special relativity, as is proved by the fact that Albert Einstein made no use of it in his famous papers of 1905. Nonetheless, Hermann Minkowski (1864-1909) showed in 1907 that special relativity could be modeled using a four-dimensional vector space, consisting of three Euclidean dimensions of ordinary space, and a “fourth dimension” corresponding to time, though not identical with it. More precisely, the fourth dimension measures *ct* (where *c* is the speed of light), which is the *distance* light travels in a vacuum (i.e., space without mass). Time as such is not an extension, though it can correspond to extension insofar as it measures some motion. Understandably, Einstein at first did not accept Minkowski spacetime as anything more than a mathematical tool, seeing that it was not a real metric space.

Minkowski defined the fourth dimension to take imaginary values *ict* (Diagram 1). Since *i*^{2} = -1 by definition, this convention lets the quadratic form on the vector space be written as a formally Euclidean sum of squares: *s*^{2} = *x*^{2} + *y*^{2} + *z*^{2} + (*ict*)^{2}. This has the advantage of defining a four-dimensional “distance” function of the same form as that of any real extensive space. It comes at the cost, however, of removing real number values from the fourth dimension.

If, on the other hand, we make all four dimensions real-valued (Diagram 2), consonant with their status as observable, measurable quantities (since we can only measure extensions to have real values), the quadratic form *s*^{2} on the space is no longer positive definite, as a minus sign appears in the last term: *s*^{2} = *x*^{2} + *y*^{2} + *z*^{2} - (*ct*)^{2}. This means the squared norm of a spacetime vector can be negative, and “distances” (square roots of the squared norm, by analogy with Euclidean space) can have imaginary values.
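The sign behavior of this real-valued convention can be checked with a short computation (an illustrative sketch, in units where *c* = 1):

```python
# Spacetime interval s^2 = x^2 + y^2 + z^2 - (c t)^2, in units where c = 1.

def interval_squared(x, y, z, t, c=1.0):
    """Quadratic form of the real-valued convention (signature +,+,+,-)."""
    return x**2 + y**2 + z**2 - (c * t)**2

# A mostly-spatial separation: positive s^2.
assert interval_squared(2.0, 0.0, 0.0, 1.0) > 0
# A mostly-temporal separation: negative s^2, so the "distance"
# sqrt(s^2) would be imaginary.
assert interval_squared(1.0, 0.0, 0.0, 2.0) < 0
```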

Regardless of which convention we choose, spacetime does not really constitute a kind of four-dimensional extension analogous to space. It is at best a kind of continuous magnitude, not an extension, and even this spacetime magnitude should not be taken as a measurable physical existent, since it must either have imaginary values for one of its dimensions, or else negative values for its “distance squared” function. Both conventions suggest that space and time are not parts of the same extensive whole, but are instead related to each other in antithetic complementarity. Spacetime still recognizes some categorical distinction between space and time.

The credibility of Minkowski spacetime is based on its accurate representation of the theory of special relativity, so we should look only to the latter to determine the model’s physical significance. The strangeness of the Minkowski pseudometric arises from Einstein’s choice of *c* as an invariant speed. This bizarre postulate is motivated by the fact that it keeps the laws of physics, especially those of electromagnetism, in the same form in all inertial frames of reference (i.e., frames in which some object is at rest, or frames in uniform translational motion with respect to a rest frame). The resulting algebra, regardless of whether we choose to use the spacetime model, yields the unintuitive phenomena of time dilation and Lorentz contraction, which suggest an interdependence of the perceived magnitudes of space and time intervals, and undermine any notion that these have absolute values.
Minkowski spacetime is useful insofar as it gives us a four-dimensional interval measure *s*^{2}, which is the only frame-invariant spatiotemporal quantity.

Consistent with the classical philosophical insight that time always measures some motion, Einstein insisted on a purely empiricist concept of time, so that we cannot treat time as an abstract parameter, divorced from any measurement. Physical time has no existence separate from the motion measured, so time can only be analyzed in terms of measurements. Given that nothing can travel at infinite speed, we can only measure the time at distant points by inference, e.g., by sending a signal of known velocity back and forth to some remote point, and inferring that the signal reached that point in half the time it took to return to us. On a terrestrial scale, signal transit times are sufficiently small that we can consider spatially distributed events as simultaneous. Even on astronomical scales, we might develop a self-consistent definition of synchronization across remote locations, as long as we remain in the same inertial reference frame.

Einstein defined simultaneity by assuming that when we bounce a light signal off of a distant object, the travel time in each direction is equal, so that the light hits the other object in half the time of the round trip. This is purely a convention, and not the only possible choice. The equally valid convention proposed by Henri Poincaré (1854-1912) in 1900 made only the minimal necessary assumption that the round trip time is a constant.
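Einstein’s convention amounts to a trivial computation (an illustrative sketch; the times are arbitrary sample values):

```python
# Einstein's convention: a light signal bounced off a distant point is
# assumed to take equal time each way, so the reflection event is
# assigned the midpoint of the round trip.

def einstein_sync(t_emit, t_return):
    """Time assigned to the distant reflection event."""
    return t_emit + (t_return - t_emit) / 2.0

# Signal sent at t = 0 s, received back at t = 2 s:
# the distant reflection is dated t = 1 s.
assert einstein_sync(0.0, 2.0) == 1.0
```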

The bizarreness of relativity arises when we compare time measurements in one reference frame with those of another frame moving at a relative velocity *v*. It turns out that, though the synchronization schemes of each frame are internally self-consistent, an observer in one frame will perceive that a moving clock “runs slow” compared to its rest frame. This “time dilation” works both ways. Suppose that I am moving at 0.5*c* with respect to another person. In my frame, I perceive that his clock runs slow, and in his frame, he perceives that my clock runs slow. This seems to be a contradictory result, until we recognize that measuring the time of a moving clock involves transmitting a signal across some distance, which is non-trivial when dealing with relativistic speeds. We are not making a pure time measurement, but must take into account the strange interrelationship between space and time.

The phenomenon of time dilation derives mathematically from Einstein’s postulate that the speed of light is the same in any inertial frame, regardless of how fast its emitting source is moving in that frame. This fact, confirmed in numerous experiments, cannot be confined to a mere idiosyncrasy of light, as it carries important implications for the metric structure of spatiotemporal dynamics, i.e., local motion in general. If we tried to synchronize distant clocks using slow-moving signals instead of light, or by moving synchronized clocks to distant locations, the time dilation effect would actually be greater. Synchronization by light-speed signals is the best that can be done even in principle, so time dilation is an inescapable feature of spatiotemporal measurement.

The invariance of the speed of light is profoundly counterintuitive, as it implies that a photon will not appear any slower even if I should move at 0.5*c* in the same direction. This forces us to reconsider the geometry of space, time, and motion, abandoning any sense of absolute velocity. Instead, our perceptions of distance, time elapsed, and (sub-light) velocity are utterly contingent upon a specified frame of reference. In the example above, I ought to have specified what is meant by moving at a speed of 0.5*c*, since this is only physically intelligible with respect to some other object as a reference point, and we will find that a photon has no rest frame.

Relativity shows us how measurements of distance and duration vary by reference frame, while preserving the form of the laws of physics and the invariant Minkowski pseudometric. We can convert spatiotemporal values to different reference frames by Lorentz transformations, which involve the factor γ = (1 - (*v/c*)^{2})^{-1/2}, where *v* is the relative velocity of reference frames. In particular, γ gives us the proportionate magnitude of time dilation.
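Both the Lorentz factor and the invariance of the Minkowski interval under a Lorentz transformation can be verified numerically (an illustrative sketch, in units where *c* = 1):

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor for relative frame velocity v (|v| < c)."""
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

def lorentz(x, t, v, c=1.0):
    """Boost coordinates (x, t) into a frame moving at velocity v."""
    g = gamma(v, c)
    return g * (x - v * t), g * (t - v * x / c**2)

# gamma for v = 0.5c: a moving clock's interval appears dilated
# by this factor (about 1.155).
assert abs(gamma(0.5) - 1.0 / math.sqrt(0.75)) < 1e-12

# The interval x^2 - (ct)^2 is invariant under the boost.
x, t = 3.0, 2.0
xp, tp = lorentz(x, t, 0.5)
assert abs((x**2 - t**2) - (xp**2 - tp**2)) < 1e-9
```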

What can it mean for time to move more slowly? After all, if time is the measure of motion, with what could we compare the supposed speed of time itself? The answer is comparison with the timeline of another reference frame. We need not be dealing with astronomically distant objects, but may observe time dilation in particles moving at relativistic speeds (> 0.1*c*) here on Earth. The radioactive decay rate of high-speed muons is observed to be slower than the rate observed in muons moving slowly in our rest frame. This is a perspective-contingent result. In the rest frame of the high-speed muons, it will seem that slow-moving (with respect to Earth) muons decay more slowly. We compare how much of the same type of movement (radioactive decay) has occurred in relation to a given reference frame. The measure of this movement tells how much time has elapsed, if we assume that physical processes operate at a fixed rate in their own rest frame.
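The muon case can be put in numbers. The muon’s proper mean lifetime is about 2.2 microseconds; the speed chosen below is merely illustrative:

```python
import math

# The muon's proper mean lifetime is about 2.2 microseconds; the
# speed below is chosen purely for illustration.
PROPER_LIFETIME_US = 2.2

def dilated_lifetime(v_over_c, proper=PROPER_LIFETIME_US):
    """Mean lifetime measured in a frame where the muon moves at v."""
    g = 1.0 / math.sqrt(1.0 - v_over_c**2)
    return g * proper

# In the Earth frame, the fast muon's decay appears slowed by gamma;
# in the muon's rest frame, it is Earth-bound muons that decay slowly.
assert dilated_lifetime(0.98) > PROPER_LIFETIME_US
```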

The alternative would be to suppose that every type of physical process changes its speed according to the laws of relativistic dynamics. This would be to situate the dilation not in “time,” but in physical process or change itself. This requires us to be able to define the speed of a process at some determinate location, yet this is contrary to relativity if physical processes are constituted by local motions. The assumption that each process operates at a determinate nonzero speed in its own rest frame is incoherent, insofar as it makes no sense for a process constituted by internal local motions to have a single rest frame, except as a classical approximation.

Time dilation does not imply an *intrinsic* slowdown of physical processes, nor should we expect this, since it would make intrinsic reality dependent on external measurement. We must keep in mind that time dilation is a feature of *measurements* as compared in different reference frames.

To describe it by simply saying “Moving clocks run slow” may be convenient, but it is also somewhat glib and misleading. For one thing, this statement suggests, quite contrary to relativistic ideas, that there is something absolute about motion. And, equally unfortunately, it suggests that some essential change occurs in the operation of the clock itself... [A.P. French, Special Relativity, p. 105.]

The “moving clocks run slow” description is inadequate, since we cannot unequivocally determine which clocks are moving or at rest, and this gives the impression that some intrinsic natural process is being altered, situating the dilation in this process rather than in time itself. The phenomenon of time dilation is better expressed as a feature of measurements compared between reference frames.

The ontological account of time most consistent with relativity is that offered by Ruggiero Giuseppe Boscovich (1711-1787) over a century before Einstein, namely that time is purely relational, and non-extensive. As noted earlier, the fourth dimension of “spacetime” is extensive only by virtue of converting time into distance. We now see that the notion that time itself is really an extension breaks down in relativity, since that extension has no definite magnitude, so time would have no determinate existence if it were mere extension.

Under relativity, the magnitude of *spatial* extension also seems to be relative, due to the phenomenon of Lorentz contraction, where the length of an object appears shorter than its rest frame length by the Lorentz factor γ. Yet Lorentz contraction is a consequence of some time elapsing between receiving signals from opposite ends of an object, and of not being able to determine exactly when the signal bounced off of each end. So Lorentz contraction is just another aspect of the problem of simultaneity, and does not require us to abandon the notion of space as extensive.

The non-existence of simultaneity at a distance implies the non-absoluteness of spatial measurements, expressed by Lorentz contraction. If A seems shorter than B from B’s rest frame and B seems shorter than A from A’s frame, there is no contradiction, since there is no “real” absolute length, not even the proper length (i.e., rest length), since rest is not absolute. Each frame must be considered on its own terms.

Supposing that we admit with Boscovich that time is purely relational, while extension belongs only to space, can the reality of spatial extension be preserved? It makes no sense to speak of a “rest frame” for some object of definite size, unless we can assume that the entire object exists at the same time. Nothing extensive is ever truly at rest, since each part has its own time flow, and it is only by supposing each part to be in time that the object as a whole can persist, effectively being treated as something that exists “at once.” In other words, contrary to Aristotle, it is only by virtue of being *in process* that an extensive quantity of substance can exist, not by virtue of being abstracted from time (i.e., as a static essence). This points to a flaw in Aristotelian ontology that was corrected by Thomas Aquinas, who introduced a distinction between essence and existence (*esse*, “to be”), the latter being more akin to act or process.

Special relativity forces us to accept that the *magnitude* of a time interval depends on our choice of reference frame no less than the motion it measures. Time, of itself, is mere succession without a measurable quantity, much like Bergson said. The “proper time” of an object’s rest frame cannot serve as an absolute measure, since no such frame is physically privileged, though it does give us a minimum duration. If we want to think of proper time as grasping what we subjectively understand by duration, then relativity teaches that time is subjectively or locally experienced. It is only when we try to quantify it, making it correspond to some distance traversed, that we get unintuitive results.

Relativity does *not* abolish the objectivity of time as succession, at least not locally. For every physical event, there is an absolute past and absolute future that is the same in all reference frames. This preserves the succession of causality, where a cause cannot be temporally posterior to its effect (though they might be simultaneous). Still, the relativity of simultaneity implies that there is generally no unique order of temporal priority for two distant events where neither can affect the other.

The invariance of causality and relativity of temporal priority are best understood with the aid of spacetime diagrams developed by Minkowski in 1908. For simplicity, we show only one dimension of space (*x*). Each spatiotemporal point is called an “event,” though strictly speaking, there could be more than one physical event at a given point. Besides, all observable physical processes or events, no matter how simple, involve some spatial extension and temporal duration, so their representation as points is only an approximation.

Since it is a feature of relativistic dynamics that nothing can travel faster than light,[1] and physical causation is assumed to be mediated by direct contact, it follows that all possible effects of an event at (0,0) are situated within a cone limited by the trajectory of light, *x* = *ct*. Similarly, all possible causes preceding the event must be within a negative “light cone.” The upper and lower light cones define the absolute future and past with respect to that event. They are “absolute” in the sense that they are the same for all reference frames, yet they are still defined relative to a determinate event (which is true even classically). The region outside the light cone, denoted “elsewhere,” cannot communicate any signal to or from the spacetime coordinates of the event in question. Events in this region are effectively “elsewhere” and cannot be synchronized with our event, so any attempt to bring them into a common chronology (e.g., by reference to shared effects or causes) will result in an equivocal order of temporal priority.

The power of Minkowski diagrams is in their ability to represent more than one frame of reference. A second set of axes (grey lines), deviating from the first by a slope equal to the relative velocity *v* divided by *c*, allows us to construct an alternative set of coordinates (*x', ct'*) for all other events (e.g., point B), depicted by a parallelogram. The spacetime “distance,” depicted as the length of the interval between event points (dark red diagonal line AB), is invariant. This enables us to calibrate the scales of the two reference frames, using the hyperbola *x*^{2} - *ct*^{2} = 1. The unit scales of the second (grey) set of axes are *larger* than those of the (black) perpendicular axes in the diagram.
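That the calibrating hyperbola is frame-invariant can be checked directly: a Lorentz boost carries each of its points to another point on the same curve (an illustrative sketch, in units where *c* = 1):

```python
import math

def boost(x, t, v):
    """Lorentz boost of (x, ct) coordinates, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v**2)
    return g * (x - v * t), g * (t - v * x)

# A point on the unit hyperbola x^2 - (ct)^2 = 1, parametrized by
# hyperbolic functions (cosh^2 - sinh^2 = 1).
x, t = math.cosh(0.7), math.sinh(0.7)
assert abs(x**2 - t**2 - 1.0) < 1e-12

# After a boost, the image still lies on the same hyperbola, which is
# why the curve can calibrate unit lengths across reference frames.
xp, tp = boost(x, t, 0.5)
assert abs(xp**2 - tp**2 - 1.0) < 1e-9
```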

In each reference frame, a line parallel to the spatial axis is a line of simultaneity, in which all events are defined to occur at the same time. A line parallel to the time axis represents the same point in space in the frame. If we extrapolate both kinds of lines from the same event in two reference frames (second diagram), we find—after adjusting for scale, so visual inspection does not suffice—that the time elapsed and spatial distance between events differs by reference frame. The only invariant quantity is the spacetime interval “distance” (AB).

An interval within the light cone is said to be timelike, since the two events have an unambiguous order of temporal succession, making possible a causal relationship. Intervals outside the light cone are called spacelike. The termini of such intervals cannot be related in direct temporal succession, so they must be separated primarily by space. The boundaries of the light cone are the same in all inertial reference frames, due to the invariance of light speed. This means that the classification of a spacetime interval as timelike or spacelike (with respect to a given origin A) is the same in all frames.
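The threefold classification by the sign of the interval can be sketched as follows (using the real-valued convention *s*^{2} = *x*^{2} - (*ct*)^{2} from above, in units where *c* = 1):

```python
def classify(dx, dt, c=1.0):
    """Classify a spacetime interval by the sign of s^2 = dx^2 - (c dt)^2."""
    s2 = dx**2 - (c * dt)**2
    if s2 < 0:
        return "timelike"   # inside the light cone: causal link possible
    if s2 > 0:
        return "spacelike"  # outside the light cone: no causal link
    return "null"           # on the light cone: a light signal's path

assert classify(1.0, 2.0) == "timelike"
assert classify(2.0, 1.0) == "spacelike"
assert classify(1.0, 1.0) == "null"
```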

Although Minkowski diagrams provide a convenient shorthand for representing possible causal relationships, we should keep in mind that causality and spatiotemporality remain distinct physical concepts. Two events may be connected by a timelike interval without having an actual causal relationship, and it is possible for a cause and effect to have the same spatiotemporal point.

Another danger of such diagrams is that they may give the impression that it is possible to define at once a universal frame of reference across all space and time. In physical reality, reference frames are defined locally, and then extrapolated globally to other events only by taking measurements of signals using Einstein’s convention for synchronization.

Events separated by spacelike intervals have no unique order of temporal priority. If we travel to Alpha Centauri, we can certainly know that our arrival there (B) takes place after our departure from Earth (A), but it does not follow that we can say anything about the relative priority of our departure (A) and an event (C) on Alpha Centauri that occurred long before our arrival (B) in its rest frame. As shown in the third diagram, C may occur after A in the Earth’s rest frame, while in another equally valid frame (*x', ct'*), C occurs before A. This relativity of temporal priority need not trouble the physicist, as it results in no causal paradoxes, since it is impossible for there to be any causally linked chain of physical events between A and C.

Even if we assume that all events have a common causal root in the origin of the cosmos, making time numerically one, absolute simultaneity ceases to be preserved once things move out in different directions. We may think of each galaxy or each star as having its own parallel timeline with its own internal timescale. Yet relativity of simultaneity applies even on continuously smaller scales, though it is only noticeable at large distances. This means that we should view cosmic time not as a discrete number of timelines, but as a fluid time-stream spreading in different directions, much as when we pour water onto a table, making a puddle that flows outward in all directions at various speeds. This analogy is intended to show the continuity of variation, not to suggest that time can be viewed all at once, with unequivocal rates at each location.

Although we can define a “proper time” for each rest frame, this should not be taken as an absolute time. After all, if some time *t* seems to intervene between events A and B in our rest frame, it will seem to take a longer time γ*t* from another frame, yet in that second frame, a physically similar movement bounded by events C and D would be separated by time *t*, which in the first frame would seem to take γ*t*. In short, in the first frame *t*_{D} - *t*_{C} > *t*_{B} - *t*_{A}, while in the second frame *t*_{B} - *t*_{A} > *t*_{D} - *t*_{C}. There is no single right answer to how long the physical process defined by AB or CD takes. “Proper time” gives us a minimum duration, showing that the passage of time is physically real, but does not justify ascribing a single objective quantitative measure to time, as that measure depends on relative velocity.

Proper length is defined as the length of an object in its rest frame. In any other frame it will have a shorter length, since the measurement of its length involves some lapse in time. This means it is no longer a purely spatial measurement, as it has some of the time dimension involved. Likewise, nonproper time measurements require us to take distance (traversed by signals) into account, so they have a spacelike aspect. Proper length is a maximum, and nonproper length has no minimum, giving the impression that things seem spatially distributed only to the extent that they may be treated as at rest. Since rest is not absolute, neither is the magnitude of length. Consider two parallel rods, AB and CD, in relative motion along their length. While AB seems longer than CD in AB’s rest frame, CD will seem longer than AB in CD’s rest frame.
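The contraction factor and the maximality of proper length can be checked numerically (an illustrative sketch):

```python
import math

def contracted_length(proper_length, v_over_c):
    """Length of a rod measured in a frame where it moves at v."""
    return proper_length * math.sqrt(1.0 - v_over_c**2)

# Two rods of equal proper length, in relative motion at 0.6c:
# each appears contracted (to 80% here) in the other's rest frame.
L0 = 1.0
assert abs(contracted_length(L0, 0.6) - 0.8) < 1e-12

# Proper length is a maximum: any nonzero relative speed shortens it.
assert contracted_length(L0, 0.3) < L0
```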

These considerations make it impossible to conceive spacetime merely as an augmented space, with definite lengths and durations. We must take into account the metric interdependence of space and time, which is a mathematical consequence of the velocity-dependent Lorentz transformation. Although Minkowski diagrams may give the misleading impression that spacetime is something we can visualize all at once, spread out in all directions, we must keep in mind that they use local coordinates defined with respect to a particular event.

Spacetime diagrams enable us to define a “world-line,” which is a curve consisting of all the spacetime loci of an object throughout its history. World-lines are timelike curves, and all timelike curves can, at least in principle, be the world-line of some object. The time axis in a Minkowski diagram may be regarded as the world-line of an object at rest in a given frame. The length of the world-line is the proper time in that frame.

If we follow a moving object along a curved world-line, the slope of its rest frame’s time axis, tangent to the world-line, changes as the object changes velocity. Reserving treatment of acceleration to general relativity, we may at least construct a succession of rest frames along the object’s world-line.[2] The variation in slope of the time and space axes makes clear that there is no absolutely pure time interval or space interval, but these depend on choice of reference frame. Still, the invariance of light speed guarantees that the light cone boundary of *x* = *ct* will distinguish spacelike and timelike intervals absolutely.

The above considerations may lead us to think that there is no real distinction between space and time, but rather this depends on perspective, in much the same way that electricity and magnetism are interrelated. As discussed, however, this relativistic interdependence pertains only to the *magnitudes* of space and time, and does not abolish the real physical and ontological differences between spatial and temporal dimensions, especially as they pertain to causality.

The relativity of spatial and temporal magnitude derives from the relativity of motion. If the fact of whether something is in motion or at rest depends on reference frame, it is perhaps not too surprising that the perception of time duration should also be perspective-dependent. Time is the measure of motion, so in reference frames where no motion is perceived, neither can we perceive time. Again, a relationist account of time is upheld. Length is also relative, insofar as measurements at two ends of an object require some lapse in time.

Photons exist along the light cone in “null intervals” (where *s*^{2} = 0), which are neither timelike nor spacelike. It is accordingly said that photons are timeless; i.e., that no time elapses for them. They are intrinsically unchanging, yet their existence is bounded by the events of emission and absorption. Thus their mode of temporal existence is akin to what medieval philosophers called aeviternity, which is duration without succession. (See Sec. 13.5.)

Less mentioned but no less true is the fact that photons are spaceless, i.e., they have no change in place. This would mean light somehow has location without succession, not being first here, then there, etc., but occupying its whole path at once. In the geometry of Minkowski spacetime, travel at light speed is effectively instantaneous. This agrees with classical intuitions about optics, except this no longer entails the paradox of infinite speed.

These bizarre results arise from the incoherence of specifying a “rest frame” for a photon, which is postulated to move at speed *c* in every inertial frame. Thus a photon is never at rest in any inertial frame. If we naïvely defined such a rest frame, the Lorentz factor γ would equal 1/0, so all space and time intervals would be undefined there. This does not prove anything, but at least suggests that the photon in some sense transcends space and time as we experience them.
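The breakdown of γ at *v* = *c* is easy to exhibit (an illustrative sketch):

```python
import math

def gamma(v_over_c):
    """Lorentz factor; undefined at v = c."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# gamma grows without bound as v approaches c, and at v = c the
# formula requires division by zero: no photon rest frame exists.
assert gamma(0.9) < gamma(0.99) < gamma(0.999)
try:
    gamma(1.0)
    raise AssertionError("expected failure at v = c")
except ZeroDivisionError:
    pass
```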

The invariance of *c* ensures that the laws of physics have the same form in every inertial frame. The only way we can know a frame is inertial is if at least one object in it is *at rest*. The supposed photon rest frame fails this criterion, leading to confused results. Still, states of rest in general retain physical significance, even though they cannot be defined absolutely.

While the treatment of space and time as a single four-dimensional entity is not necessary to explain special relativity, it is practically indispensable to general relativity as formulated by Einstein in 1915. In this theory, spacetime is conceived as a four-dimensional manifold with a metric that is non-Euclidean even in its spatial components for any cosmos with a non-zero mass-energy distribution. The geodesics or straightest possible contours on this curved manifold correspond to the trajectories of gravitationally falling bodies. In other words, the phenomenon we observe as gravitational force is really a feature of the curvature of spacetime. Explained physically, the presence of mass bends or curves spacetime, and this curvature in turn affects the trajectories of massive bodies and radiation.

The motivation for this bizarre way of looking at spacetime, which seems to treat it as a pliable substance, is philosophically similar to that of special relativity. Space and time, for Einstein, can be defined only by measurements from the perspective of some observer. In special relativity, that meant choosing a frame where some observer is at rest. Observations in any such frame always agree with the same force laws of electrodynamics, which can be true only if the inertial motion of the observer is presumed to be independent of electromagnetic force interactions. With gravitation, however, the story is different. No observer with a definable rest frame (which excludes photons) is unaffected by gravity. Instead of a transformation that compares two frames differing only by some relative velocity *v*, as though one frame were at rest and the other in constant linear motion, our comoving observers must at least be subject to freefall gravitational acceleration. By treating such freefall trajectories as geodesics of spacetime, one can give a mathematically elegant account of gravitation that preserves the equivalence principle (i.e., that all bodies fall the same way in a gravitational field).

Einstein’s choice to incorporate gravitational effects into the spacetime metric itself was not the only logical possibility, but it is best suited to his empirical, perspectivist account of space and time. We could postulate, on the contrary, that general relativity is just a mathematically convenient way of dealing with gravity, while physical space and time are not really curved. Einstein might retort that it is senseless to speak of a physically “real” flat spacetime that can never be observed or experienced as such. As we have admitted at the outset, physics deals with the sensible world, so any supposed feature of spacetime that cannot be observed even in principle should be considered outside the domain of physics.

While allowing the theoretical possibility that metaphysical space and time could differ in structure from the metrics we construct in general relativity, we confine ourselves to the latter as being exclusively relevant to the *physical* aspects of space and time. If we are going to take this position, however, it is incumbent upon us to give a coherent *physical* account of “curved” spacetime, not merely a formal mathematical description.

Einstein’s physical account of general relativity was based on the oft-confirmed experimental observation that the ratio of inertial mass to gravitational mass is the same for all bodies in all conditions. From this follows an equivalence principle whereby the gravitational acceleration of a body is indistinguishable from the acceleration of its reference frame. Einstein posited that inertial mass and gravitational mass are one and the same thing, and thus gravitation holds the function once attributed to inertia, i.e., resistance to acceleration. This requires us to reconceive gravity, viewing it not as an accelerating force, but as defining the “straight-line” paths or geodesics for bodies. In this view, there is no force of gravity pulling me toward the ground, only the normal force of the ground surface opposing my “inertial” tendency to freefall. I use the term ‘inertial’ equivocally here, to show that gravitation is ascribed the dynamical function once attributed to inertia. (Classical inertial straight-line paths are recovered in the limit as gravitational effects approach zero.)

To understand how the phenomenon of spacetime curvature is accomplished physically, it is necessary to discard the notion that there is such a thing as “gravity” as an independent physical existent. Rather, there is only the dynamic mass-energy distribution of the universe, from which we can construct a stress-energy-momentum tensor *T _{ab}*, which has a distinct value at each point in spacetime. Since spacetime approximates Minkowski spacetime on sufficiently small scales, we may define at each point

R_{ab} - (1/2)R g_{ab} = (8πG/c^{4})T_{ab}

This equation may be evaluated for each of the sixteen components, but due to the symmetry of the tensors, only ten components have independent values, yielding ten equations of this form. *G* is the Newtonian gravitational constant, and *c* is the speed of light. *R _{ab}* is the Ricci tensor (a contraction of the four-index pseudo-Riemannian curvature tensor), which measures the aspects of curvature that affect volume, contracting or expanding each dimension, so that the geodesics converge or diverge.
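The count of ten independent components follows directly from the symmetry of the tensors; a minimal sketch (my own illustration, not from the text):

```python
# A symmetric rank-2 tensor T_ab = T_ba in n dimensions has only the
# entries with a <= b free; the rest are fixed by symmetry.
def independent_components(n):
    return n * (n + 1) // 2

# In 4-dimensional spacetime the field equations relate 4 x 4 = 16
# components, but symmetry leaves only 10 independent equations.
print(independent_components(4))  # -> 10
```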

This leaves *g _{ab}*, which is the pseudo-Riemannian spacetime “metric” tensor, a generalization of the Minkowski pseudometric that allows for non-Euclidean curvature. As the Minkowski pseudometric was for special relativity, so is

ds^{2} = g_{ab}dx^{a}dx^{b} (summed over a, b = 1 to 4)

This function is in differential form, showing the curvilinear shape of the metric in the vicinity of some point. We cannot extrapolate this metric to macroscopic paths, since the curvature of spacetime may vary from point to point. Further, when we allow for curvature, it can no longer be assumed that there is a single interval between two points with a unique “distance” *s*^{2}, as in Minkowski spacetime. The distance between two points will depend on the choice of path, though the distance along each path is a coordinate-invariant quantity.
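Path-dependence of distance is easy to exhibit on a familiar curved 2-manifold, the unit sphere with metric ds^{2} = dθ^{2} + sin^{2}θ dφ^{2}. The sketch below (an illustration of the general point, not an example from the text) integrates ds along two different curves joining the same pair of points and gets two different lengths:

```python
import math

# Unit sphere, metric ds^2 = dtheta^2 + sin^2(theta) dphi^2.
# Two points at colatitude theta0 = pi/4, separated in longitude by dphi = pi.
theta0 = math.pi / 4

# Path 1: along the circle of constant colatitude (dtheta = 0):
# ds = sin(theta0) dphi, so the length integrates to sin(theta0) * pi.
n = 100000
path1 = sum(math.sin(theta0) * (math.pi / n) for _ in range(n))

# Path 2: up one meridian to the pole and down another (dphi = 0 on each leg):
# ds = dtheta, so the length is theta0 + theta0 = pi/2.
path2 = 2 * theta0

# Same endpoints, different path lengths: "distance" is path-dependent.
print(round(path1, 4), round(path2, 4))  # -> 2.2214 1.5708
```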

Spacetime can no longer be represented as a vector space once we allow curvature, since there is no uniquely definable vector addition on curved manifolds. It is possible, nonetheless, to define directional derivatives, i.e., the rate of change of a vector function in a given direction on the manifold. When a manifold has a metric or pseudo-metric structure (or equivalently, a linear connection), we can define a generalized directional derivative known as a covariant derivative, which is independent of coordinate system. This will allow us to define a generalized concept of “parallel” that depends only on the intrinsic structure of the manifold and functions upon it, without treating it as embedded in some higher-dimensional Euclidean space.

To understand what is meant by “parallel transport” of a vector, we must first keep in mind that a vector function defined at each point on a curved manifold cannot be simply correlated with each point’s coordinates. It is a quantity in addition to the position data. We may consider a vector function to be “tangent” to the manifold, first in the visualizable sense of an arrow tangent to a curved surface embedded in Euclidean space, then more generally in terms of the directional derivative defined at a point on the manifold.

Parallel transport of a vector (of constant length) along a curve within a manifold is accomplished by keeping the covariant derivative zero. The vector remains tangent to the manifold, and keeps the same orientation with respect to the curve. With a curved manifold, however, it is not possible to use parallel transport to define a universal standard of parallelism. The final orientation of the transported vector depends on the choice of curved path, and so may have a different orientation after being transported on a closed loop.
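The path-dependence of parallel transport can be checked numerically. The sketch below (my own illustration on the unit sphere rather than spacetime) integrates the transport equation dv^{a}/dt + Γ^{a}_{bc} v^{b} dx^{c}/dt = 0 around a closed circle of colatitude θ_{0}; the vector returns rotated by 2π cos θ_{0} in the orthonormal frame, even though it was kept "parallel" at every step:

```python
import math

# Parallel transport on the unit sphere, metric ds^2 = dtheta^2 + sin^2(theta) dphi^2,
# around the closed loop theta = theta0, phi: 0 -> 2*pi.
# Nonzero Christoffel symbols: Gamma^theta_{phi phi} = -sin(theta)cos(theta),
# Gamma^phi_{theta phi} = cot(theta).
theta0 = math.pi / 3                    # colatitude 60 degrees, cos(theta0) = 1/2
sin0, cos0 = math.sin(theta0), math.cos(theta0)

v_theta, v_phi = 1.0, 0.0               # initial tangent vector (coordinate components)
steps = 200000
dphi = 2 * math.pi / steps
for _ in range(steps):
    # keep the covariant derivative zero: dv^a/dphi = -Gamma^a_{b phi} v^b
    dv_theta = sin0 * cos0 * v_phi * dphi
    dv_phi = -(cos0 / sin0) * v_theta * dphi
    v_theta += dv_theta
    v_phi += dv_phi

# In the orthonormal frame (v^theta, sin(theta0)*v^phi) the vector has been
# rotated by 2*pi*cos(theta0) = pi radians: it returns pointing the opposite way.
print(v_theta, sin0 * v_phi)
```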

4-velocity is a four-dimensional tangent vector on a world-line. When it is parallel transported, it may be considered as simply following the contour of spacetime, and its rate of change or 4-acceleration is normal to spacetime. In other words, it is not accelerating within spacetime. The covariant derivative measures the degree to which the vector deviates from what it would be if it were kept tangent to the curved world-line. This deviation may be thought of as proper acceleration, i.e. a change of velocity not attributable to the geodesic. Travel along the geodesic is gravitational acceleration (“inertial” freefall).

Interpreted physically, the distribution of mass-energy affects the dynamical tendency of a body, i.e., what its minimal trajectory would be. This would mean that the general distribution of mass in the universe affects the “inertial” dynamics of a particular body, roughly akin to “Mach’s principle,” which to Einstein meant “[Ernst] Mach’s requirement that the inertia of a mass must be traced back to the interaction of the body.” [A. Einstein. “Prinzipielles zur allgemeinen Relativitätstheorie;” *Annalen der Physik* (1918), 55, 16. Orig. in *Ann. Phys.* (1918), 53, 130.]

An alternative formulation would be that the mass-energy distribution deforms “spacetime,” as though the latter were a four-dimensional substantial entity, and bodies move along its geodesics. Yet the field equations do not require us to suppose that there is a substantial “spacetime” manifold that is independent of bodies; in fact, they cannot even uniquely specify such a manifold. Nonetheless, Einstein did consider the spacetime *metric* as a physical property that is distinct from the mass-energy distribution (as shown by distinct terms in the field equations), so that there is an action-reaction dynamic between mass-energy and the spacetime metric. Yet even this interpretation is not strictly required for a coherent physical account of general relativity. The invariance of the spacetime metric and the universal applicability of the field equations could be understood in terms of all bodies responding in the same dynamical way to the mass-energy distribution of the universe, with the spacetime metric merely being a measure of this dynamic. The spacetime metric defines freefall geodesics, i.e., a generalized “inertial” motion that is explained by body interaction, as in Mach’s principle. Thus there is no strict need so far to posit spacetime as something ontologically independent of mass-energy, though it is distinct from the latter.

Regardless of which of these interpretations we choose, the introduction of curvature into the spacetime metric has important physical implications. Once we have curvature, it is no longer generally possible to construct global inertial frames by a synchronization scheme, as in special relativity. Any synchronization system would now become path-dependent, since spacetime curvature may vary by direction. This breakdown is expressed mathematically by the fact that spacetime can no longer be represented as a vector space. Physically, this means that a given observer cannot define a unique value of the 4-momentum (composed of 3-momentum and *E/c*) of a distant object, since that depends on the signal paths of our successive measurements.

Since the spacetime metric depends on the presence or absence of mass, solutions to the Einstein field equations can have radically different forms depending on mass-energy distributions. In “vacuum” regions that have no mass-energy, the stress-energy tensor vanishes, allowing solutions called Einstein metrics, which have constant 3-dimensional spatial curvature everywhere, independent of the time dimension. Such solutions naturally include Minkowski spacetime, but also allow uniform spherical or hyperbolic curvature. The latter two forms of curvature formally cause the volume of a body to deviate from its Euclidean analogue, and the magnitude of this deviation for a sphere is measured by the Ricci curvature tensor, which in the case of constant curvature is simply a scalar multiplied by the spacetime metric. Even in vacuum solutions where the Ricci tensor is zero (as when the cosmological constant or vacuum energy density is zero), there may still be Weyl curvature (deformation of shape without change in volume), representing the propagation of gravitational tidal forces.

The admission of curvature in vacuum regions does not contradict the supposition that spacetime curvature is ontologically dependent on matter. What we are calling a vacuum is really just a region where there are no gravitational sources, not where there are no gravitational effects. Any physical field theory that allows for interaction between spatially separated sources will have non-trivial vacuum solutions. Fields themselves are treated as physical entities that can interact with each other, generating effects far removed from their sources. With classical fields, we can use scalar potentials and gradients to describe vacuum conditions, but in general relativity, it is necessary to use tensors and covariant derivatives to account for the curvature of the spacetime metric.

Another important solution to the Einstein field equations is the so-called Schwarzschild solution,[3] which gives the metric of spacetime outside a non-rotating spherical mass (assuming a cosmological constant of zero). This is a useful approximation of the solar system, since the sun’s rotational effect is negligible at planetary distances, and the planets are relatively small enough to be treated as point masses. Geodesics in the Schwarzschild metric are the world-lines of planets moving in their orbits, which include a precession not predicted classically.
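That precession can be checked against the standard leading-order formula for the Schwarzschild metric, Δφ = 6πGM/(c^{2}a(1 − e^{2})) per orbit (the formula and the orbital values for Mercury are standard references, not taken from the text):

```python
import math

# Leading-order perihelion advance per orbit in the Schwarzschild metric:
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_sun = 1.327e20        # m^3/s^2, standard gravitational parameter of the Sun
c = 2.998e8              # m/s, speed of light
a = 5.791e10             # m, semi-major axis of Mercury's orbit
e = 0.2056               # orbital eccentricity of Mercury
period_days = 87.97

per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec_per_century, 1))  # roughly 43 arcseconds per century
```

This recovers the famous ~43″/century anomaly in Mercury's orbit that Newtonian gravity left unexplained.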

In any non-trivial solution to general relativity, the spacetime “distance” between two events depends on the choice of path, but this distance has a unique, invariant value for each path. As in special relativity, we can categorize these paths as timelike (having negative “distance squared” *s*^{2}) or spacelike (positive *s*^{2}). Timelike curves are still called world-lines, since they can represent the lifespans of real physical objects. Geodesics in spacetime are invariably timelike or null (zero distance), the latter being the paths followed by light.

The introduction of curvature allows at least the theoretical possibility of closed timelike curves, where a world-line returns to its starting point and continues in an endless causal loop. This may exist without paradox only if it is not causally connected to the outside world. Although such a loop may be internally self-consistent without any grandfather paradox, this would not make it self-explanatory, as you would still have to account for why the loop as a whole exists. Likewise, if a cause is simultaneous with its effect, this is not the same as self-causation. Such confusion arises when we mistake spatiotemporal points for events, and too strongly identify spatiotemporal relations with causal relations.

Spacetime curvature and path-dependence seem to fly in the face of the post-Copernican assumption that there are no physically preferred places. Accordingly, we need to clarify the concepts of spatial homogeneity and isotropy in the context of general relativity. First, we should emphasize that these are assumptions, not definitively demonstrable, though they have been validated to excellent approximation from a terrestrial perspective. Second, general relativity does not require homogeneity or isotropy, but these apply only to certain classes of solutions (exact or approximate).

As a preliminary, we need the concept of an isometry, which is a symmetry transformation of the spacetime metric, i.e., a diffeomorphism (necessarily bijective) that maps the spacetime metric tensor onto itself. This allows one to define homogeneity precisely as follows: “A spacetime is said to be (spatially) homogeneous if there exists a one-parameter family of hypersurfaces Σ_{t} foliating the spacetime such that for each *t* and for any points *p*, *q* ∈ Σ_{t} there exists an isometry of the spacetime metric, *g _{ab}*, which takes *p* into *q*.” [R.M. Wald, *General Relativity* (Univ. of Chicago Press, 1984), p. 92.]

In other words, we can take a timelike curve or world-line and construct a series of 3-dimensional hypersurfaces normal to the world-line at each of its points, and there is a function that can map any point on such a hypersurface to any other point on that hypersurface *while preserving the distance relation for all pairs of points*. Simple examples of such functions include rotations and reflections. In general, the existence of such an isometry indicates that the spacelike aspects of spacetime are geometrically uniform. This is sometimes described by saying that space “looks the same” everywhere, though we are not referring to what is directly observable, only to the spacetime metric. Homogeneity so defined implies that space has constant curvature everywhere (at a given proper time), and that no region is more contracted or expanded than another.
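A rotation of the Euclidean plane is the simplest model of such a distance-preserving map; a quick check (illustrative only, in flat space rather than a curved hypersurface) that every pairwise distance survives the transformation:

```python
import math

def rotate(p, angle):
    """Rotate a point of the plane about the origin."""
    x, y = p
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

points = [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]
rotated = [rotate(p, 0.7) for p in points]

# The map is an isometry: the distance relation is preserved for all pairs.
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        assert abs(dist(points[i], points[j]) - dist(rotated[i], rotated[j])) < 1e-12
print("all pairwise distances preserved")
```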

The related notion of isotropy, where space “looks the same” in any *direction* from a given point, may be formally defined as follows:

A spacetime is said to be (spatially) isotropic at each point if there exists a congruence of timelike curves (i.e., observers), with tangents denoted *u*^{a}, filling the spacetime and satisfying the following property. Given any point *p* and any two unit “spatial” tangent vectors *s*^{a}_{1} and *s*^{a}_{2} ∈ V_{p} (i.e., vectors at *p* orthogonal to *u*^{a}), there exists an isometry of *g _{ab}* which leaves *p* and *u*^{a} at *p* fixed but rotates *s*^{a}_{1} into *s*^{a}_{2}. [Ibid., p. 93.]

More succinctly, any “spacelike” unit vector perpendicular to a world-line at point *p* can be mapped to any other such unit vector by an isometry of the spacetime metric. This means that all directions “look the same” in terms of spacetime curvature. This implies constant curvature with respect to *p*. Unlike the case of homogeneity, a spacetime that is isotropic with respect to *p* might be stretched or contracted to different degrees at different spatial distances from *p*, but this must be radially uniform in all spatial directions from *p*.

If space is isotropic from the perspective of three different points, it follows that it is isotropic from any point. If the spacetime metric is non-spherical (i.e., flat or hyperbolic, due to the constant curvature requirement), then only two points are necessary to establish universal isotropy. Isotropy at every point logically implies homogeneity, but the reverse is not true. We could have a universe that is homogeneous but anisotropic.

The universe could be homogeneous *sensu stricto* only if mass were evenly distributed throughout the universe, since only this would give constant curvature in every locality. When cosmologists speak of the homogeneity of space, they mean only *cosmic-scale* homogeneity, since it is obvious that space is inhomogeneous (and therefore anisotropic) on smaller scales, giving us the gravitational effects of structures from planetoids to galaxy clusters.

Local inhomogeneity and anisotropy (such as that of the solar system) resemble the Aristotelian notion that bodies fall to a preferred place, but with important distinctions. First, the center of the Earth is not the only such place, but one of many. Second, it is not place as such, abstracted from matter, that establishes non-homogeneity, but rather the presence or absence of matter defines the varying geometry of spacetime.

Most cosmologists believe that the universe is very nearly homogeneous on the largest scale (the Hubble horizon), and so slightly curved that the radius of curvature is well over 100 billion light years. If this curvature is negative (hyperbolic) or zero (flat), then the universe would be spatially unbounded. As discussed previously, it is not problematic to regard space as infinite as long as we regard this as a *potential* infinity, with the cosmos being spatially finite at any given time. An unbounded universe with non-zero (negative) curvature, however, seems problematic, if we regard that curvature as existing even at infinite distances removed from any matter to account for it. We should keep in mind that we cannot really say anything physical about space beyond where matter has yet traveled, and thus it may be said that the curvature of spacetime extends infinitely only in potentiality.

If the curvature is positive (spherical), spacetime is finite in extent. This does not necessarily imply that time is finite, since we could have world-lines that are infinitely recurring loops or asymptotically slowing down in time. World-lines in spherical spacetime need not terminate like lines of longitude, which belong to but one of many possible coordinate systems.

The universe is believed to be isotropic to good approximation on a Hubble scale from the Earth’s vicinity, based on the uniformity of the cosmic microwave background (CMB) and observed galactic evolution in all directions. Since we are only concerned with large-scale isotropy, we would need to take measurements from a point far distant from our galaxy to confirm that this isotropy is universal. Failing that, cosmologists merely assume that the Earth is not in a geometrically special location, as seems consistent with its unremarkable position within the galaxy and its cluster. Still, there is some non-negligible hemispheric anisotropy.

The apparent facts of cosmic-scale homogeneity and isotropy are generally interpreted to mean that local inhomogeneities and anisotropies arose after the origin of the universe. In other words, the spacetime manifold of itself is homogeneous and isotropic by nature, but only post-origin fluctuations (e.g., random quantum events) introduced irregularities. If we take the view that spacetime is ontologically dependent on physical substance, this means that the substance of the universe was initially homogeneous.

Einstein recognized that physical laws would remain coordinate-invariant even if an arbitrary constant were added to spacetime curvature everywhere, so another term can be added to the field equations:

R_{ab} - (1/2)R g_{ab} + Λg_{ab} = (8πG/c^{4})T_{ab}

Einstein’s motivation for including a “cosmological constant” Λ was to make possible a steady-state universe, in which homogeneous, isotropic space with positive curvature and a specific value of Λ = 4π*G*ρ (where ρ is mass-energy density) maintains a constant spatial metric over the unbounded (potentially infinite) proper time of isotropic observers, which are thus always the same spatial distance apart.[4] This agreed with astronomical observations showing that stars in the galaxy maintain constant distances from each other.

The introduction of a non-zero cosmological constant, however, would mean that spacetime has some constant curvature by default, distinct from the curvature contributed by matter. From the magnitude of this uniform curvature, one could compute an effective energy density of the vacuum, which can take on negative values when Λ is negative. This need not mean that the vacuum really has positive or negative energy, only that spacetime is curved as though it did. Further, it is not necessary to posit that spacetime itself is the source of the cosmological constant’s contribution to the metric.

Willem de Sitter (1872-1934) showed that an empty universe (where the stress-energy tensor vanishes) with a positive cosmological constant would give the cosmos a 4-dimensional positive constant curvature. This non-trivial cosmic curvature, while mathematically permitted by the field equations, lacks a definite physical explanation. If it exists, it would be a cosmic feature that is independent of mass-energy distribution. The early universe is now believed to have been a de Sitter universe to good approximation, as Λ then dominated over the matter contribution. Yet the value of Λ calculated from modern observations is much too small to give us a steady-state universe, having an upper bound of about 10^{-52} m^{-2} (or 10^{-122} in Planck units).
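The conversion from Λ to an effective vacuum mass density uses the standard relation ρ_{Λ} = Λc^{2}/(8πG) (not given in the text); a sketch, taking Λ at the order of the observational upper bound just quoted:

```python
import math

# Effective vacuum mass density corresponding to a cosmological constant:
# rho_Lambda = Lambda * c^2 / (8 * pi * G)
G = 6.674e-11        # m^3 kg^-1 s^-2, Newtonian gravitational constant
c = 2.998e8          # m/s, speed of light
Lam = 1e-52          # m^-2, order of the observational upper bound on Lambda

rho = Lam * c**2 / (8 * math.pi * G)
print(f"{rho:.1e} kg/m^3")  # a few times 10^-27 kg/m^3
```

That density is a few hydrogen atoms' worth of mass per cubic meter, which is why Λ is utterly negligible on solar-system scales yet decisive cosmologically.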

If the cosmological constant is zero or negligible, spatially homogeneous and isotropic solutions to the Einstein field equations will have the metric scale of space expand over time, assuming that the relative velocities of matter are small compared to *c*. Such solutions have the metric form:

ds^{2}= -dt^{2}+a^{2}(t)dΣ^{2}

These solutions, now known as Friedmann-Robertson-Walker (FRW) metrics, can be modeled with a convenient choice of global coordinates, made possible by the assumption of spatial homogeneity or constant curvature. As discussed previously, we can foliate such a spacetime with three-dimensional spatial hypersurfaces (Σ_{t}) that are normal to the world-lines of isotropic observers (which define proper time). Such hypersurfaces may have positive, negative or flat curvature.

There is no general analytic solution for the scale function *a*(*t*), proportionate to the universe’s spatial radius of curvature, but it can be approximated numerically from observed galactic redshifts. In 1929, Edwin Hubble (1889-1953) showed that redshifts were proportionate to the distances of galaxies as estimated from their luminosity. From these redshifts, one could calculate an “apparent velocity” on the assumption that this was caused by recession of the source, as in the Doppler effect. Georges Lemaître (1894-1966) had realized in 1927 that this linear relationship between apparent recessional velocity and distance is exactly what is predicted by FRW solutions with an expanding metric.

When we say space expands, it is really *ds*, the pseudometric distance, that increases in scale, though this increase is driven solely by the spatial component of the metric. Four-dimensional “distance” is an invariant quantity, yet space and time remain frame-dependent, so we cannot simply say that 3-space has expanded so many light-years over time unless we specify a reference frame. When cosmologists say that space is expanding 74 ± 3 km/sec/megaparsec, they are selecting a particular class of coordinate systems, namely using the proper time of a so-called co-moving or isotropic observer.
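The quoted expansion rate can be put in more familiar units; a sketch (standard unit conversions assumed, not part of the text):

```python
# Convert the Hubble rate from astronomers' units to SI, and take its
# reciprocal, the "Hubble time," which sets the characteristic timescale
# of the expansion.
H0_km_s_Mpc = 74.0                     # km/s per megaparsec
m_per_Mpc = 3.0857e22                  # meters per megaparsec
H0_si = H0_km_s_Mpc * 1000 / m_per_Mpc # s^-1: fractional expansion per second

s_per_Gyr = 3.156e16                   # seconds per billion years
hubble_time_Gyr = 1 / H0_si / s_per_Gyr
print(round(hubble_time_Gyr, 1))       # about 13 billion years
```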

It should be emphasized that we have not abandoned the relativity of simultaneity, for we are choosing a particular reference frame:

…in our neighborhood, there exists a preferred choice of time, whose hypersurfaces are homogeneous and isotropic, and with respect to which [the metric expansion v = Hd] is valid in the local inertial frame of *any* observer who is at rest with respect to these hypersurfaces at *any* location… [Bernard Schutz, *A First Course in General Relativity*, 2nd ed. (Cambridge Univ. Press, 2009), p. 339.]

Only so-called comoving observers, who have the same freefall motion as the Hubble flow (the expansion of space) and no other motion, find space to be isotropic. Conveniently, the cosmic microwave background (CMB) provides a point of reference for the Hubble flow, so anyone comoving with the CMB (i.e., to whom the CMB seems basically the same in all directions) qualifies as a comoving observer or “isotropic observer.” Such observers are not in a rest frame as in special relativity, for they are in freefall acceleration, following geodesics of spacetime. We are not comoving observers, since our planet, sun, and galaxy each have their own “peculiar motion” with respect to that of the CMB. Nonetheless, these additional motions are measurable, so we can account for them.

By the *assumption* of spatial homogeneity, all isotropic observers agree on the time difference between hypersurfaces, so the proper time scale is the same for each observer’s world-line. The proper times of each observer’s frame could be synchronized based on measurement of the density of the CMB. This apparent re-establishment of universal time does not abolish the relativity of simultaneity, since this coordinate system is not physically privileged over any other. Although all comoving observers may agree on how much time elapses between changes in CMB density, the same paradoxes of simultaneity would hold between spacelike-separated events. Nonetheless, the possibility of defining a cosmological time implies the numerical unity of time.

Time is numerically one if there is a cosmological time, but we do not know for certain if all the world-lines of isotropic observers originated at a single point. It is possible that they converge asymptotically at the “big bang” so that there is no definite first moment. In any case, quantum effects predominate at this scale, likely obscuring the order of causality, but that is beyond our present scope of discussion.

The metric expansion of space means that the spatial distance between comoving observers increases over their proper time. This distance is called “proper distance,” since it is defined in the 3-surface orthogonal to proper time. Proper distance is measured inferentially, since there can be no instantaneous signal transmission along a spacelike surface. It is true that over time there is “more space” between the two observers, in the sense that the distance is greater. This need not mean, however, that space itself has expanded or more space has been created in between the two observers. All we really know is that the geodesics of the comoving observers diverge spatially from each other.

Once we admit that spacetime can have curvature, there is no obstacle to admitting “expansion” with respect to a particular coordinate system. Curvilinear coordinates have scale factors reflecting the fact that the magnitude of each coordinate varies at different rates at a given point. We can visualize this easily with polar coordinates, where the distance along a surface in the θ direction increases in proportion to the radial coordinate *r*. Distance increases with the radius of curvature.

As Aleksandr Friedmann had shown in 1922, the radius of curvature of a homogeneous 3-space (and associated scale factor) may vary with respect to the proper time of isotropic observers. This can be most easily visualized in the spherical case, where we treat isotropic world-lines as radii, and view the successive spherical shells as spatial surfaces (with only two dimensions for simplicity). The ratio of these radii of curvature gives a scale factor equal to the ratio of distances. We define the present hypersurface of space to have a scale factor of 1, so proper distance equals comoving distance. In the coordinate system of comoving observers, however, we could keep the comoving distances constant, just as in our illustration the radial world-lines are separated by a constant θ, ignoring the scale factor.

If space itself is stretching, it may be wondered how anyone within the universe can measure it. After all, if the length of a meter stick stretches along with space, should we not find things to be the same distance in meters? We must give a physical account of the so-called cosmological redshift. The “recessional velocity” of distant galaxies is computed from the assumption that the observed shift in signal frequency is caused by a Doppler-like phenomenon. Yet this is not a classical Doppler effect, where the source is receding in the rest frame of the observer. Under special relativity, we find another kind of Doppler effect that comes from the comparison of inertial frames. There we find that the frequency of a photon varies linearly with the relative four-velocity between reference frames. In general relativity, however, we are using comoving frames, not inertial frames. The cosmological redshift comes from a different consideration.

When dealing with curved manifolds, it is useful to define a Killing vector field, which preserves the value of distances along the metric. In our case of simple polar coordinates, a Killing field could consist of unit vectors at all points on the shell in the θ direction, that is, tangent to the manifold. The Lie derivative of the metric tensor with respect to the vector field must be zero; i.e., the metric must be unchanged as it is dragged along the flow of the field. If a particle at each point on the manifold is moved incrementally by the Killing vector at that point, all such particles would be displaced by the same amount, so the metric is preserved. The Killing vector field is thus said to generate an isometry.

As discussed by R.M. Wald (*op. cit.*), the projection of the Killing vector on each spatial surface changes with the scale factor over proper time. From this it follows that the frequency of a photon will also change by the same factor, since the product of the Killing vector and 4-velocity is constant along a geodesic (which has null distance in the case of a photon).
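The upshot of Wald's argument is the standard relation 1 + z = a(t_{obs})/a(t_{emit}): the observed wavelength scales with the factor by which the spatial metric has expanded while the photon was in transit. A trivial numerical sketch (my illustration of the standard relation, not an example from the text):

```python
def redshift(a_emit, a_obs):
    """Cosmological redshift from the ratio of scale factors: 1 + z = a_obs / a_emit."""
    return a_obs / a_emit - 1

# Light emitted when the scale factor was half its present value
# arrives with its wavelength doubled, i.e., redshift z = 1.
z = redshift(0.5, 1.0)
lambda_emit = 500e-9                 # a 500 nm photon at emission
lambda_obs = lambda_emit * (1 + z)   # observed at 1000 nm
print(z, lambda_obs)
```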

This phenomenon is often explained by saying that space stretches while the photon is in transit, stretching it into a longer wavelength. This visualization could be misleading, however, since a photon is not a mechanical wave displacing some extensive medium. Rather, the wavelength is something perceived only by non-photonic observers, as the inverse of the photon’s frequency in the observer’s frame. The perceived frequency is based on the perceived time difference between electromagnetic amplitude peaks. For the photon itself, there is no time duration or length of space. Frequency is something that can only be perceived by other observers, depending on their relative timescale. Instead of space stretching out, we could just as well say time is slowing down everywhere. Recall that the coordinate system using the proper time of isotropic observers is computationally convenient, but not physically privileged. As Friedmann noted, there is no physical or philosophical necessity for such a choice of coordinates. In other coordinate systems, less spatial “stretching” would be perceived. We describe it as stretching only because we have chosen a convenient coordinate system where proper time is orthogonal to homogeneous space. In general, the scales of time and spatial coordinates may be interdependent.

Instead of “stretching,” we might say that space is “flattening” over time; i.e., the radius of curvature is increasing. Yet spacetime curvature is defined solely by matter (when Λ = 0), so the “expansion of space” is really the story of *matter* spreading apart, with this changing cosmic distribution creating a corresponding change in the gravitational (or “inertial”) motion of each object, so that the freefall trajectories move farther apart from each other. Recall that this expansion is predicted by general relativity with no cosmological constant, using nothing other than the stress-energy tensor, i.e. the mass-energy-momentum distribution of the universe, as a source. There is no need to posit “space” or “spacetime” as an additional substantive physical thing that is a gravitational or kinematic source.

Once we understand that the expansion of space is a statement about the divergence of geodesics, not the motion of physical objects in inertial frames as in special relativity, it should become clear that there is no obstacle to such divergence resulting in distance-to-proper time ratios (“recessional velocities”) greater than *c*. We can never observe another object to move faster than the velocity of light in the *rest frame* of any object. In general relativity, however, we are dealing with (freefall) accelerating comoving observers. If spacetime is curved, timelike geodesics may diverge, resulting in comparative motion that is faster than *c* in comoving frames. Recall that *c* in special relativity defines the metric of a flat spacetime. In general relativity, gravitational curvature of the metric can result in the distance effectively traveled along a null geodesic exceeding *ct*, when measured along a curved hypersurface. Since photon trajectories are null even under general relativity, nothing can overtake a photon.

It may seem strange that we should even be able to observe superluminally receding galaxies. To explain this, it is useful to distinguish the cosmic event horizon, particle horizon, light cone, and Hubble sphere. Recall from special relativity that the absolute past and absolute future with respect to a given event can be defined by its light cone, which contains all possible world-lines that pass through that event. That assumed that the geometry of spacetime was flat Lorentzian and static. Once we allow for curvature, however, the null geodesics of photons can be curved, altering the shape of the light cone and introducing path dependence. When we allow for a dynamic geometry, i.e., where the metric of 3-dimensional spatial hypersurfaces changes with respect to the proper time of comoving observers, then distant events may come in and out of the light cone as the universe expands or contracts at varying rates. Thus the range of possibly causally linked events is considerably broader than that of the light cone at a given instantaneous event.

There are two types of limits to the range of causally linked events, both of which rely on the universe having a finite age. These are defined using comoving distance, which treats all comoving observers as having a fixed distance over time, i.e., excluding the effects of metric contraction or expansion. This should not be confused with the proper distance, which is what appears in Hubble’s law, and tells how far away galaxies appear to us.
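The relation between the two distance measures can be made precise with the standard FLRW scale factor *a(t)* (a textbook result, not an equation taken from this text): proper distance is the fixed comoving coordinate χ scaled by *a(t)*, and differentiating with respect to cosmic time yields Hubble’s law.

```latex
D_{\mathrm{proper}}(t) = a(t)\,\chi, \qquad
v_{\mathrm{rec}} \equiv \dot{D}_{\mathrm{proper}} = \dot{a}\,\chi
  = \frac{\dot{a}}{a}\,D_{\mathrm{proper}} = H(t)\,D_{\mathrm{proper}}
```

Since χ is constant for a comoving galaxy, all change in its proper distance comes from *a(t)*; nothing in this relation bounds *H(t)·D* by *c*.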

The cosmic particle horizon is the maximum *comoving* distance a signal could conceivably have traveled (in terms of having enough time at light speed) to reach a given observer (situated at some event after the beginning of the universe), had it begun its travel when the universe began. This represents the absolute maximum range of the observable universe at a given proper time.
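In the same notation (a standard result, assuming an FLRW scale factor *a(t)*), the particle horizon at cosmic time *t* is the comoving distance light could have covered since the beginning of the universe:

```latex
\chi_{\mathrm{PH}}(t) = c \int_{0}^{t} \frac{dt'}{a(t')}
```

This integral is finite because the universe has a finite age, which is why such a horizon exists at all.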

An event horizon is the spatial boundary beyond which events cannot affect the observer. In special relativity, this would simply be the light cone. The *cosmic* event horizon is the largest comoving distance from which light emitted *now* (i.e., at an event on the same spatial 3-surface as the observer) could ever intersect the observer’s world-line.
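Correspondingly, the cosmic event horizon at time *t* is the comoving distance from which light emitted now can still reach the observer over the entire future of the universe (again a standard formula, not one quoted in the text, with *t*<sub>max</sub> infinite for eternally expanding models):

```latex
\chi_{\mathrm{EH}}(t) = c \int_{t}^{t_{\mathrm{max}}} \frac{dt'}{a(t')}
```

A finite event horizon exists only when this integral converges, as it does under an accelerating (e.g., Λ-dominated) expansion.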

Both types of horizons are measured in terms of comoving distance, i.e., ignoring metric expansion. When we include metric expansion, it turns out that these horizons encompass events from proper distances much further than what could be reached by light-speed signals. This effective superphotonic speed is allowable because of the curvature of spacetime, and the fact that we are comparing measurements between accelerating trajectories. Nonetheless, the signals will be observed to be traveling at *c* locally (in empty regions of space), where cosmic spacetime curvature is negligible. This is only an approximation, however, for the increase in speed (in an isotropic frame) is continuous.

The Hubble sphere is defined as the boundary beyond which galaxies recede superluminally. Such a boundary is definable thanks to Hubble’s law, whereby the rate of recession is proportional to proper distance. Since proper distance changes over proper time, the Hubble sphere can expand or contract. If the rate of expansion is not constant, distant comoving observers formerly outside our Hubble sphere can later come within it, so even superluminally receding galaxies may become observable. Thus the Hubble sphere is not a true event or particle horizon, though it is sometimes called the “Hubble horizon.”[5]
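To make Hubble’s law and the Hubble sphere concrete, here is a minimal numerical sketch in Python; the value of the Hubble constant below is an illustrative assumption, not a figure from the text:

```python
# Minimal sketch of Hubble's law and the Hubble sphere.
# H0 is an assumed illustrative value (~70 km/s/Mpc), not taken from the text.

C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def recession_velocity(proper_distance_mpc: float) -> float:
    """Hubble's law: recession rate is proportional to proper distance."""
    return H0 * proper_distance_mpc

# The Hubble sphere is the proper distance at which recession reaches c.
hubble_radius_mpc = C_KM_S / H0   # roughly 4283 Mpc for these values

# A galaxy inside the Hubble sphere recedes subluminally;
# one beyond it recedes superluminally, yet may still become observable.
print(recession_velocity(4000.0) < C_KM_S)   # inside the sphere: True
print(recession_velocity(5000.0) > C_KM_S)   # outside the sphere: True
```

Note that the “velocity” computed here is a ratio of proper distance to proper time between comoving frames, not a velocity measured in any single inertial frame, which is why exceeding *c* involves no contradiction.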

It should be emphasized that the metric expansion of the universe is only on a cosmic scale. Stars, galaxies and clusters typically do not expand with the Hubble flow, which is why early twentieth century astronomers, viewing only stars in our galaxy, thought the universe was in a steady state. This should not be surprising when we consider that the metric expansion is an effect of matter spreading apart. Where matter is locally present in concentration, there should be no cause for local expansion. Metric expansion is a large-scale effect attributable to the dispersion of galaxies across such vast distances that they may be modeled as evenly distributed particles of dust. The recent discoveries of large scale structure or superclusters of galaxies may lead us to modify this model, and even compromise the assumption of homogeneity.

We should keep in mind that the metric expansion occurs only in a particular coordinate system, that of isotropic observers, yet the fundamental purpose of relativity is to give us physics that is independent of choice of coordinates.
The only physical invariant is the four-dimensional interval *ds*². Further, we should note that it is generally impossible to define univocally the relative velocity between distant objects in general relativity, since that would require vector addition and path independence, which hold only on the assumption of constant curvature. Constant curvature does obtain in the frame of isotropic observers, yet this frame is not physically privileged.
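For the isotropic (comoving) coordinates under discussion, the invariant interval takes the standard Friedmann-Lemaître-Robertson-Walker form (the spatially flat case is shown for illustration):

```latex
ds^{2} = -c^{2}\,dt^{2} + a(t)^{2}\left[d\chi^{2} + \chi^{2}\,d\Omega^{2}\right]
```

The split into a “temporal” part and a “spatial” part depends on the choice of coordinates; only *ds*² itself is invariant, which is precisely the point at issue.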

Einstein understood gravitation as the presence of massive bodies altering the spacetime pseudometric, which in turn alters the freefall trajectories of masses. Gravity could be considered an action-reaction interplay between mass and spacetime. Yet this seems to make spacetime into a substantive physical entity, capable of changing its form and exerting force on bodies. This is at odds not only with the classical accounts of space and time as accidents, but also with modern relational concepts.

This contradiction might be resolved by supposing space and time to be accidents of a substance beyond the world of sense, so that there is absolute place, though this is not something that affects any empirical observation. Alternatively, as M.J. Chodorkowski suggests (2008), the supposed substance of spacetime may be matter itself. In this view, matter affects matter reflexively, without the intermediary of a substantial spacetime. This would force us to abandon Einstein’s action-reaction interpretation.

Others, such as M.J. Francis et al. (2007), hold an instrumentalist view, noting that the expansion of space is “not a force-like term in a dynamical equation.” Like electromagnetic fields, spacetime is a useful construct, fundamentally unobservable, so it is senseless to argue over whether it really exists or is just matter telling matter how to move.

Still, it is hardly deniable that the metric tensor can at least be considered some kind of physical field, even if we cannot use it to uniquely specify a manifold. The metric field might be something substantial, as it may, at least in principle, have properties independent of the mass-energy distribution. This substantivalist interpretation would accord with discussions of the Higgs field and vacuum energy in quantum mechanics. It does not require us to accept the absurdity that space or the vacuum is a substance, for the “vacuum” is not absolute. On the contrary, it may be considered a plenum field from which particles emerge. (The vacuum field or Higgs field could be modeled as a Bose-Einstein condensate, so it may occupy the same place or quantum state as other physical objects.)

As N. Heberlig (2007)[6] notes, however, it would be extremely problematic to extend this substantivalist interpretation of space to time as well. Thus the formal union of space and time as “spacetime” does not imply that space and time are of the same ontological category. It could be that space (or rather the plenum defining the metric) is substantive, while time is relational. This would require a non-ontological, instrumentalist interpretation of unified “spacetime,” which is not too troubling, considering we have already seen that space and time are not treated equivalently in the construct. On this reading, spacetime should be treated as a phase space rather than a physical entity.

Heberlig refutes two lines of relationist criticism against substantive space. First, he rightly shows that spacetime structure is not reducible to relations of causality. Second, he notes that our inability to specify a unique manifold with our knowledge of the metric does not prove that curved space is not substantive, only that we are epistemically limited. Measurements, he notes, do not exhaust reality, only what we can know about reality.

While it is conceivable that the spacetime metric really does describe some cosmic substratum beyond mass-energy, we cannot pretend that this is demonstrated by general relativity. After all, general relativity takes the view that there are no absolute spatial coordinates. There is no univocal answer to how much of a spacetime interval is “spatial” rather than “temporal.” Thus Heberlig’s attempt to distinguish substantive (spatial) and relational (temporal) components of spacetime seems problematic. The metric is defined relationally (i.e., by the spatial distances and time intervals between events). Even if the metric can be changed by something other than mass-energy distribution, this would not abolish its fundamentally relational definition. It would just mean some other source (e.g., a plenum field) contributes to this relational structure.

Still, we should take to heart the relativistic evidence that space is not simply a relation, for it is not entirely dependent upon the things related. This agrees with our Aristotelian analysis of “place.” (Sec. 11.6) Space is a relational accident of bodies, but it is an extrinsic accident. Attempts to reduce it to the category of substance or relation fail to account for everything we can infer about space from observation and reasoning. Einstein himself eventually came to terms with the fact that general relativity was inconsistent with Mach’s belief that space was purely a relation among bodies. While it is tempting to view this as a return to absolute place, in fact the topology of general relativity shows that we can never know such an absolute structure from physical measurements of place and time. Such an absolute structure, if it exists, would be beyond physics, if we restrict that science to the observable world.

[1] Accelerating a massive particle requires kinetic energy approaching infinity as the velocity approaches *c*. It is mathematically possible to model particles that are always superphotonic, but these “tachyons” would have imaginary mass, act at a distance, and propagate backward in time, resulting in causal paradoxes that can be circumvented only on a microphysical scale (using the Stückelberg-Feynman switching procedure). Since they would accelerate with decreasing energy, it requires tortuous theorizing to prevent them from resulting in runaway reactions. Unsurprisingly, no observation has confirmed their existence.

Granting that nothing can move faster than light *with respect to space*, one might conceivably move a part of space itself within curved space, as in an Alcubierre drive. Yet this is to treat space like a substance, and introduces the absurdity of place having a place. Attempts to circumvent the speed of light limit effectively deny Einstein’s conception of spacetime geometry, where *c* effectively functions as infinite velocity would in classical mechanics.

[2] For example, the so-called “twin paradox” is resolved by invoking two successive inertial frames for the departure and return legs of one twin’s world-line.

[3] What most textbooks call the “Schwarzschild solution” more closely resembles those published by David Hilbert and Johannes Droste (separately) in 1917. It is disputed whether these solutions are fully equivalent to that published by Karl Schwarzschild in 1916, especially as applied to the existence of black hole singularities.

[4] In simple terms, a homogeneous universe with mass > 0 and Λ = 0 would have an expanding metric, though the rate of expansion would decrease over time, while Λ > 0 alone would make the metric expand at an increasing rate. The fact that the matter contribution decelerates expansion while the Λ contribution accelerates it makes the Einstein static solution unstable. Any slight change in mass-energy distribution could cause runaway expansion or contraction.
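For reference, the standard Friedmann acceleration equation (a textbook formula, not quoted in the text) exhibits the two competing contributions: the matter term, with ρ, *p* > 0, makes the second derivative of the scale factor negative and so decelerates the expansion, while a positive Λ contributes a positive, accelerating term.

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

The Einstein static solution balances these two terms exactly, which is why any perturbation tips it into runaway behavior.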

[5] Tamara M. Davis and Charles H. Lineweaver. “Expanding Confusion: Common Misconceptions of Cosmological Horizons and the Superluminal Expansion of the Universe.” *Publications of the Astronomical Society of Australia*, 2004, v. 21, 99-109.

[6] Nathan Heberlig. “On the ontological status of space-time: Scientific realism and geometrical explanation.” Doctoral Thesis. State University of New York at Buffalo, 2007.

© 2015-16 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org
