Invasion of the Body Scanners

A funny thing happened on the way to the airport: the American public has finally stopped its docile submission to the ludicrous security measures imposed by the Transportation Security Administration, a division of Homeland Security. The last straw, apparently, is the new “virtual strip search” body scanning being installed by the hundreds in airports throughout the U.S., at a cost of $173 million, giving new meaning to the term “stimulus spending”. Pilots, airlines, and travelers have all raised massive protests against these scans, and a national opt-out day has been planned for November 24. The alternative to the scan is an aggressive frisk, akin to sexual molestation, which serves no real security purpose, but is intended to frighten passengers into assenting to the scans.

Some citizens, finding both the scans and the opt-out procedures abhorrent, have called for a boycott on flying. A public policy research center, EPIC, has filed a lawsuit against Homeland Security, holding that the scans are a violation of the Fourth Amendment, as well as the Administrative Procedure Act, the Privacy Act, the Video Voyeurism Prevention Act, and the Religious Freedom Restoration Act. The last act is cited by a Muslim litigant who holds that the scan-or-grope system is a violation of Islamic modesty. Indeed, it is a violation of any semblance of sexual decorum retained by any culture.

The defense of the scans is predictably crass and utilitarian, with the usual senseless fearmongering about terrorism we are accustomed to hearing from Homeland Security apparatchiks. Never mind that Americans are far more likely to be murdered by non-terrorists – some 15,000 victims a year – yet we do not on that account put high-crime areas under lockdown. Terrorism is a convenient excuse for the expansion of the police state. It is not even true that airline terrorism is at its worst in this decade. As airline pilot Patrick Smith points out on Salon.com, the world experienced a horrific sequence of airplane terrorism from 1985 to 1989, yet this did not result in a call for draconian security measures. Even if a 9/11-scale catastrophe happened every year, you would still be more likely to die as a pedestrian in a road accident (over 4,000 deaths per year).

If this were not enough, the scans do not even enhance security as claimed. Body scans would not have detected the low-density materials carried by the “underwear bomber”, the ostensible reason for implementing the scans. And even if they could, should we institute randomized cavity searches the moment a single terrorist smuggles contraband in an orifice, as is common practice among prisoners and guerrillas? As Israeli counterterrorism experts understand, effective security requires breaking up the plot before it gets to the airport, or identifying suspects based on behavior. This is why Tel Aviv’s airport doesn’t use the scanners, and a leading Israeli airport security expert has called them “useless”.

That’s right. All the tough-talking TSA fearmongers who talk about needing to counteract “the enemy” aren’t even effective at doing that. Fortunately for them, their performance can be rationalized as a success no matter what happens. If there are no more successful attacks, this proves that the scans work (“banana in the ear” syndrome), and if there is a successful attack, that will only prove that we need to put more power in the hands of the TSA. Their ineptitude will still be obvious to those who are well informed, and as for their tough talk about the need to sacrifice some liberty in order to win this fictitious war, it cannot mask their craven cowardice, for only a creature somewhat less than a man could allow the fear of death to make him seriously consider compromising his dignity.

Cancer: A Disease of Civilization

The only thing surprising about the Villanova study indicating that “cancer-causing factors are limited to societies affected by modern industrialization” is that this is actually considered news. It has long been known that the incidence of cancer is extremely small in pre-industrial societies, nearly non-existent. In order to defend the current culture of emphasizing intervention over prevention, oncologists and others have argued that this is only an artifact of people living longer. The oft-repeated excuse that ancient people rarely lived to be more than 30 is profoundly ignorant and misleading.

When we say that ancient societies had life expectancies of 30 or so, this does not mean that people typically died around age 30. This is an average life expectancy at birth, a figure that is heavily skewed downward by a high infant mortality rate. Those who reached adulthood could reasonably expect to live into their 40s or 50s, and many lived to be over 70. This is why, when you read ancient Greek and Roman texts, such as the lives of the philosophers, there is nothing considered remarkable about living to eighty years old, and it is considered tragic if someone dies in their thirties. The “life expectancy” was much shorter, but the lifespan of man was just as long. People didn’t start having their bodies break down at age 45 just because the average life expectancy was shorter.

The same is true among Native Americans, for whom we have a better record, since they lived in technologically primitive conditions until relatively recently. Nonetheless, they were noted for their health and longevity. Plenty of seventeenth century documents attest to the long lives of the Indians in Mexico, for example, where there were many who lived to well over eighty, including some reputed centenarians. The Inuit had no incidence of cancer whatsoever, despite living long enough to develop it (after all, we start screening for cancer at age 40, or 50 at the latest). It was only in the twentieth century that cancer became prevalent among the Inuit, when they adopted a modern diet and a sedentary lifestyle.

In the mid-twentieth century, it was thought that men having heart attacks at 50 was just something that happens as part of the aging process, but now we know that early heart disease is fully preventable, being a consequence of behavior: diet and lack of exercise. Drug companies and doctors who advocate aggressive interventions downplay prevention, since selling you the cure is much more lucrative. To this day, there is little emphasis on nutrition in medical school, and little serious research into the toxicity of processed food except in simplistic dose-response studies. Our bodies are complex organisms, and there are likely many interaction effects among substances that, by themselves, may appear harmless.

The Villanova researchers do not do themselves any favors when they make the excessive claim that “There is nothing in the natural environment that can cause cancer.” Still, the natural incidence of cancer among animals is astonishingly low, especially for a disease that is supposedly just a natural consequence of living long. Animals kept in captivity, protected from predation, can live their full natural lifespan, so we should certainly expect to see a significant incidence of cancer among them, but we do not. Up until the eighteenth century, cancer was comparably rare among humans. Indeed, when you read older documents mentioning a “cancer,” they are often referring to gangrene or some other degenerative condition. Cancer as we know it was extremely rare, though there were plenty of people who lived to old age.

The real turning point appears to have been in the last few centuries. The hallmarks of societies that have what we now consider “normal” cancer rates are consumption of refined sugars and carbohydrates (following the discovery of the Americas), tobacco use, bearing a first child later in life, and universal use of toxic pesticides and other pollutants. The sheer abundance of toxins in our food makes it practically impossible to single out a cause in a simple dose-response relationship, which is why countless substances, each only minimally harmful on its own, are allowed to remain on the market.

The embarrassing fact that many of the technological niceties forced upon us ostensibly to improve our lives (but in fact to reduce the cost of production and increase profit) are actually killing us makes a mockery of the notion of science and technology as a sort of earthly salvation. The technocratic establishment expects to be praised for saving us from diseases created through their own uncritical hubris (“Don’t be silly, this won’t hurt you, you Luddite”). For those who are hoping for salvation through medical science, I should remind you that researchers are just flesh-and-blood human beings, many of whom are susceptible to vanity and self-aggrandizement. These sorts understand perfectly well that an ounce of prevention is worth a pound of cure, or ten pounds of “treatment”. The medical establishment has judged it is much more profitable to sell you the ten pounds, and it’s hard to argue with their math, as the cancer treatment industry is currently worth over $200 billion.

N.B. Nonetheless, there is some good research that addresses the root causes of cancer and how to prevent it. One especially worthy institution in this regard is the American Institute for Cancer Research.

Cooling the Global Warming Rhetoric

There was no global warming from 2001 through 2009, yet we are now told that this past decade was the warmest on record. Both assertions are factually correct, though they paint very different pictures of the state of the world’s climate.

As we can see in the above graph, 1998 was an unusually hot year (due to “El Niño”) compared with the previous decade, followed by an uncharacteristically cool 1999. Temperatures rose in 2000 and 2001, short of the 1998 peak, yet higher than the average for the 1990s. From 2001 through 2009, the annual global mean temperature anomaly remained about the same, within measurement error. The media likes to publish things like “this was the third hottest year on record,” but such year-to-year rankings are meaningless, as they are distinguished only by statistical noise. Scientists know better than that, yet NASA researchers have often played a media game where they announce that a given year is on track to be the hottest on record. In mid-2005, for example, NASA announced that the global temperature was on track to surpass the 1998 record. Such a distinction is meaningless, since the temperatures were about the same, within the accuracy of the measurements.

Note that we do not measure the average global temperature directly, but rather a temperature anomaly, or change with respect to some baseline. In absolute terms, the average surface temperature of Earth is about 15 degrees Celsius, a figure that is accurate only to within a full degree. It is impossible to come up with an accurate global mean temperature in absolute terms, due to the lack of weather stations in regions such as oceans and deserts, which magnifies the overall uncertainty. How, then, can we come up with more accurate measurements of global temperature change?

First, a baseline period is chosen. A common choice is the period 1951-1980, which was relatively cool compared to the decades before and after. The graph shown uses 1961-1990 as a baseline, and many current measurements use 1971-2000, since there is more data for this period. For each temperature station or location, we compare the current measured temperature with the average measurement at that station over the baseline period; the difference is called the “anomaly” – really, simply an increase or decrease with respect to our arbitrary norm (i.e., the baseline). We can measure each local temperature anomaly to the accuracy of the thermometer, and climate scientists maintain that interpolating the anomaly between stations introduces little error, since temperatures in a region tend to rise and fall by about the same amount; on this basis, the global mean temperature anomaly is held to be accurate to 0.05 degrees C. This is more accurate than any individual measurement, due to the error propagation of averages: independent errors add in quadrature, so the error of the mean of N measurements shrinks as 1/√N, below the error of any single measurement.
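
As a minimal sketch of the arithmetic (the station readings and the per-station error below are invented for illustration; real analyses use thousands of stations with gridded area weighting):

```python
import math

# Hypothetical readings (deg C) for one station in the same calendar month
# across the baseline years; these numbers are invented for illustration.
baseline_temps = [14.2, 14.5, 13.9, 14.1, 14.4]
current_temp = 14.9  # the same month in the current year

# The station's anomaly: current reading minus its own baseline average.
baseline_mean = sum(baseline_temps) / len(baseline_temps)
anomaly = current_temp - baseline_mean
print(f"station anomaly: {anomaly:+.2f} deg C")

# Independent errors add in quadrature, so the error of the mean of N
# stations shrinks as 1/sqrt(N) relative to the per-station error.
sigma_station = 0.5  # assumed per-station measurement error (deg C)
n_stations = 100
sigma_mean = sigma_station / math.sqrt(n_stations)
print(f"error of the mean over {n_stations} stations: {sigma_mean:.3f} deg C")
```

With an assumed half-degree error per station, a hundred independent stations would already bring the error of the mean down to the oft-quoted 0.05 degrees C; the catch, as discussed next, is whether the stations really are independent and representative.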

This assessment of the accuracy depends on the assumption that our weather stations are a fair representative sample of global climate, and that our interpolations over local regions are valid and accurate. Logically, we cannot know the actual global temperature increase or decrease any more accurately than we know the global mean temperature: i.e., within 1 degree Celsius. What we call the “global mean temperature anomaly” is really a weighted average of all measurement stations, which we assume to be a representative sample of the entire globe. Strictly speaking, this is not the same as “how much the temperature increased globally”.

In fact, it is arguable that the notion of “global temperature” is a will-o’-the-wisp. The atmosphere is in constant thermal disequilibrium, both locally and globally, making it practically, and even theoretically, impossible to have a well-defined temperature at any point in time, since the notion of temperature presumes a large-scale equilibrium. Even if we could integrate the momentary temperature over time at a given point, the “average” value we get would not be at all representative of the actual temperature, since more time is spent near the extremes than at the average. This is why we take the maximum and minimum temperatures of each 24-hour period (the daily “high” and “low”) as our data set for each measurement station. There is also the difficulty that surface temperature varies considerably from 0 to 50 feet above the ground, and there is no rigid definition of the altitude at which “surface temperature” should be measured. This introduces the real possibility of systematic error, and makes our assessment of the accuracy of the mean anomaly somewhat suspect.
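
To make the convention concrete, here is a trivial sketch (the highs and lows are invented): the conventional daily mean is the midpoint of the day’s high and low, not a true time-integrated average.

```python
# Hypothetical daily highs and lows (deg C) at one station over a week.
highs = [21.0, 23.5, 22.1, 19.8, 20.4, 24.0, 22.7]
lows = [12.3, 13.1, 11.8, 10.9, 12.0, 14.2, 13.5]

# Conventional daily mean: the midpoint of the high and low, which need
# not equal the true time-integrated average over the 24-hour period.
daily_means = [(h + l) / 2 for h, l in zip(highs, lows)]
weekly_mean = sum(daily_means) / len(daily_means)
print(f"weekly mean from daily midpoints: {weekly_mean:.2f} deg C")
```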

Let us assume, for the sake of argument, that the mean temperature anomaly really is accurate to 0.05 degrees Celsius. Then if 2005 was 0.75 degrees Celsius above the 1951-1980 average, while 1998 was 0.71 degrees Celsius above average, it is statistically meaningless to announce that 2005 was the hottest year on record, since we do not know the mean temperature anomaly accurately enough to know whether 1998 or 2005 was hotter.
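
The point can be checked in a couple of lines: the error of a difference of two independent values also adds in quadrature, so the 0.04-degree gap between the two years is smaller than its own uncertainty.

```python
import math

# Anomalies (deg C) from the text, each with an assumed 0.05 deg C error.
t_2005, t_1998, sigma = 0.75, 0.71, 0.05

# Error of a difference of independent values: quadrature sum of the errors.
diff = t_2005 - t_1998
sigma_diff = math.sqrt(sigma**2 + sigma**2)  # about 0.07 deg C
print(f"2005 minus 1998: {diff:.2f} +/- {sigma_diff:.2f} deg C")
# The difference (0.04) is smaller than its uncertainty (0.07), so the
# "hottest year on record" ranking is indistinguishable from noise.
```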

All of the foregoing deals with straight temperature records, not adjusted for anything except regional interpolation. However, there are many tricks – not falsifications, but misleading portrayals – one can perform in order to paint the picture one wishes to show. For example, one can subtract out natural oscillations caused by things like El Niño, volcanic eruptions, and variations in solar irradiance, to show what warming would have occurred without these natural effects. This methodology can be dubious, for you are subtracting natural cooling effects without subtracting potential warming effects that may be coupled to them.
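
For concreteness, here is a toy sketch of how such a subtraction is typically carried out – an ordinary least-squares regression of the temperature series on the natural indices, with the fitted contribution then removed. The data below are synthetic, and the single ENSO stand-in is a placeholder for the measured ENSO, volcanic-aerosol, and solar-irradiance indices a real analysis would use:

```python
import numpy as np

# Synthetic toy data: a linear trend plus an ENSO-like oscillation and noise.
rng = np.random.default_rng(0)
years = np.arange(1980, 2010)
enso = np.sin(0.9 * years)  # invented stand-in for an ENSO index
observed = 0.015 * (years - years[0]) + 0.10 * enso \
    + rng.normal(0.0, 0.03, years.size)

# Fit observed = a + b*year + c*enso by ordinary least squares.
X = np.column_stack([np.ones(years.size), years.astype(float), enso])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)

# "Adjusted" record: subtract the fitted ENSO contribution.
adjusted = observed - coef[2] * enso
print(f"fitted ENSO coefficient: {coef[2]:+.3f} deg C per unit index")
```

The caveat above applies directly to this procedure: the regression removes only the linear, instantaneous part of the natural signal, and anything coupled to it – warming or cooling – is silently attributed to the residual trend.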

There was dramatic global cooling in 2007, driven by strong La Niña conditions (the decrease in solar activity that year was not enough to account for the cooling). The cooling trend continued in 2008. Although the stagnation in annual mean temperature throughout the decade is still consistent with the hypothesis of long-term anthropogenic global warming, some climate scientists recognized that this presented bad PR for their views, and reacted accordingly.

In October 2009, Paul Hudson of the BBC (an organization generally sympathetic to the AGW hypothesis) wrote that there was no warming since 1998. This was a fair and accurate presentation of reality, as climate scientists privately admitted. According to the infamous East Anglia e-mails, even the strident advocates of the AGW hypothesis acknowledged that there was a lack of warming. In response to the Hudson article, Kevin Trenberth wrote: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” Tom Wigley disagreed that the lack of warming was unexplainable, yet even he admitted there was a lack of warming:

At the risk of overload, here are some notes of mine on the recent lack of warming. I look at this in two ways. The first is to look at the difference between the observed and expected anthropogenic trend relative to the pdf [probability density function] for unforced variability. The second is to remove ENSO [El Niño Southern Oscillation], volcanoes and TSI [Total solar irradiance] variation from the observed data.

Both methods show that what we are seeing is not unusual. The second method leaves a significant warming over the past decade.

Wigley is basically saying that if certain natural sources of variability were removed, there would have been warming. However, these variations did in fact occur, so there was no actual warming. Wigley shows how the lack of warming might be explained by other factors, yet he does not deny that, in actuality, there was no warming.

It is very dubious to play this game of “what would have happened” in climate history, since all atmospheric events are interconnected in a chaotic flux, making it impossible to cleanly subtract out an effect as if it were an autonomous unit. This is why analyses trying to show what percentage of warming is caused by which effect are really unhelpful. I have little confidence in the ability of climate scientists to make long-term predictions about chaotic systems with insanely complex oceanic and biological feedbacks. From my recollections at MIT, those who majored in earth and atmospheric sciences weren’t the most mathematically gifted, so I doubt that they’ve since managed to work such a mathematical miracle. It is hardly encouraging that climate scientist Gavin Schmidt claimed that the Stefan-Boltzmann constant should be doubled because the atmosphere radiates “up and down”! I don’t fault James Hansen for erroneously predicting to Congress in 1988 that there would be a decade of increased droughts in North America. What is objectionable is when such a claim is presented as the only scientifically plausible inference. I know that CO2 is a greenhouse gas, and that it should cause warming, all other things being equal, but all other things aren’t equal.

What do we know, then, if anything? The graph below shows a five-year moving average of the global mean temperature anomaly. (The moving average is to smooth out irregular spikes.)
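
A five-year centered moving average is straightforward to compute; the sketch below uses invented anomaly values simply to show the smoothing:

```python
# Hypothetical annual anomalies (deg C); values invented for illustration.
anomalies = [0.40, 0.71, 0.42, 0.45, 0.55, 0.62, 0.61, 0.54, 0.64, 0.60]

# Centered five-year moving average: each smoothed point is the mean of
# the year itself plus the two years on either side.
window = 5
half = window // 2
smoothed = [
    sum(anomalies[i - half : i + half + 1]) / window
    for i in range(half, len(anomalies) - half)
]
print([round(x, 2) for x in smoothed])
```

Note how the isolated spike in the second value (a 1998-style outlier) is diluted across its neighbors, which is exactly why single hot years do not show up as dramatic features in the smoothed record.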

From this graph, several distinct periods can be discerned with regard to temperature trends.

1860-1900: constant to within +/- 0.1 deg C.

1900-1910: dropped to 0.2 deg C below the 1860-1900 average.

1910-1940: warmed 0.5 deg C from 1910 low, or 0.3 deg C above 1860-1900 avg.

1940-1950: dropped 0.2 deg C from 1940 peak, or 0.1 deg C above 1860-1900 avg.

1950-1975: stable to within 0.1 deg C (that’s why it’s used as a baseline); in 1975, 0.2 deg above 1860-1900 avg.

1975-2000: 0.5 deg warming, or 0.7 deg above 1860-1900 avg.

2001-2009: constant

So, there were thirty years of warming (1910-1940) totaling 0.5 deg, then a cooling/stable period (1940-1975), then another twenty-five years of warming (1975-2000) of another 0.5 deg. This is quite different from the popular conception of a monotonic, geometrically increasing temperature (i.e., “the hockey stick,” which makes clever choices of start date, zero point, and axis scale). There have actually been two periods to date of dramatic global warming: 1910-1940 and 1975-2000. This rudimentary analysis already suggests that anthropogenic effects on climate are more subtle and complex than a simple greenhouse effect.

Looking at this graph, we are right to be skeptical about Arctic ice cap alarmism. We have only been measuring the Arctic ice cap since 1979, which coincides with the start of the last great warming period. We do not, then, have an adequate long-term view of the cap, and cannot reliably extrapolate this extraordinary warming period into the future. Predictions of a one to three meter sea level rise, displacing hundreds of millions of people, such as that infamously made by Susmita Dasgupta in 2009, ought to be put on hold for the moment. An imminent climate disaster is always coming but never arrives. Just read the prophecies of imminent doom, flood and drought from the 1980s, or go back decades earlier to read the predictions of a new ice age.

In order to make the current warming seem even more catastrophic, many climate scientists have seen fit to rewrite the past, going so far as to deny the Medieval Warm Period, which is abundantly well established by the historical record. In their perverse epistemology, contrived abstract models with tenuous suppositions should have greater credibility than recorded contemporary observations. There is no getting around the fact that Greenland and Britain had much more inhabitable and arable land than at present. A more plausible hypothesis is that the Medieval Warm Period was not global, yet this too is increasingly refuted by paleoclimatic evidence.

I find that the importance of observed global warming is diminished, not enhanced, by excessive alarmism and overreaching. Short of an Arctic collapse – improbable, since the ice cap is much more advanced now than in the medieval period – we are left with the projection that global temperature may increase as much as a full degree C by 2100, with a probable sea level increase of about a foot: bad, but not cataclysmic. Of course, that doesn’t get you in the top journals or on government panels.

I’ve noticed that in physics, where things really are established to high degrees of certainty and precision, no one ever says “the science is overwhelming”. It is only when the science is not so “hard,” as in the life, earth, and social sciences, that you need to make repeated appeals to authority. When the science truly is overwhelming, the evidence speaks for itself, and no polls need to be taken.

Update – 14 Oct 2010

One of the last of the WWII-era physicists, Harold “Hal” Lewis of UC-Santa Barbara, has sparked an animated discussion with his letter of resignation from the American Physical Society, contending that the APS has been complicit in the “pseudoscientific fraud” of global warming, and that the Society has suppressed attempts at open discussion. He invokes the ClimateGate e-mails as evidence of fraud, and attributes the lack of critical discourse to the abundance of funding opportunities for research supporting the global warming hypothesis.