How Vaccines Help the Immune System

Much of vaccine hesitancy is grounded in the supposition that one is better off relying on one's “natural immunity.” This in turn supposes that there is some dichotomy or antithesis between “natural” and “artificial” immunization. In fact, vaccines operate by introducing an inactivated pathogen or a harmless protein into the body, so that the immune system can respond and learn to create antibodies. It is actually the immune system doing the “work” of immunization. The role of the vaccine is merely to introduce a harmless version of the pathogen, so that immunity can develop in a safe environment, leaving the immune system better prepared if the real pathogen ever arrives. The alternatives would be to (1) hope that one is never exposed to the pathogen or (2) get exposed to the new pathogen and hope that the immune system can deal with it effectively on the first try. The choice is not between “artificial” and “natural” immunity, but between a prepared and an unprepared immune system.

COVID-19 is now endemic, so it is a statistical near-certainty that everyone will be exposed to it at some point in their lives. Since it is a novel pathogen, our immune systems are not prepared for it with any specificity. For the unvaccinated, the risk of hospitalization and death varies greatly by age and existing health conditions. Even those who are not hospitalized, however, may suffer lasting “long COVID” effects. These include neurological disorders, respiratory damage, and increased risk of blood clots. Thus COVID-19 presents a substantial health risk to most unvaccinated adults.

You could say that you are willing to assume this substantial risk, or that, in your particular case (e.g., due to young age), the risk is objectively small. It makes no sense, however, to be sanguine about the risk associated with COVID exposure while at the same time being fearful of the risk associated with the vaccine, which does nothing but introduce an inert spike protein into the body, albeit indirectly. One cannot coherently fear the inert spike protein while having no fear of exposure to the real thing. In fact, all of the vaccines’ side effects, including the rare but serious blood clotting, are also effects associated with COVID-19 itself. This stands to reason: the inactive ingredients of the vaccine are harmless in their minute quantities, so whatever side effects occur must come from the spike protein and the immune response to it.

The mRNA vaccines (Pfizer and Moderna) work by introducing mRNA into muscle cells, instructing the body to create the spike protein. The mRNA itself, being quite fragile, disintegrates within a few days. The spike protein can remain for a few weeks, as the immune system takes time to develop a response. The Johnson & Johnson (Janssen) vaccine uses a piece of viral DNA (incapable of replicating) carrying the instructions to create the spike protein, delivered by an adenovirus vector, a technology that has been studied for decades. The mRNA method, though newly deployed, has likewise been studied for decades. It was not used previously, not because it is unsafe (the mRNA does nothing but code for the inert protein), but because there was no practical need. The difficulty and cost of storing mRNA are now outweighed by the need to produce vaccines in unprecedented quantities in a short time.

The COVID vaccines differ from most vaccines only in that they introduce the protein indirectly, by genetic instructions, and even this is not truly novel, since adenovirus-vector vaccines likewise deliver DNA instructions. Most vaccines operate by introducing the inert pathogen directly. They are not “medicines” or “artificial chemicals,” but pseudo-pathogens introduced to stimulate the immune system to prepare a defense. This is why the side effects of all vaccines are generally similar to the symptoms of the disease to be prevented.

The only lasting products of the COVID vaccines are the antibodies produced by the immune system. The mRNA/DNA disintegrates in days, and the spike protein is gone in a few weeks. These are all “natural” substances that operate according to well-understood biochemistry that regularly occurs in the body.

There is some evidence from Israel suggesting that the immunity (measured in antibody levels) resulting from COVID infection in the unvaccinated is greater than that provided by vaccination. Even if this is true, it is not an apt comparison, for it ignores the substantial health risk involved in being exposed to COVID while unvaccinated. The greater immunity is achieved only after going through COVID, and it is not possible to know in advance whether one will get a severe case or long-term symptoms. It would not be surprising if exposure to the real thing indeed provides better immunity than exposure to a pseudo-pathogen, but this is achieved only after a failure to prevent the disease. The same Israeli study notes that immunity is further enhanced by vaccination following infection. This finding shows that “natural” and “vaccine” immunity are not antithetical, but complementary.

Early claims about the efficacy of the mRNA vaccines proved to have been overstated, at least with regard to preventing infection. Some of this has to do with the more infectious delta variant, and some with the waning of immunity over time, which becomes substantial by six months. A regimen of once- or twice-yearly boosters seems likely. Nonetheless, the vaccines remain highly effective at reducing severe cases and the long-term health effects associated with them. It would obviously be more prudent to obtain this immunity before one enters a high-risk age group.

In short, without getting into the propriety of legal mandates and the rights of the individual versus those of society, we can see a clear prudential benefit to vaccination, at least for adults. All of the risks associated with the vaccines are objectively small, and even if they were not, they are necessarily no worse than the risk assumed by not being vaccinated, once it is understood that COVID is endemic and that the vaccines operate solely by introducing inert pseudo-pathogens, letting the immune system do the work of developing a defense.

By now, practically all of us know someone who has had COVID, perhaps including an unvaccinated person with a severe case and an elderly vaccinated person with a mild case. Some of us may have noticed how immunity to infection wanes after six months, and how those with boosters fare better when exposed at large unmasked gatherings. We cannot reasonably pretend that the health risk is negligible, nor that outcomes are not materially affected by vaccination. Hopefully, a demystified understanding of the quite ordinary processes by which vaccines operate will help reduce hesitancy in more people.

The muon g-2 experiment: Physics beyond the Standard Model?

This past Wednesday, I attended the webinar that presented the first results of the muon g-2 experiment since it was transferred to Fermilab in Illinois. I had worked on this experiment when it was at Brookhaven National Laboratory on Long Island. The collective results of the Brookhaven runs contradicted the theoretically predicted value for the muon’s magnetic moment, or g-factor, suggesting that physics beyond the Standard Model may be needed to explain this phenomenon. Experiment deviated from theory by over 3 standard errors, enough to suggest a new phenomenon, but not enough to meet the accepted threshold of 5 sigma (standard errors) for a new discovery. The team at Fermilab hopes to meet this threshold by reducing the systematic error (e.g., by improving the uniformity of the magnetic field by a factor of 3) and by collecting much larger statistics, using the lab’s accelerator complex as a high-intensity source of muons (produced from pion decay).

In the 15 years since the last published results from Brookhaven, the computed theoretical value has been refined, only amplifying the discrepancy with experiment. Dirac’s relativistic quantum mechanics predicts that the dimensionless magnetic moment (g-factor) of all charged leptons (electrons, muons, tau particles) should be exactly 2. Under quantum field theory, however, there are small corrections from self-interaction mediated by virtual particles. These virtual interactions span all possible combinations within the Standard Model. The largest of these corrections, discovered by Schwinger, comes from a virtual photon interaction and produces an anomaly of α/2π, where α is the fine structure constant, approximately equal to 1/137; this shifts g from 2 to about 2.00232. There are other, smaller corrections from other types of interactions. When all of these are included, the consensus theoretical value for the muon’s anomalous magnetic moment, a_μ = (g − 2)/2, i.e., half its deviation from 2, published in 2020 is:

116,591,810(43) × 10⁻¹¹

where the parenthetic figure is the uncertainty in the final digits. The reason this theoretical computation has an error at all is that some of the contributions, notably those of quantum chromodynamics (QCD), cannot be computed exactly, because the relevant integrals have no analytic solution. Instead, numerical approaches must be used. The largest source of error is the leading-order hadronic vacuum polarization (LO-HVP) contribution. There are two major approaches to modeling this hadronic contribution. The more purely computational approach is lattice QCD, in which space-time is approximated as a discrete lattice of finite volume and Monte Carlo sampling is used to select points for computation. (The points must be sampled randomly so that the error does not grow with the very large number of integration variables.) The other approach is to use dispersive methods, combined with experimental data on electron-positron annihilation cross-sections. The latter approach, though more data-driven and less purely computational, has the advantage of a smaller error. Dispersive techniques tend to give lower values for the hadronic contribution, and thus deviate more strongly from the experimentally measured anomaly, which at Brookhaven was:

116,592,089(63) × 10⁻¹¹

The theoretical consensus value differs from the Brookhaven result by 279 × 10⁻¹¹, or 3.7 sigma, where sigma is the theoretical and experimental errors added in quadrature.
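
For concreteness, here is a minimal Python sketch of the two calculations just described: Schwinger’s leading correction α/2π, and the quadrature combination of errors used to express the Brookhaven–theory discrepancy in sigmas. The numbers are those quoted above; the rest is plain arithmetic.

```python
import math

# Schwinger's leading QED correction: a = alpha / (2*pi), so g = 2 * (1 + a).
alpha = 1 / 137.036                      # fine structure constant (approximate)
a_schwinger = alpha / (2 * math.pi)
print(f"a ≈ {a_schwinger:.5f}, g ≈ {2 * (1 + a_schwinger):.5f}")   # a ≈ 0.00116, g ≈ 2.00232

# Discrepancy between the 2020 consensus theory and the Brookhaven measurement,
# both in units of 10^-11, with errors combined in quadrature.
a_theory, err_theory = 116_591_810, 43
a_bnl, err_bnl = 116_592_089, 63
sigma = math.hypot(err_theory, err_bnl)                 # ≈ 76 x 10^-11
print(f"difference = {a_bnl - a_theory} x 10^-11, "
      f"significance = {(a_bnl - a_theory) / sigma:.1f} sigma")     # 279, ≈ 3.7 sigma
```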

On April 7, 2021, the Fermilab team announced the results of its first run. This was not expected to meet the 5-sigma threshold, since the statistics are not yet large enough. That problem should be surmounted when the next runs are analyzed, which will take at least another two years. Even with the smaller statistics, the reductions in systematic error already resulted in a smaller total error than Brookhaven’s. The investigators were blinded to their high-precision clock’s time scale, and they agreed in their by-laws to publish the results, whatever they turned out to be, once unblinded (to eliminate bias from cherry-picking results). The dramatic unblinding took place at the webinar, as two non-investigators revealed the handwritten clock scale kept in a sealed envelope: 39997844. This allowed the instant computation of the anomalous magnetic moment:

116,592,040(54) × 10⁻¹¹

This was slightly lower than Brookhaven’s result, though within the error of both experiments. It was still different from theory by 230 × 10⁻¹¹, or 3.3 sigma. When the Fermilab results are combined with those of Brookhaven, reducing the statistical error, we get an experimental result of:

116,592,061(41) × 10⁻¹¹

This is a difference of 251 × 10⁻¹¹, or 4.2 sigma, from the 2020 consensus value. The probability of such a discrepancy arising by chance is about 1 in 40,000. (The 5-sigma threshold corresponds to about 1 in 3.5 million.) This result is already strongly suggestive of physics beyond the Standard Model.
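
The significance and the quoted odds can be checked with a few lines of Python; this is a sketch assuming a simple Gaussian error model, with SciPy used only for the normal tail function.

```python
import math
from scipy.stats import norm

# Combined BNL-Fermilab experiment vs. the 2020 consensus theory, in units of 10^-11.
a_exp, err_exp = 116_592_061, 41
a_theory, err_theory = 116_591_810, 43
z = (a_exp - a_theory) / math.hypot(err_exp, err_theory)
print(f"significance ≈ {z:.1f} sigma")                   # ≈ 4.2 sigma

# Gaussian tail probabilities for the quoted odds.
for s in (z, 5.0):
    print(f"{s:.1f} sigma: 1 in {1 / (2 * norm.sf(s)):,.0f} (two-sided), "
          f"1 in {1 / norm.sf(s):,.0f} (one-sided)")
# ~4.2 sigma gives roughly 1 in 40,000 two-sided; 5 sigma gives the customary
# "1 in 3.5 million" as a one-sided figure.
```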

Or is it? Recall that we are in the unusual situation where the theoretical value itself carries a significant error. The contributions to the 2020 consensus value, as identified by Aida El-Khadra at the Fermilab webinar, are listed below (and summed as a cross-check in the sketch after the list):

  • 116,584,718.9(1) × 10⁻¹¹ quantum electrodynamic (QED) contribution
  • 153.6(1.0) × 10⁻¹¹ weak interaction contribution
  • 6845(40) × 10⁻¹¹ hadronic vacuum polarization (HVP) contribution
  • 92(18) × 10⁻¹¹ hadronic light-by-light (HLbL) contribution

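As a sanity check, the four contributions can be summed and their errors combined in quadrature. A minimal Python sketch (ignoring any correlations between the error terms) reproduces the consensus value quoted earlier:

```python
import math

# Contributions to the 2020 consensus a_mu, in units of 10^-11: (value, error).
contributions = {
    "QED":  (116_584_718.9, 0.1),
    "weak": (153.6, 1.0),
    "HVP":  (6845.0, 40.0),
    "HLbL": (92.0, 18.0),
}
total = sum(value for value, _ in contributions.values())
error = math.sqrt(sum(err ** 2 for _, err in contributions.values()))  # quadrature sum
print(f"a_mu(theory) ≈ {total:,.1f}({error:.0f}) x 10^-11")
# ≈ 116,591,809.5(44) x 10^-11, matching the quoted 116,591,810(43) up to rounding.
```
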
The hadronic vacuum polarization contribution has the largest error and the second-largest value, so improving this calculation is the most important step in determining whether the muon g-2 result really does contradict the Standard Model. Over the last 15 years, theoreticians have reduced the error in the computed value, yet it has remained within the range of previous calculations.

On the same day as the result announcement, Chao et al. published an improved lattice-QCD value of the hadronic light-by-light scattering contribution: 106.8(14.7) × 10⁻¹¹. Although this slightly increases the HLbL contribution, it is not enough to account for the discrepancy between theory and experiment.

More notably, Borsanyi, Fodor et al. published in Nature (again on April 7) a significant result on the leading hadronic contribution to the muon magnetic moment. While its error is larger than that of the dispersive techniques, it is by far the smallest error yet achieved by ab initio lattice QCD. Their value for the LO-HVP contribution is 7075(55) × 10⁻¹¹. If their value is used instead of the consensus, we get a theoretical muon g-2 that is close to the Brookhaven result, and in near-perfect agreement with the combined Brookhaven-Fermilab result! Factoring in the HLbL value of Chao et al., I get a revised theoretical value of:

116,592,055(57) × 10⁻¹¹

which is within 0.1 sigma of the experimental value (BNL-Fermilab).
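
The revised value follows from the same bookkeeping as above; here is the sketch, swapping in the lattice-QCD numbers for the two hadronic terms and again adding errors in quadrature (correlations ignored):

```python
import math

# QED and weak terms as before; LO-HVP from Borsanyi, Fodor et al.; HLbL from Chao et al.
# All values in units of 10^-11: (value, error).
terms = [(116_584_718.9, 0.1), (153.6, 1.0), (7075.0, 55.0), (106.8, 14.7)]
a_th = sum(value for value, _ in terms)
err_th = math.sqrt(sum(err ** 2 for _, err in terms))
print(f"revised theory ≈ {a_th:,.1f}({err_th:.0f}) x 10^-11")   # ≈ 116,592,054.3(57)

# Compare with the combined BNL-Fermilab measurement, 116,592,061(41) x 10^-11.
a_exp, err_exp = 116_592_061, 41
print(f"difference ≈ {(a_exp - a_th) / math.hypot(err_th, err_exp):.2f} sigma")  # ≈ 0.1
```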

This would seem almost too good to be true, were it not for the fact that Borsanyi, Fodor et al. certainly could not have known the Fermilab value in advance, as it was not known even to the investigators themselves! Nonetheless, we must be wary of arbitrarily selecting the computations that agree with experiment, even though these particular computations (Borsanyi; Chao) happen to be the best in their class, namely lattice-QCD calculations of the HVP and HLbL contributions. We would need grounds for preferring the improved lattice-QCD models over the dispersive techniques before deciding this issue. And while Borsanyi et al. agree with experiment, they now have a 2-sigma discrepancy with the other theoretical calculations, a disagreement that must itself be resolved.

If the Borsanyi, Fodor et al. (2021) result is confirmed, then there would be no new physics indicated by the muon g-2 experiment, and the Standard Model would have withstood its most precise test yet. This would strongly suggest that our inventory of the fundamental particles and interactions in nature is in fact complete.

Methodological Problems in Epidemiology

As much of the world looks to slowly ramp down COVID-19 isolation measures, it remains unclear whether this global social experiment should be considered wise or foolish. The prevalence of infections is < 1% in every country in the world except the microstate of San Marino. This is better than most models projected, and could be interpreted as a success for isolation, an overestimation of the virus's infectiousness, or a natural seasonal effect. The question is not resolvable insofar as it depends on the counterfactual of what would have happened had isolation not been imposed. As mentioned in the last post, spread to 60% of the population with millions of deaths was never realistic. That alarmist scenario relied on a naive application of epidemiological models that have poor predictive ability. Using an SEIR model with the estimated parameters for COVID-19, one indeed gets a grim picture. Yet if one were to insert the parameters for seasonal influenza (R0 = 1.3, avg. incubation period = 2 days, avg. duration of infectiousness = 5 days, mortality rate = 0.1%) into the same model, one would get over 40% infected and some 150,000 fatalities in the first year, far more than what occurs in reality. The reproduction rate of a disease depends not only on the duration of contagiousness, but also on the likelihood of infection per contact (secondary attack rate) and the contact rate. These last two are highly variable by region, social structure, and perhaps even individual physical susceptibility.
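
To illustrate the point, here is a minimal SEIR sketch with the flu-like parameters quoted above, using NumPy and SciPy for the integration. The population size (330 million, roughly the US), the initial seed of 1,000 infections, and the three-year horizon (long enough for the outbreak to run essentially to completion) are my own assumptions for the illustration.

```python
import numpy as np
from scipy.integrate import odeint

N, R0 = 330e6, 1.3          # assumed population; flu-like basic reproduction number
incub = 1 / 2.0             # 1 / mean incubation period (days)
recov = 1 / 5.0             # 1 / mean duration of infectiousness (days)
beta = R0 * recov           # transmission rate implied by R0
ifr = 0.001                 # 0.1% mortality among the infected

def seir(y, t):
    S, E, I, R = y
    new_infections = beta * S * I / N
    return [-new_infections, new_infections - incub * E, incub * E - recov * I, recov * I]

t = np.linspace(0, 3 * 365, 3 * 365 + 1)
S, E, I, R = odeint(seir, [N - 1000, 0, 1000, 0], t).T
print(f"eventual attack rate ≈ {R[-1] / N:.0%}, implied deaths ≈ {ifr * R[-1]:,.0f}")
# With these parameters roughly 42% are eventually infected, i.e., on the order of
# 140,000 deaths at a 0.1% fatality rate: the scale of the figures cited above,
# and far beyond what seasonal influenza actually produces.
```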

Conventional compartmental models have poor predictive ability even for seasonal influenza, as they do not account for factors other than herd immunity and isolation that can slow the spread of disease. A Los Alamos study was able to create a model with parameters fit to past seasonal data, which should hopefully have predictive power for future seasons. Such an approach, however, is useless for novel pandemics. As the authors note, these models are all highly sensitive to the choice of prior parameters, which we cannot know until after the epidemic has run its course.

The problem of predictive modeling is exacerbated by the poor quality of public health data, which is often woefully incomplete or inconsistent, with categorizations driven by policy or other unscientific criteria. Public health systems do a better job of recording the number of infected than of recording those exposed or recovered, and even the former is limited to those who seek medical treatment, with diagnoses often made by symptoms rather than definitive tests. Cause of death on death certificates is driven by bureaucratically imposed standards. Even in scientific studies, researchers classify subjects according to one or another cause of death and treat comorbidities as risk factors increasing the chance of death by the primary cause. It would be more rigorous to acknowledge that there is not always a single cause of death, and instead to treat comorbidities as contributing causes via factor analysis. This would let us estimate the mortality contribution of each disease at the population level, though it would remain generally impossible to assign a single “cause of death” to each individual.

Some parameters of COVID-19 are fairly well known at this point. The infected are contagious from 48 hours before showing symptoms to 3 days afterward. The secondary attack rate is surprisingly low, only 0.45% (compared with 5%-15% for seasonal flu). Thus the relatively high R0 is attributable not so much to high contagiousness as to the longer duration of contagiousness, especially while presymptomatic, so that infected people have more contacts while contagious than seasonal flu victims would. The 2009-10 H1N1 pandemic, by contrast, had a secondary attack rate of 14.5%, yet it infected 61 million out of 307 million in the US, just under 20%. It is implausible that COVID-19, with its much lower attack rate, could ever attain a comparable prevalence.

Why, then, are the death statistics so much higher than would be suggested by the low infectiousness and low prevalence? On the one hand, many jurisdictions, notably New York, have decided to include so-called “probable” COVID-19 related deaths, and most public health data includes no serious attempt to account for comorbidities as causal factors, though they occur in well over 90% of fatal cases. On the other hand, the increase in deaths versus last year in many areas greatly exceeds even this high count, so it could be argued we are undercounting COVID-19 fatalities. The problem here is that many of the excess deaths could be caused not by COVID-19 per se, but by the overloading of medical facilities, resulting in less than immediate critical care. Some of these excess deaths might even be caused by the quarantine measures, as diagnostic and non-emergency medical visits have been cancelled.

It is not uncommon for death counts to be revised upward or downward by a large factor in retrospect. A year after the H1N1 pandemic, a study suggested that the deaths attributed to H1N1 ought to be revised upward by a factor of 15. Whether H1N1 deaths were undercounted or COVID-19 deaths are overcounted remains to be seen, and is unlikely ever to be resolved, given the problems of data and methodology we have touched upon.

The truly frightening thing is that major public health policy decisions are made on woefully inadequate data and modeling, which will likely be radically revised only after each pandemic passes and the moment for decision-making is gone. Public health officials will always err on the side of caution, but as we have noted in the previous post, this is not practicable for an indefinite period of time. At some point we must be willing to poke our heads out of our caves and assume the risk of living.

After all, as recently as the early twentieth century, people went about their business even while living under the threat of smallpox, polio, and measles, each of which had higher infectiousness or fatality rates than the current pandemic. By objective criteria, there is nothing exceptional about COVID-19 as an infectious disease. What is exceptional is the post-WWII belief that life should be free of deadly risk, enabled by the technological means to perform many service-economy jobs remotely.