Statistical confounding: why it matters

Preamble

This article is a brief introduction to statistical confounding. My hope is that, having read it, you'll be more on the lookout for it, and interested in learning a bit more about it. 

Statistical confounding, leading to errors in data-based decision-making, is a problem that has important consequences for public policy-making. This is always true, but it seems especially true in 2020-2021. Consider these questions:

1. Is lockdown the best policy to reduce COVID death rates, or would universal masking work just as well?

2. Would outlawing assault rifles lower death rates due to violence in the US? 

3. Would outlawing hate speech reduce the incidence of crime against minorities in the US?  

4. What effect would shutting down Trump's access to social media have on his more extreme supporters?

If you're making data-based decisions (or deciding whether to support them), it's important to be aware that confounding happens. For people practicing statistics, including scientists and analysts, it's important to understand how to prevent confounding from influencing your inferences, if possible. 

Identifying and preventing confounding is a topic I've rarely seen covered -- not even in my multivariate linear regression classes. It's explained beautifully in Chapter 5 of Richard McElreath's book "Statistical Rethinking", which I highly recommend if you're up for a major investment of time and thought.

This topic is the first in a cluster about statistical inference from my slipbox (what's a slipbox?).

Important note: I use the 'masking' example below as a case where confounding might hypothetically occur. I am not suggesting for a moment that masks don't fight COVID transmission. I am a huge fan of masking! Even though I wear glasses and am constantly fogged up.

---

Here's the entire 'statistical confounding' series:

- Part 1: Statistical confounding: why it matters (this post): on the many ways that confounding affects statistical analyses 

- Part 2: Simpson's Paradox: extreme statistical confounding: understanding how statistical confounding can cause you to draw exactly the wrong conclusion

- Part 3: Linear regression is trickier than you think: a discussion of multivariate linear regression models

- Part 4: A gentle introduction to causal diagrams: a causal analysis of fake data relating COVID-19 incidence to wearing protective goggles

- Part 5: How to eliminate confounding in multivariate regression: how to do a causal analysis to eliminate confounding in your regression analyses   

- Part 6: A simple example of omitted variable bias: an example of statistical confounding that can't be fixed, using only 4 variables.

Statistical Confounding

Suppose you've got an outcome you desire: for example, you want COVID cases per capita in your state to go down. Give COVID cases per capita a name: call it $Y$.

You've also got another variable, $X$, that you believe has an effect on $Y$. Perhaps $X$ is the fraction of people wearing masks whenever they go out in public: $X=0$ means no one is wearing masks; $X=1$ means everyone is.

You believe, based on numbers collected in a lot of other locations, that the higher the value of $X$ is, the lower the value of $Y$ is. After a political fight, you might be able to require everyone to mask up by passing a strict public mask ordinance: in this case, you would be forcing $X$ to have the value 1.


In order to determine whether to do this, you set up an experiment: a clinical trial for masks. You start with a representative group of people, assign half of them at random to wear masks whenever they go out, and assign the other half not to wear masks. The 'at-random' piece is important here, as it is in clinical trials. Setting $X$ forcibly to a specific value, chosen at random, can be thought of as applying an operator to $X$: call it the "do-operator".

The do-operator is routinely applied in experimental science. For example, in a vaccine clinical trial, people aren't allowed to choose whether they get the placebo or the vaccine: one of the two is chosen at random. This lets you assess the true causal effect of $X$ on $Y$.

If your experiment shows that mask-wearing is effective at lowering the per capita COVID case rate, you can then support a mask-wearing ordinance, with confidence that the ordinance will have the desired effect 'in the wild'. 

Statistical confounding occurs when the apparent relationship $p(Y|X)$ between the value of $X$ and the value of $Y$, observed in the wild rather than under experiment, differs from the true causal effect of $X$ on $Y$, $p(Y|do(X))$.

To put this in the context of masking, suppose we've observed in the wild that people who wear their masks when they go outside the house have lower COVID case rates per capita than people who don't. If we enforce a mask ordinance on the basis of this observation, it's possible that we might find that the law has no effect on the COVID case rate.  

This might happen because of the presence of other variables that influence both $X$ and the outcome $Y$, called confounder variables. In the case of the masking question, it may be that an important confounder is whether a person is concerned about catching COVID. If a person is concerned, it may be that in addition to wearing masks when they go out, they are also avoiding close contact with people outside their household. And perhaps that avoidance is the true cause of the reduction in the COVID rate among people who wear masks.


If it is the case that avoiding in-person meetings is the real cause of the lowered case rates, rather than wearing masks, then enforcing a masking law will not have the desired effect of reducing the case rate. And you definitely want to avoid passing an ineffective ordinance, for obvious reasons.
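To make this concrete, here's a minimal simulation sketch of the scenario above (all the probabilities are made up purely for illustration): "concern about COVID" drives both mask-wearing and contact avoidance, but only contact avoidance affects infection risk. The observed association $p(Y|X)$ makes masks look protective, while assigning masks at random, $p(Y|do(X))$, shows no effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounder: whether a person is concerned about catching COVID.
concerned = rng.random(n) < 0.5

# In the wild: concerned people are more likely to mask AND to avoid contact.
wears_mask = rng.random(n) < np.where(concerned, 0.9, 0.2)
avoids_contact = rng.random(n) < np.where(concerned, 0.8, 0.1)

# In this toy world, only contact avoidance changes infection risk.
infected = rng.random(n) < np.where(avoids_contact, 0.02, 0.10)

# Observational comparison, p(Y | X): masks look protective.
print("p(infected | mask)        =", infected[wears_mask].mean())
print("p(infected | no mask)     =", infected[~wears_mask].mean())

# Experiment, p(Y | do(X)): masks assigned at random, independent of concern.
mask_assigned = rng.random(n) < 0.5
infected_expt = rng.random(n) < np.where(avoids_contact, 0.02, 0.10)
print("p(infected | do(mask))    =", infected_expt[mask_assigned].mean())
print("p(infected | do(no mask)) =", infected_expt[~mask_assigned].mean())
```

In the observational comparison the two rates differ substantially; under the do-operator they come out essentially equal, because randomization severs the link between mask-wearing and the confounder.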

Next week, I'll talk about Simpson's Paradox, an extreme example of statistical confounding.

COVID vaccine efficacy

Preamble

This note started out as a reminder to myself about the definition of relative risk and vaccine efficacy, and morphed into a perusal of the FDA briefs on the Pfizer, Moderna, and J&J vaccines (links to all 3 briefs are at the bottom of the article).
 
It's really worth looking at the actual numbers of COVID cases among people in the studies -- they are surprisingly low. In some cases, they are so low that they make inference about vaccine efficacy hard. 
 
This is my first close look at the outcome of a clinical study. You have to make a lot of semi-arbitrary decisions, it seems, in order to design a clinical study. Even something as simple as a difference of 5 years in your cutoff for the 'older' age group can have an effect on inference. The 3 teams made all sorts of different decisions that make it hard to compare their outcomes head-to-head.

Above all, while writing this note, I wished many times that I could have gotten my hands on the actual data. I guess the current age of copious open data has spoiled me. 

Disclaimer: I do not have medical training, and nothing written here should be taken as medical advice. 

 

Definition of efficacy

 
Vaccine efficacy is defined as:

$$1-\text{relative risk} = 1-\frac{\text{Prob(outcome|treatment)}}{\text{Prob(outcome|no treatment)}}.$$

If the experiment has roughly equal treatment and control groups (as all the vaccine clinical trials did), then the probabilities can be replaced by counts:

$$1-\text{relative risk} \approx 1-\frac{\text{Count(outcome|treatment)}}{\text{Count(outcome|no treatment)}}.$$

So 95% effectiveness means that

$$\frac{\text{Count(outcome|treatment)}}{\text{Count(outcome|no treatment)}}\approx 1 - 0.95 = \frac{1}{20};$$

that is, for every 1 event in the vaccinated group, there were 20 in the unvaccinated group. 
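As a quick sanity check, here's a tiny sketch of that count-based approximation (just the formula above wrapped in a function, nothing vaccine-specific):

```python
def efficacy(cases_vaccine: int, cases_placebo: int) -> float:
    """Vaccine efficacy = 1 - relative risk, approximated with raw case counts.

    Only valid when the vaccine and placebo arms are roughly the same size,
    so that counts can stand in for probabilities."""
    return 1.0 - cases_vaccine / cases_placebo

# 95% efficacy <=> roughly 1 case in the vaccinated group per 20 in the placebo group
print(efficacy(1, 20))   # 0.95
```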

What was the measured event (aka Primary Endpoint) used to measure vaccine efficacy?

 
TL;DR:  Patients needed to have more symptoms in order to satisfy the J&J or Moderna primary endpoints than to satisfy the Pfizer primary endpoint. All confirmed cases in all 3 clinical trials required positive PCR tests.

For Moderna: first occurrence of confirmed COVID-19 (as defined by an adjudication committee using a formal protocol) starting 14 days after the second dose. Confirmed COVID-19 is defined on page 13 of the FDA brief, and requires at least 2 moderate COVID symptoms (e.g., fever, sore throat, cough, loss of taste or smell) or at least 1 severe respiratory symptom, as well as a positive PCR test.

Moderna primary endpoint results.


For Pfizer: Confirmed COVID-19 beginning 7 days after the second dose. Confirmed cases had at least one symptom from the usual list of COVID symptoms, and a positive PCR test for COVID within 4 days of the symptom.

Pfizer primary endpoint results.

For J&J: 'molecularly confirmed' (by a PCR test) moderate-to-severe/critical COVID infection, with onset at least 14 days and at least 28 days post-vaccination. They also studied the rates of severe/critical COVID, which required at least one of: signs of severe respiratory illness, organ failure, respiratory failure, shock, ICU admission, or death. Definitions of the COVID illness levels are on page 15 of the FDA brief, and are similar to the Moderna definition of confirmed COVID-19.


 

Thoughts about the results

 
Moderna and Pfizer both reported very high efficacies of about 95%. These were point estimates, i.e., single values summarizing the measured efficacy.

But the confidence interval (CI) is the thing to look at for each result, not just the point estimate. The CI tells you not only where the point estimate for efficacy lies, but also how certain the efficacy measurement is. The CI for efficacy always contains its point estimate, but the wider the CI, the less confidence you can have in the point estimate.

 
Moderna
 
 
The vaccine was tested with roughly equal control and vaccine arms. There were about 21,600 participants in each arm.

The 95% CI for people aged 18 to 64 is (90.6%, 97.9%), which is very high.

The point estimate of efficacy for people aged 65 and up was a bit lower, at 86.4%, with a 95% confidence interval of (61.4%, 95.5%). The CI is wider because only about 7,000 people over 65 were enrolled in the clinical trial, and there were only 33 COVID cases among that group (as opposed to 163 in the younger group); the extra width reflects increased uncertainty about the true efficacy of the vaccine in this group.

If the cutoff for the older age group were lower, there would have been more cases in that group, and more confidence in the result. It would have been nice to have access to the raw clinical trial data.
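One rough way to see how the small case count drives the width of the interval is to treat the vaccine arm's share of the cases as a binomial proportion (reasonable when the arms are about the same size) and wrap a Clopper-Pearson interval around it. To be clear, this is a sketch, not the analysis Moderna actually performed, and the case splits below are hypothetical round numbers chosen only so the totals match the 33 and 163 cases mentioned above:

```python
from scipy.stats import beta

def efficacy_ci(cases_vaccine, cases_placebo, conf=0.95):
    """Rough CI for vaccine efficacy from case counts, assuming equal-size arms.

    The vaccine arm's share of all cases is treated as a binomial proportion p,
    given a Clopper-Pearson interval, then mapped to efficacy = 1 - p/(1-p).
    A sketch only -- not the method the trial statisticians actually used.
    """
    x = cases_vaccine
    n = cases_vaccine + cases_placebo
    a = (1 - conf) / 2
    p_lo = 0.0 if x == 0 else beta.ppf(a, x, n - x + 1)
    p_hi = 1.0 if x == n else beta.ppf(1 - a, x + 1, n - x)

    def to_efficacy(p):
        return 1 - p / (1 - p) if p < 1 else float("-inf")

    # Efficacy falls as p rises, so the bounds swap.
    return to_efficacy(p_hi), to_efficacy(p_lo)

# Hypothetical case splits whose totals match the 33 and 163 cases above:
print(efficacy_ci(4, 29))    #  33 cases in total -> wide interval
print(efficacy_ci(11, 152))  # 163 cases in total -> much narrower interval
```

With only 33 cases, the interval spans tens of percentage points; with 163 cases it is several times narrower, even though the point estimates are in the same ballpark.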

 
Pfizer
 
 
The vaccine was tested with roughly equal control and vaccine arms. There were about 18,200 people in each arm.

Pfizer divided its age groups at 55 years rather than 65. This made the age groups a bit more balanced and resulted in more cases in the 55+ age group. Thus the 95% CI for the older age group is narrower than Moderna's, at (80.6%, 98.8%). The results for the younger group are even better.

 
Johnson & Johnson
 
 
J&J had two endpoints, one corresponding to moderate illness, and one to severe and critical illness. J&J has emphasized the efficacy of their vaccine against their endpoint of severe or critical COVID-19, so that's where I focused my attention.

The J&J study had some issues in its design that make it hard to draw conclusions. Because severe COVID is rarer, there were fewer cases of it in the final analysis, which means increased uncertainty in the conclusions. They also ran studies across several countries with wildly different base rates of COVID and different dominant COVID-19 strains. This makes me think nervously about aggregation confounding (Simpson's paradox) when all the results are thrown into one bucket. Again, access to the raw data would have been nice.

J&J's point estimate of 85% efficacy in the US against severe COVID, which you hear about all the time, is of questionable value, because the 95% CI was (-9%, 99.7%)! That's because there were only 8 severe COVID cases in the US arm of the trial -- 7 in the placebo group and 1 in the vaccine group. That's not enough to base any conclusions on. The same problem with a low total case count occurred in Brazil.

Probably the best estimate of J&J's efficacy against severe COVID came from the South African arm of the study, where the number of severe cases was largest (26 severe cases across both arms after 28 days post-vaccination -- 22 in the placebo group and 4 in the vaccinated group). The point estimate there was 81.7%, and the 95% CI was (46.2%, 95.4%). Remember that the tough South African COVID variant was spreading during this study, so that's pretty good news as to J&J's efficacy against that variant.
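Those point estimates line up with the case counts just quoted and the equal-arms count approximation from the efficacy section above; the small discrepancies presumably come from the arms not being exactly the same size.

```python
# Severe-COVID case counts quoted above: 1 - (vaccine cases / placebo cases)
print(1 - 1 / 7)    # US:           0.857..., reported point estimate ~85%
print(1 - 4 / 22)   # South Africa: 0.818..., reported point estimate 81.7%
```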

If you throw all the people in those 3 locations into one bucket, you get this table describing the aggregate result for severe COVID:

J&J aggregate results across all sites for severe COVID

I have two thoughts about this. One is that I'm suspicious of aggregation effects, given that the studies in the 3 countries were so different. The second is that the evidence for the effectiveness of J&J's vaccine is significantly stronger for onset 28 days post-vaccination than for 14 days post-vaccination: the jump in efficacy against severe COVID in the younger age group is more than 10 percentage points.

So, although I've read that you can consider yourself officially "J&J-immunized" after 14 days post-vaccination -- I intend to wait another 2 weeks after that, till the 28-day mark, before really relaxing the rules. 

 
References
 
 

Launching "From my Slipbox"

 

Niklas Luhmann's original Zettelkasten

This post is the first in a series I'm launching on statistics, machine learning, productivity, and related interests: "From my slipbox".

A slipbox ("Zettelkasten" in German, translating to card-box) is a personal written record of ideas that you've gotten from things you've read, seen, or heard. Each Zettel is a card containing a writeup of a single concept that you've thoroughly digested and translated into your own words. The cards are also annotated with the addresses of other, related ideas captured in your slip-box, allowing you to follow the threads of ideas.

The Zettelkasten idea is credited to mid-20th-century German sociologist Niklas Luhmann, who spent decades building a physical slip-box in order to flesh out his ideas on a theory of society. It was constructed like a library card catalog, with ordered unique IDs for every card/idea (see the photo above -- it actually was housed in a library card catalog, apparently). 

These days, a slip-box is more likely than not to be digital, and there is specialized software to support it. The Archive seems to be especially popular among ZK aficionados, but I just noticed that it is only supported on macOS. My own choice of tool is Obsidian.md, which runs on all major platforms (including Linux!) and supports math markdown. Both tools use local markdown files, so your data is not stored in a proprietary format (links to both tools are below). I store my ZK in a private GitHub repository for safety and versioning support.

There are plenty of people who build their ZK using physical cards and boxes, just as Luhmann did, simply for the pleasure of it. I understand that pleasure -- I think by writing longhand -- but there are huge benefits to hyperlinking and digital backups.

I took up Zetteling very recently, in January 2021. I've always written copious longhand notes about technical things I've read and digested, some of which have become the 'writeups' I've posted in the past on topics like Kalman filters, the backpropagation algorithm, and design of experiments. But my longhand notes sometimes get lost or accidentally thrown out, and the effort required to get from my handwritten notes to material worth publishing is sometimes a deterrent.

I got excited about making a Zettelkasten for the following reasons:

1. It encourages my writing habit

2. It lets me put my thoughts into semi-formal writing immediately, rather than waiting until I have a large writing job to do

3. It fights the brain leakage problem, wherein I quickly forget the details of what I've learned

4. Luhmann claimed that new ideas emerged spontaneously from his Zettelkasten, simply because of its massive size and interconnectedness -- sort of like a huge neural network developing consciousness (I'd like to see that happen!)

5. The promise of more easily generating quality written content from existing Zettels is appealing

6. It encourages you to spend time 'curating' your slip-box -- rereading your ideas, making new connections, etc. -- which aids my memory, appeals to my love of organization, and makes me feel productive even when I'm too tired to actually write.

A little over a month after getting started, I've written around 150 Zettels on topics such as neural nets, productivity, project planning, variational calculus, causality, statistical modeling, and on Zetteling itself. Each one is a sort of soundbite of some story or idea I found interesting.

Every Friday, I'll be posting a Zettel from my Zettelkasten -- often technical, but sometimes relating to consulting, productivity, or other topics. 

I am hoping that this series results in conversations, and occasional 'super-Zetteling' -- making new connections to interesting content from minds beyond my own.

Some Zettelkasten resources:

Trouble that you can't fix: omitted variable bias

credit: SkipsterUK (CC BY-NC-ND 2.0)

Preamble

In the previous post in this series, I explained how to use causal diagram...