Linear regression is trickier than you think

Preamble

In my last two posts, I talked about statistical confounding: why it matters in statistics, and what it looks like when it gets really extreme (Simpson's Paradox).

In my next few blog posts, I want to talk about some tricks for controlling statistical confounding in the context of multivariate linear regression, which is about the simplest kind of model that can be used to relate more than two variables. Although I've taken a full load of statistics classes, including an entire course devoted to multivariate linear regression, I never learned how to choose the right variables for a given analysis until I came across the topic in Richard McElreath's book 'Statistical Rethinking'.

In short, it's likely to be something that most machine learning and data science practitioners wouldn't ordinarily pick up in a class on regression, and it's useful and kind of fun. 

Controlling confounding requires drawing hypothetical diagrams of how your variables might relate causally to each other, doing some checks to determine whether the data conflict with the hypotheses, and then using the diagrams to derive sets of variables to exclude and include. It's a nice interplay between high level thinking about causality, and mechanical variable selection. 

This week's post is an introduction where I'll set the stage a bit.

Multivariate linear regression

Multivariate linear regressions are the first type of frequentist model you encounter as a statistician. They are used to relate an outcome variable $Y$ in a data set to any number of covariates $X_i$ that accompany it. For example, the height that a tree grows this year, $H$, might be associated with several continuous covariates, such as the number of hours of sunlight it receives per day, $S$, the amount of water it receives per day, $W$, and the iron content of the soil around it, $I$. These variables may in turn be associated with each other; for example, if the tree is not artificially watered, then $S$ and $W$ may be negatively correlated, since the sun doesn't usually shine when it's raining.

The model specification below is for a Bayesian linear regression model with $n$ covariates, and no higher-order terms. The distribution of $Y$ is normal, with a mean that linearly depends on the covariates $X_i$, and a variance parameter. All the parameters have priors, which the model specifies. Models like this are usually fitted using methods that sample the posterior distribution of the parameters given the observed data.  The results of Bayesian model fitting are usually very similar to frequentist model fitting results when there is sufficient data for analysis.

$
\begin{aligned}
Y &\sim N(\mu, \sigma^2) \\
\mu &= \alpha + \beta_1 X_1 + \dots + \beta_n X_n \\
\alpha &\sim N(1, 0.5) \\
\beta_j &\sim N(0, 0.2) \text{ for } j = 1, \dots, n \\
\sigma &\sim \text{Exponential}(1)
\end{aligned}
$

The fact that the scale of the modeled parameter $\mu$ is the same as that of $Y$, and the absence of higher-order terms (such as $X_1 X_3$), make it easy to interpret each slope parameter: $\beta_j$ is the expected change in the value of the outcome variable when the covariate $X_j$ changes by one unit. The assumption that this expected change is always the same, independent of the values of the $n$ covariates, is built right into this model.
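To make this concrete, here's a minimal sketch of the model above in Python using PyMC, fitted to simulated data. The data, the number of covariates, and the 'true' coefficients are all made up for illustration, and I'm reading the second parameter of each normal prior as a standard deviation.

```python
import numpy as np
import pymc as pm

# Simulated data (made-up numbers): n_obs observations of n_cov covariates.
rng = np.random.default_rng(42)
n_obs, n_cov = 200, 3
X = rng.normal(size=(n_obs, n_cov))
true_beta = np.array([0.3, -0.1, 0.2])  # hypothetical "true" slopes
y = 1.0 + X @ true_beta + rng.normal(scale=0.5, size=n_obs)

with pm.Model():
    # Priors, matching the specification above.
    alpha = pm.Normal("alpha", mu=1.0, sigma=0.5)
    beta = pm.Normal("beta", mu=0.0, sigma=0.2, shape=n_cov)
    sigma = pm.Exponential("sigma", 1.0)

    # Linear model for the mean, plus the normal likelihood.
    mu = alpha + pm.math.dot(X, beta)
    pm.Normal("Y", mu=mu, sigma=sigma, observed=y)

    # Sample the posterior of alpha, the betas, and sigma.
    idata = pm.sample(1000, tune=1000, chains=2)
```

The posterior samples in `idata` give means and intervals for each $\beta_j$, which play the role of the slope estimates discussed above.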

This is about as simple a statistical model as you can have for data sets with many variables. But when I was studying multivariate regression, the covariates used for modeling were often chosen without much explanation. Sometimes we would use all the variables available, and sometimes we would use only a subset of them. It wasn't until later that I learned how to choose which variables to include in a multivariate regression model. The choice depends on what you're trying to study, and on the causal relationships among all the variables.

And, of course, you don't know the causal relationships among the variables -- often, this is what you're trying to figure out by doing linear regression -- so you need to consider several possible diagrams of causal relationships.

The ultimate goal is to get statistical models that clearly answer your questions, and don't 'lie'. Actually, statistical models never lie, but they can mislead. Statistical confounding occurs when the apparent relationship between the value of a covariate $X$ and the outcome variable $Y$, as measured by a model, differs from the true causal effect of $X$ on $Y$. The effects of confounding can be so extreme that they result in Simpson's Paradox reversals, where the apparent association between variables is the opposite of the causal association.

It takes some know-how to eliminate confounding. Sometimes you have to be sure to include a variable in a multivariate regression in order to get an unconfounded model; sometimes including a variable will *cause* confounding.
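Here's a small simulation sketch of what that looks like (the causal structure and effect sizes are made up): a confounder $Z$ drives both $X$ and $Y$, the true causal effect of $X$ on $Y$ is 1.0, and the estimated slope for $X$ depends entirely on whether $Z$ is included in the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical causal structure: Z -> X, Z -> Y, and X -> Y with a true effect of 1.0.
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 1.0 * X + 2.0 * Z + rng.normal(size=n)

# Regression of Y on X alone: the back door through Z is open, so the slope is biased.
design_x = np.column_stack([np.ones(n), X])
coef_x, *_ = np.linalg.lstsq(design_x, Y, rcond=None)
print("slope of X, ignoring Z:     ", coef_x[1])    # well above 1.0

# Regression of Y on X and Z: including the confounder recovers the causal effect.
design_xz = np.column_stack([np.ones(n), X, Z])
coef_xz, *_ = np.linalg.lstsq(design_xz, Y, rcond=None)
print("slope of X, adjusting for Z:", coef_xz[1])   # close to 1.0
```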

Sometimes, nothing you can do will prevent confounding, because of an unobserved variable. But here is what you can do:

1. You can hypothesize one or more causal diagrams that relate the variables under study. You can consider some that include variables you may not have measured, in order to anticipate problems.
2. You might be able to discard some of these hypotheses, if the implied conditional independence relationships between the variables aren't supported by the data (see the sketch after this list).
3. You can learn how to choose what variables to include and exclude, on the basis of the remaining hypothetical causal diagrams, to get multivariate regressions that aren't confounded.
4. You can also determine when confounding can't be prevented, because you would need to include a variable that isn't available. 
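For step 2, here's a minimal sketch of one such check, using a partial correlation as a simple stand-in for a formal conditional independence test. The chain structure $X \rightarrow Z \rightarrow Y$ and all the coefficients are made up for illustration; the point is that the marginal correlation between $X$ and $Y$ is large, but the partial correlation given $Z$ is close to zero, which is consistent with the chain hypothesis.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing each on z (with an intercept)."""
    design = np.column_stack([np.ones(len(z)), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Simulated data from a chain X -> Z -> Y, which implies X is independent of Y given Z.
rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=n)
Z = 0.7 * X + rng.normal(size=n)
Y = 0.9 * Z + rng.normal(size=n)

print("corr(X, Y):           ", np.corrcoef(X, Y)[0, 1])   # clearly nonzero
print("partial corr(X, Y | Z):", partial_corr(X, Y, Z))    # approximately zero
```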

In succeeding posts, I'll show you how to go about doing this yourself.

A new productivity trick



This past week, instead of posting an article from my Zettelkasten, I wrote an article for LinkedIn on a new self-management trick I've been using. I've found that dissociating my 'worker' persona from my 'manager' persona -- literally, pretending that they are different people -- has been a useful aid for getting work planned and done.

I certainly don't want to give the impression that I'm any master of productivity and time management. I'm still looking for the perfect regimen that I can stick to. I've tried quite a few of them, and kept a couple of them; I'm a fan of Getting Things Done and Time Blocking, and I use both of those approaches when the mood to get my ducks in a row comes upon me. I've concluded that the best I can do is to have an arsenal of productivity tricks that I can deploy when I'm feeling uninspired (including the 'split-personality' trick), and to establish firm habits around scheduled work times. I can be found with my butt in my seat at my desk at the usual hours during every work day. That is the only trick I've ever found that really works consistently. 


Simpson's Paradox: extreme statistical confounding

 

Preamble

 

Simpson's Paradox is an extreme example of the effects of statistical confounding, which I discussed in last week's blog post, "Statistical Confounding: why it matters".

Simpson's Paradox can occur when an apparent association between two variables $X$ and $Y$ is affected by the presence of a confounding variable, $Z$. In Simpson's Paradox, the confounding is so extreme that the association between $X$ and $Y$ actually disappears or reverses itself after conditioning on the confounder $Z$.

Simpson's Paradox can occur in count data or in continuous data. In this post, I'll talk about how to visualize Simpson's Paradox for count data, and how to understand it as an example of statistical confounding.

It isn't actually a paradox; it makes complete sense, once you understand what's going on. It's just that it's not what our intuition tells us should happen. And whether it's 'wrong' depends on what goal you're shooting for. In the example below, if you want to make a choice for yourself based on understanding the relative effectiveness of the two treatments, you'd be best off choosing Treatment A. But if your goal is prediction -- who is likely to do better, a random patient who gets Treatment A or Treatment B? -- you're best off with Treatment B.
 
If that confuses you, keep reading.  
 

A famous example: kidney stone treatments

Here is a famous example of Simpson's Paradox occurring in nature, in a medical study comparing the efficacy of kidney stone treatments (here's a link to the original study).

In this example, we are comparing two treatments for kidney stones. The data show that, over all patients, Treatment B is successful in 83% of cases, and Treatment A is successful in only 78% of cases.

However, if we consider only patients with large kidney stones, then Treatment A is successful in 73% of cases, whereas Treatment B is successful in only 69% of cases.

And if we consider only patients with small kidney stones, then Treatment A is successful in 93% of cases, whereas Treatment B is successful in only 87% of cases.

Suppose you're a kidney stone patient. Which treatment would you prefer? Since I'd presumably have either a small kidney stone or a large one, and Treatment A works better for either one, I'd prefer Treatment A. But looking at all patients overall, this result says Treatment B is better. Does this mean that if I don't know what size kidney stone I have, I should prefer Treatment B? (No). Why is this happening?

This is happening because the small-vs-large-kidney stone factor is a confounding variable, as discussed in this post on statistical confounding from last week.

The diagram below shows the causal relationships among three variables applying to every kidney stone patient. Either Treatment A or B is selected for the patient. The treatment is either considered successful, or it isn't. And the confounding variable is in red: either the patient has a large kidney stone, or they do not.


The size of the kidney stone, reasonably, has an impact on how successful the treatment is; similarly, we're assuming the treatment choice affects the success of the treatment. 

But here's the confounding factor: the stone size, in red, also affects the choice of treatment for the patient. Treatment A is more invasive (it's surgical), and so it's more likely than Treatment B to be applied to severe cases with larger kidney stones. Conversely, Treatment B is more likely to be applied to smaller kidney stone cases, which are lower risk to begin with. Since the size of the kidney stone is influencing the choice of Treatments A vs. B, the causal diagram has an arrow from the size variable to the Treatment variable. And this is the 'back door', from the stone size variable into the Treatment choice variable, that is causing the confounding.

To see what is actually happening, look at the total numbers of patients in each of the four kidney stone subgroups:

  • Treatment A, large stones: 263
  • Treatment A, small stones: 87
  • Treatment B, large stones: 80
  • Treatment B, small stones: 270

Clearly the size of the stone is impacting the treatment choice. 

But stone size is also a huge predictor for treatment success: the larger the stone size, the harder it is for any treatment to succeed. So a higher proportion of small stone, Treatment B cases succeed than of large stone, Treatment A cases. And that's what's causing Simpson's Paradox.
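To make the arithmetic concrete, here's a short sketch that recomputes the per-group and overall success rates from the group sizes above. The success counts (81, 192, 234, and 55) are the ones commonly reported for the original study, so treat those particular numbers as quoted rather than derived here.

```python
# (successes, patients) for each subgroup; patient counts are from the list above,
# success counts are as commonly reported for the original study.
groups = {
    ("A", "large"): (192, 263),
    ("A", "small"): (81, 87),
    ("B", "large"): (55, 80),
    ("B", "small"): (234, 270),
}

# Within each stone size, Treatment A has the higher success rate...
for (treatment, size), (s, t) in groups.items():
    print(f"Treatment {treatment}, {size} stones: {s}/{t} = {s / t:.0%}")

# ...but aggregating over stone size reverses the comparison.
for treatment in ("A", "B"):
    total_s = sum(s for (tr, _), (s, _) in groups.items() if tr == treatment)
    total_t = sum(t for (tr, _), (_, t) in groups.items() if tr == treatment)
    print(f"Treatment {treatment} overall: {total_s}/{total_t} = {total_s / total_t:.0%}")
```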

Visualizing Simpson's Paradox for count data

 
Suppose we're running an experiment to assess the effect of a variable $X$ on a 'coin flip' variable $Y$. Each time we flip the coin, we'll call that a trial, $T$. Each time $Y$ comes up heads, we'll call that a success, $S$. The graph above has $T$ on the x axis, and $S$ on the y axis. Many experiments are modeled this way. In the kidney stone example, the variable $X$ refers to the choice of treatment, and the variable $Y$ refers to whether it had a successful outcome.

During data analysis, we'll break down the total sample of kidney stone patients into subgroups by whether they got Treatment A or B. We can break it down further in any way we choose; for example, we can subset the data by age, by gender, or by both at once. Or we can further subset the patients based on whether they had a large kidney stone. This subsetting will result in groups which we'll denote by $g$. 

We can visualize subgroup $g$'s experimental results by placing it in the graph as a vector $\vec{g}$ from the origin to the point $(T_g, S_g)$, where $T_g$ is the number of patients in the group, and $S_g$ is the number of patients in the group with successful outcomes.
 
The slope of $\vec{g}$ is $S_g/T_g$, so the slopes of the vectors indicate the success rate within each subgroup (note that these slopes can never be larger than 1, since you can't have more successes than trials). When you compare the success rates between groups in an experiment, you only look at the slopes of these vectors -- the sizes of the subgroups are not visible to you. But it's the disparities in subgroup sizes that cause Simpson's Paradox to occur.
 
The lengths of the vectors are a rough indicator of how many patients there were within each subgroup; the larger the number of patients in the group, $T_g$, the longer $\vec{g}$ will be.

In the diagram above, we see that the vectors for the subgroups Treatment A with small stones, and Treatment B with large stones, are much shorter than the other two (because there were fewer patients in those subgroups). But their lengths do not matter when considering the per-group success rates $S_g/T_g$; all that matters is their slopes. Treatment A's slope for small stones is higher than Treatment B's slope for small stones, and the same holds for the large-stone groups. So within each subgroup, Treatment A is more successful.

But if we restrict our attention to the two longest vectors in the middle, we can see that the Treatment B, small stones vector has a higher slope than the Treatment A, large stones vector. This is mainly due to the fact that people with large kidney stones generally have worse outcomes, regardless of how they are treated.

In the diagram below, we are looking at the resulting vectors when all the Treatment A and B patients are grouped together, regardless of stone size. 

We get the vector corresponding to the combined group in Treatment A by summing the two green Treatment A vectors. Similarly, we sum the two black Treatment B vectors to get the aggregated Treatment B vector. When we do this, we can see that the Treatment B vector has the higher slope. 


 
This happens because, when we add the green vectors together to get the total Treatment A vector, the result is only slightly different from the much longer Treatment A, large stones group vector. Similarly, the summed vector for Treatment B is only slightly different from the much longer Treatment B, small stones group vector.
 
As a result, the combined Treatment A vector has a lower slope than the combined Treatment B vector, making it look less effective overall. This is Simpson's Paradox in visual form. 
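Here's that vector arithmetic as a minimal sketch, using the patient counts from the example above (and, again, the commonly reported success counts): within each stone size Treatment A has the steeper slope, but each treatment's summed vector is dominated by its longer component, and Treatment B ends up with the steeper overall slope.

```python
import numpy as np

# Each subgroup as a vector (T_g, S_g): trials on the x axis, successes on the y axis.
a_large = np.array([263, 192])
a_small = np.array([87, 81])
b_large = np.array([80, 55])
b_small = np.array([270, 234])

def slope(v):
    return v[1] / v[0]

print(slope(a_large), slope(b_large))  # ~0.73 vs ~0.69 -- A wins for large stones
print(slope(a_small), slope(b_small))  # ~0.93 vs ~0.87 -- A wins for small stones

# Summing the subgroup vectors gives the aggregated vector for each treatment.
a_total = a_large + a_small   # dominated by the long a_large vector (low slope)
b_total = b_large + b_small   # dominated by the long b_small vector (high slope)
print(slope(a_total), slope(b_total))  # ~0.78 vs ~0.83 -- B "wins" overall
```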

Simpson's Paradox reversals don't occur often in nature, though there are a few examples (like this one). But subtler forms of statistical confounding definitely do occur, all the time, in settings where they affect the conclusions of observational studies.
