Two-Stage Residual Inclusion: An Overview

Oftentimes, researchers want to measure the effect of an intervention in the real world, but doing so in practice is often difficult.  For instance, consider measuring health outcomes among individuals who visit doctors compared to those who don’t.  Inevitably, individuals who visit doctors will have worse outcomes.  Why?  Are doctors killing patients?  Clearly not: this is a selection effect whereby patients who visit doctors were sicker to begin with, and doctors almost certainly improve patient health.

An alternative approach is to use randomized controlled trials (RCTs).  RCTs are the gold standard for determining an intervention’s causal effect, but they have their limitations: restrictive inclusion criteria, high expense, and short follow-up periods.

Other approaches, such as instrumental variables and two-stage least squares (2SLS), have been used to identify causal effects in real-world data.  These approaches rely on an “instrument” that is correlated with the intervention of interest but is uncorrelated with the outcome of interest except through the intervention.  One problem with these approaches is that they assume a linear relationship between the variables of interest and outcomes.

The use of two-stage residual inclusion

An alternative approach is to use two-stage residual inclusion (2SRI).  A paper by Terza et al. (2018) outlines how to implement this approach, which can incorporate non-linear relationships.  Let us say we want to estimate the following relationship:

Y = exp(Xeβe + Xoβo + Xuβu) + ε

where Y is the outcome of interest, Xe is the endogenous variable, Xo is the exogenous, observed variable, and Xu is the unobserved confounding factor. The β terms are coefficient vectors and ε is the error term.

The process for estimating this relationship is straightforward.

First, one can use non-linear least squares to estimate the coefficients α in the estimating equation:
Xe = exp(Wα) + Xu,

where W = [Xo W+] is a vector combining the observed exogenous variables and the instrument W+. One can then calculate the first-stage residuals as X^u = Xe – exp(Wα^), where the caret denotes an estimated or fitted value.
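To make the first stage concrete, here is a minimal Python sketch on simulated data, using `scipy.optimize.curve_fit` for the nonlinear least squares step. This is not the paper’s implementation (Terza et al. provide Stata code); all variable names and parameter values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = 2000

# Simulated data (illustrative values only): one instrument W+, one observed
# exogenous variable Xo, and an unobserved confounder Xu.
w_plus = rng.normal(size=n)          # instrument W+
x_o = rng.normal(size=n)             # observed exogenous variable Xo
x_u = rng.normal(scale=0.5, size=n)  # unobserved confounder Xu

# First-stage data-generating process: Xe = exp(W alpha) + Xu
W = np.column_stack([np.ones(n), x_o, w_plus])
alpha_true = np.array([0.2, 0.3, 0.5])
x_e = np.exp(W @ alpha_true) + x_u

# Stage 1: nonlinear least squares fit of Xe on exp(W alpha)
def stage1(W, *alpha):
    return np.exp(W @ np.asarray(alpha))

alpha_hat, _ = curve_fit(stage1, W, x_e, p0=np.zeros(W.shape[1]))

# The residuals proxy the unobserved confounder: X^u = Xe - exp(W alpha^)
x_u_hat = x_e - np.exp(W @ alpha_hat)
```

In this simulation the residual x_u_hat tracks the unobserved x_u closely, which is exactly what the second stage exploits.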

In the second stage, one simply substitutes X^u for the unobserved Xu into the original equation. In short, in the second stage one would estimate:
Y = exp(Xeβe + Xoβo + X^uβu) + ε
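The full two-stage procedure can be sketched end to end in Python on simulated data. Again, this is an illustrative sketch rather than the paper’s own code, and every name and coefficient value here is made up for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = 2000

# Simulated data (illustrative values only)
w_plus = rng.normal(size=n)          # instrument W+
x_o = rng.normal(size=n)             # observed exogenous variable Xo
x_u = rng.normal(scale=0.5, size=n)  # unobserved confounder Xu

W = np.column_stack([np.ones(n), x_o, w_plus])
x_e = np.exp(W @ [0.2, 0.3, 0.5]) + x_u   # endogenous Xe

beta_true = np.array([0.1, 0.2, 0.3, 0.5])  # intercept, beta_e, beta_o, beta_u
y = (np.exp(np.column_stack([np.ones(n), x_e, x_o, x_u]) @ beta_true)
     + rng.normal(scale=0.1, size=n))

def exp_model(X, *b):
    return np.exp(X @ np.asarray(b))

# Stage 1: NLS of Xe on W, then form the residual X^u
alpha_hat, _ = curve_fit(exp_model, W, x_e, p0=np.zeros(W.shape[1]))
x_u_hat = x_e - np.exp(W @ alpha_hat)

# Stage 2: substitute X^u for the unobserved Xu in the outcome equation
X2 = np.column_stack([np.ones(n), x_e, x_o, x_u_hat])
beta_hat, _ = curve_fit(exp_model, X2, y, p0=np.zeros(X2.shape[1]))
```

Because x_u_hat stands in for the unobserved confounder, the second-stage coefficient on x_e recovers the true effect despite the endogeneity.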

While this approach will give you consistent estimates of the true coefficient values, the standard errors will be incorrect because they need to take into account that X^u is an estimated rather than a known quantity. To solve this problem, the authors present a number of solutions:

There are three possible approaches to calculation of the corrected standard errors: (1) bootstrapping; (2) the resampling method proposed by Krinsky and Robb (1986, 1990)…and (3) [asymptotically correct standard errors]…derived from standard asymptotic theory.
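Of these, the bootstrap is the simplest to sketch: resample observations with replacement and rerun both stages on each resample, so that the first-stage estimation error propagates into the spread of the second-stage estimates. A minimal illustration on simulated data (again, all names and values invented; a real application would use many more replications than the 50 here):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
n = 800

# Simulated data, same illustrative setup as before
w_plus = rng.normal(size=n)
x_o = rng.normal(size=n)
x_u = rng.normal(scale=0.5, size=n)
W = np.column_stack([np.ones(n), x_o, w_plus])
x_e = np.exp(W @ [0.2, 0.3, 0.5]) + x_u
beta_true = np.array([0.1, 0.2, 0.3, 0.5])
y = (np.exp(np.column_stack([np.ones(n), x_e, x_o, x_u]) @ beta_true)
     + rng.normal(scale=0.1, size=n))

def exp_model(X, *b):
    return np.exp(X @ np.asarray(b))

def two_stage(idx):
    """Run both 2SRI stages on the observations indexed by idx."""
    Wb, xeb, xob, yb = W[idx], x_e[idx], x_o[idx], y[idx]
    a_hat, _ = curve_fit(exp_model, Wb, xeb, p0=np.zeros(Wb.shape[1]))
    xu_hat = xeb - np.exp(Wb @ a_hat)
    X2 = np.column_stack([np.ones(len(idx)), xeb, xob, xu_hat])
    b_hat, _ = curve_fit(exp_model, X2, yb, p0=np.zeros(X2.shape[1]))
    return b_hat

point = two_stage(np.arange(n))

# Nonparametric bootstrap: resample rows, rerun BOTH stages each time
B = 50
draws = np.array([two_stage(rng.integers(0, n, size=n)) for _ in range(B)])
boot_se = draws.std(axis=0, ddof=1)
```

The key point is that each bootstrap replicate repeats stage one as well as stage two; bootstrapping only the second stage would reproduce the same understated standard errors the correction is meant to fix.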

Note that in the example provided in this post, a non-linear least squares regression was appropriate, but in other cases a maximum likelihood or GLM estimation strategy would be needed.

If you are interested in applying this approach, do read the whole article, as the appendices also contain useful Stata code for implementation purposes.

Source: