Nothing to see here? Non-inferiority approaches to parallel trends and other model assumptions
Abstract: Many causal models make assumptions of "no difference" or "no effect." For example, difference-in-differences (DID) assumes that there is no trend difference between treatment and comparison groups' untreated potential outcomes ("parallel trends"). Tests of these assumptions typically assume a null hypothesis that there is no violation. When researchers fail to reject the null, they consider the assumption to hold. We argue this approach is incorrect and frequently misleading. These tests reverse the roles of Type I and Type II error and have a high probability of missing assumption violations. Even when power is high, they may detect statistically significant violations too small to be of practical importance. We present test reformulations in a non-inferiority framework that rule out violations of model assumptions that exceed some threshold. We then focus on the parallel trends assumption, for which we propose a "one step up" method: 1) reporting treatment effect estimates from a model with a more complex trend difference than is believed to be the case and 2) testing that the estimated treatment effect falls within a specified distance of the treatment effect from the simpler model. We show that this approach reduces bias while also accounting for power, controlling mean-squared error. Our base model also aligns power to detect a treatment effect with power to rule out meaningful violations of parallel trends. We apply our approach to four data sets used to analyze the Affordable Care Act's dependent coverage mandate and demonstrate that coverage gains may have been smaller than previously estimated.
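The "one step up" method described in the abstract can be illustrated with a minimal simulation sketch. The data-generating process, tolerance threshold, and model specifications below are all hypothetical choices for illustration, not the paper's actual models or applications: we fit a standard two-way DID and a richer model that adds a group-specific linear trend, then compare the two treatment effect estimates against a chosen closeness threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 2 groups (control g=0, treated g=1), periods 0..5,
# treatment switching on at t >= 3. The DGP includes a small differential
# trend (0.05 per period) that violates parallel trends; the true
# treatment effect is 1.0.
n_units, T = 200, 6
rows = []
for g in np.repeat([0, 1], n_units // 2):
    for t in range(T):
        y = 0.5 * t + 0.05 * t * g + 1.0 * g * (t >= 3) + rng.normal(0, 1)
        rows.append((g, t, y))
data = np.array(rows)
g_, t_, y_ = data[:, 0], data[:, 1], data[:, 2]
post = (t_ >= 3).astype(float)
treat = g_ * post  # DID interaction term

def ols(X, y):
    """Least-squares fit; returns coefficients and classical standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

# Simpler model: standard two-way DID (intercept, group, post, interaction).
X_simple = np.column_stack([np.ones_like(y_), g_, post, treat])
b_s, se_s = ols(X_simple, y_)

# "One step up": add a common and a group-specific linear time trend.
X_trend = np.column_stack([np.ones_like(y_), g_, post, treat, t_, g_ * t_])
b_t, se_t = ols(X_trend, y_)

tau_simple, tau_trend = b_s[3], b_t[3]
delta = 0.5  # hypothetical tolerance for "close enough" estimates

# Crude version of the comparison: is the trend-adjusted estimate within
# delta of the simple DID estimate? (A full non-inferiority test would
# also account for the sampling uncertainty of the difference.)
diff = abs(tau_trend - tau_simple)
print(f"simple DID estimate:     {tau_simple:.3f} (se {se_s[3]:.3f})")
print(f"trend-adjusted estimate: {tau_trend:.3f} (se {se_t[3]:.3f})")
print(f"|difference| = {diff:.3f}, tolerance delta = {delta}")
```

In this sketch the simple DID estimate absorbs the differential trend as spurious treatment effect, while the trend-adjusted model isolates it, mirroring the abstract's point that the comparison should be framed as ruling out meaningful divergence rather than failing to detect it.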