Efficiency and Robustness of Rosenbaum's Regression (Un)-Adjusted Rank-based Estimator in Randomized Experiments
Abstract: Mean-based estimators of the causal effect in a completely randomized experiment may behave poorly if the potential outcomes have heavy tails or contain outliers. We study an alternative estimator by Rosenbaum that estimates the constant additive treatment effect by inverting a randomization test based on ranks. By investigating the breakdown point and asymptotic relative efficiency of this rank-based estimator, we show that it is provably robust against outliers and heavy-tailed potential outcomes, and has asymptotic variance at most 1.16 times that of the difference-in-means estimator (and much smaller when the potential outcomes are not light-tailed). We further derive a consistent estimator of the asymptotic standard error for Rosenbaum's estimator, which yields a readily computable confidence interval for the treatment effect. We also study a regression-adjusted version of Rosenbaum's estimator that incorporates additional covariate information into randomization inference, and we prove a gain in efficiency from this regression adjustment under a linear regression model. We illustrate through synthetic and real data that, unlike the mean-based estimators, these rank-based estimators (both unadjusted and regression-adjusted) are efficient and robust against heavy-tailed distributions, contamination, and model misspecification. Finally, we initiate the study of Rosenbaum's estimator when the constant treatment effect assumption may be violated.
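To make the construction concrete: under a constant additive effect, inverting a Wilcoxon rank-sum randomization test yields a point estimate equal to the classical Hodges–Lehmann two-sample estimator, the median of all treated-minus-control pairwise differences. The sketch below illustrates this reduced form only (the function name is illustrative, and the full method in the paper also covers standard errors, confidence intervals, and regression adjustment, which are not shown here):

```python
import numpy as np

def rank_inversion_estimate(y_treat, y_control):
    """Point estimate of a constant additive treatment effect obtained by
    inverting the Wilcoxon rank-sum test: the Hodges-Lehmann estimator,
    i.e. the median of all pairwise differences y_treat[i] - y_control[j].

    Robust to outliers: a single corrupted outcome shifts only a small
    fraction of the pairwise differences, so the median moves little,
    unlike the difference-in-means estimator.
    """
    y_treat = np.asarray(y_treat, dtype=float)
    y_control = np.asarray(y_control, dtype=float)
    # n_treat x n_control matrix of all pairwise differences
    diffs = np.subtract.outer(y_treat, y_control)
    return float(np.median(diffs))

# Example: an extreme outlier in the treated arm barely moves the
# rank-based estimate, while it drags the difference in means far away.
y1 = [3.1, 5.0, 4.2, 6.3, 1000.0]  # last value is a gross outlier
y0 = [1.0, 2.2, 0.9, 3.0, 2.5]
rank_est = rank_inversion_estimate(y1, y0)
mean_est = float(np.mean(y1) - np.mean(y0))
print(rank_est, mean_est)  # rank estimate stays near the bulk of the data
```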