
Honest data-adaptive inference for the average treatment effect under model misspecification using penalised bias-reduced double-robust estimation

Published 12 Aug 2017 in stat.ME (arXiv:1708.03787v1)

Abstract: The presence of confounding by high-dimensional variables complicates estimation of the average effect of a point treatment. On the one hand, it necessitates the use of variable selection strategies or more general data-adaptive high-dimensional statistical methods. On the other hand, the use of such techniques tends to result in biased estimators with non-standard asymptotic behaviour. Double-robust estimators are vital for offering a resolution because they possess a so-called small bias property (Newey et al., 2004). This means that their bias vanishes faster than the bias in the nuisance parameter estimators when the relevant smoothing parameter goes to zero, making their performance less sensitive to smoothing (Chernozhukov et al., 2016). This property has been exploited to achieve valid (uniform) inference for the average causal effect when data-adaptive estimators of the propensity score and conditional outcome mean both converge to their respective truths at sufficiently fast rates (e.g., van der Laan, 2014; Farrell, 2015; Belloni et al., 2016). In this article, we extend this work in order to retain valid (uniform) inference when one of these estimators does not converge to the truth, regardless of which one. This is done by generalising prior work for low-dimensional settings by Vermeulen and Vansteelandt (2015) to incorporate regularisation. The proposed penalised bias-reduced double-robust estimation strategy exhibits promising performance in extensive simulation studies and a data analysis, relative to competing proposals.
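To make the double-robustness property concrete, the following is a minimal simulation sketch of the standard augmented inverse-probability-weighted (AIPW) double-robust estimator of the average treatment effect, not the paper's penalised bias-reduced procedure. The simulated data, model forms, and variable names are all illustrative assumptions. It shows the property the abstract relies on: with a correctly specified outcome regression, the estimator remains close to the true effect even when the propensity score model is deliberately misspecified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: two confounders, a logistic treatment assignment,
# and a linear outcome with a true average treatment effect of 2.0.
X = rng.normal(size=(n, 2))
e_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.25 * X[:, 1])))
A = rng.binomial(1, e_true)
Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# Outcome regression fitted by OLS on (1, A, X) -- correctly specified here.
Z = np.column_stack([np.ones(n), A, X])
beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
m1 = np.column_stack([np.ones(n), np.ones(n), X]) @ beta   # E[Y | A=1, X]
m0 = np.column_stack([np.ones(n), np.zeros(n), X]) @ beta  # E[Y | A=0, X]

# Deliberately misspecified propensity score: a constant that ignores X.
e_hat = np.full(n, A.mean())

# AIPW estimator: outcome-model prediction plus inverse-probability-weighted
# residual corrections. Consistent if either nuisance model is correct.
psi = m1 - m0 + A * (Y - m1) / e_hat - (1 - A) * (Y - m0) / (1 - e_hat)
ate_dr = psi.mean()
print(f"double-robust ATE estimate: {ate_dr:.3f} (truth: 2.0)")
```

Swapping the roles (correct propensity model, misspecified outcome regression) gives the same protection; the paper's contribution is retaining valid inference, not just consistency, in that single-misspecification regime when the nuisance models are estimated with regularisation.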
