Debiased Machine Learning when Nuisance Parameters Appear in Indicator Functions
Abstract: This paper studies debiased machine learning when nuisance parameters appear inside indicator functions. An important example is the maximized average welfare gain under optimal treatment assignment rules. For asymptotically valid inference on a parameter of interest, the current literature on debiased machine learning relies on Gateaux differentiability of the functions inside the moment conditions, which fails when nuisance parameters enter indicator functions. We propose smoothing the indicator functions and develop an asymptotic distribution theory for this class of models. The asymptotic behavior of the proposed estimator exhibits a trade-off between bias and variance induced by the smoothing. We study how the parameter controlling the degree of smoothing can be chosen to minimize an upper bound on the asymptotic mean squared error. A Monte Carlo simulation supports the asymptotic distribution theory, and an empirical example illustrates the implementation of the method.
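The core idea of the abstract can be sketched numerically: replace a discontinuous indicator 1{v >= 0}, whose argument depends on an estimated nuisance parameter, with a smooth surrogate such as a logistic CDF evaluated at v / h, where the bandwidth h controls the degree of smoothing. The snippet below is a minimal illustration, not the paper's estimator; the estimated treatment-effect function `tau_hat` and the bandwidth value are hypothetical choices for the example.

```python
import numpy as np

def smoothed_indicator(v, h):
    """Smooth surrogate for the indicator 1{v >= 0}.

    Uses the logistic CDF evaluated at v / h; as the bandwidth
    h -> 0 the surrogate converges to the indicator pointwise
    (for v != 0), at the cost of a steeper, higher-variance object.
    """
    return 1.0 / (1.0 + np.exp(-v / h))

# Illustrative welfare-type functional: average gain from treating
# units with a positive estimated effect, 1{tau_hat(x) > 0}.
rng = np.random.default_rng(0)
x = rng.normal(size=1_000)
tau_hat = 0.5 * x  # hypothetical estimated conditional treatment effect

welfare_sharp = np.mean(tau_hat * (tau_hat > 0))
welfare_smooth = np.mean(tau_hat * smoothed_indicator(tau_hat, h=0.05))
```

For small h the smoothed functional is close to the sharp one, while remaining differentiable in the nuisance estimate, which is what restores the Gateaux differentiability that the debiasing machinery requires.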