Proximal Off-Policy Evaluation
- Proximal off-policy evaluation is a set of techniques that estimate the value of target policies in reinforcement learning by leveraging proxy variables to address unobserved confounding.
- It employs recursively defined bridge functions and nonparametric identification methods to solve linear integral equations and mitigate bias in partially observed systems.
- Recursive estimation algorithms using NPIV regression and regularization yield finite-sample error bounds and achieve semiparametric efficiency under practical constraints.
The proximal off-policy evaluation (OPE) approach constitutes a class of techniques for policy value estimation in reinforcement learning (RL) when data are collected under a behavior policy potentially confounded by unobserved state variables. Traditional off-policy evaluation methods assume full observability, yet real-world domains, particularly healthcare, education, and decision support, frequently present unmeasured confounding and partially observable Markov decision processes (POMDPs). Proximal OPE leverages proxy variables and nonparametric identification theory from proximal causal inference to provide robust, theoretically justified estimation procedures in POMDPs and MDPs with unobserved confounders.
1. Formal Problem Setup and Motivation
Proximal OPE seeks to estimate the value $v^{\pi_e}$ of a target policy $\pi_e$ using trajectory data generated from a possibly confounded behavior policy $\pi_b$, where the underlying environment is a POMDP or MDP with latent confounders. At each time $t$, the process is described by:
- Latent state $S_t$
- Observed proxy or partial state $O_t$
- Action $A_t$
- Reward $R_t$, for $t = 1, \dots, T$ in the episodic POMDP and, in general, a function of $(S_t, A_t)$
Data are sampled as tuples $(O_t, A_t, R_t)$ or, more generally, as full trajectories, without observations of $S_t$. The goal is to evaluate
$$v^{\pi_e} = \mathbb{E}^{\pi_e}\Big[\sum_{t=1}^{T} R_t\Big],$$
where actions are replaced by those from $\pi_e$ in the evaluation. Standard OPE approaches are biased in this setting due to confounding by $S_t$. The proximal approach introduces time-dependent proxy variables, usually an action-inducing proxy $Z_t$ and a reward-inducing proxy $W_t$, that are constructed to "break" confounding through specific conditional independence relations (Miao et al., 2022, Bennett et al., 2021, Bennett et al., 2020).
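To make this data-generating setup concrete, here is a minimal, hypothetical simulator of a confounded tabular POMDP: the behavior policy peeks at the latent state $S_t$, while the logged data contain only the proxies $Z_t$, $W_t$, the action, and the reward. All names, state spaces, and distributions below are illustrative choices, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_confounded_pomdp(n_episodes, horizon=3):
    """Generate episodes from a toy two-state POMDP with latent confounding.

    The behavior policy depends on the latent state S_t (confounding);
    only the proxies Z_t, W_t, the action, and the reward are logged.
    """
    episodes = []
    for _ in range(n_episodes):
        s = rng.integers(2)                        # latent state, never logged
        traj = []
        for t in range(horizon):
            z = (s + rng.integers(2)) % 2          # noisy action-side proxy of S_t
            w = (s + rng.integers(2)) % 2          # noisy reward-side proxy of S_t
            a = s if rng.random() < 0.8 else 1 - s # behavior policy uses S_t
            r = float(a == s)                      # reward also driven by S_t
            traj.append((z, w, a, r))              # S_t is deliberately dropped
            s = (s + a) % 2                        # latent Markov transition
        episodes.append(traj)
    return episodes

data = simulate_confounded_pomdp(1000)
```

Because both the action and the reward depend on the unlogged $S_t$, any estimator that treats the logged tuples as fully observed will be biased; this is exactly the setting the proximal machinery targets.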
2. Proximal Identification and Bridge Functions
The cornerstone of the proximal OPE framework is the identification of the policy value by a sequence of recursively defined bridge functions. In the episodic POMDP case, this involves the $Q$-bridge functions $b_Q^{(t)}$ and $V$-bridge functions $b_V^{(t)}$ satisfying
$$\mathbb{E}\big[b_Q^{(t)}(W_t, A_t) \mid S_t, A_t\big] = \mathbb{E}\big[R_t + b_V^{(t+1)}(W_{t+1}) \mid S_t, A_t\big]$$
and
$$b_V^{(t)}(W_t) = \sum_{a \in \mathcal{A}} \pi_e(a \mid W_t)\, b_Q^{(t)}(W_t, a),$$
where the target policy is taken to act on the observed proxies. The $Q$-bridge functions satisfy linear integral equations in observed variables and proxies,
$$\mathbb{E}\big[b_Q^{(t)}(W_t, A_t) - R_t - b_V^{(t+1)}(W_{t+1}) \mid Z_t, A_t\big] = 0,$$
with boundary $b_V^{(T+1)} \equiv 0$. The policy value is ultimately expressed as
$$v^{\pi_e} = \mathbb{E}\big[b_V^{(1)}(W_1)\big],$$
enabling evaluation solely from observed and proxy data (Miao et al., 2022). An analogous framework holds for infinite-horizon settings with stationary distribution ratios and for doubly robust bridge-based representations (Bennett et al., 2021, Bennett et al., 2020).
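In the discrete (tabular) case, the linear integral equation collapses to a finite linear system, which makes the identification step transparent: for each fixed action, the bridge values solve a matrix equation in the observed conditionals. The sketch below uses hypothetical numbers and assumes the population conditional probabilities are known.

```python
import numpy as np

def solve_bridge_tabular(P_WgZ, y_Z):
    """Solve the discrete bridge equation for a fixed action a:
    sum_w P(W = w | Z = z, A = a) * b_Q(w, a) = E[Y | Z = z, A = a].

    P_WgZ : (|Z|, |W|) conditional probability matrix (rows index z).
    y_Z   : (|Z|,) conditional means of the pseudo-outcome Y.
    In the discrete case, completeness is exactly invertibility of P_WgZ.
    """
    return np.linalg.solve(P_WgZ, y_Z)

# Hypothetical two-level proxies for one action: rows index Z, columns W.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
y = np.array([1.0, 0.5])
b_Q = solve_bridge_tabular(P, y)

# The fitted bridge reproduces the observed conditional moments exactly.
assert np.allclose(P @ b_Q, y)
```

Note that the bridge values need not lie in the range of the rewards (here one entry is negative); they are solutions of a linear system, not conditional expectations themselves.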
3. Key Assumptions and Identification Conditions
Proximal identification relies on several structural and statistical assumptions:
- Markovianity: Transitions of the latent process $(S_t, O_t)$ given actions are Markov.
- Proxy Conditional Independences: Observed proxies $Z_t$, $W_t$ are chosen such that, for each $t$,
$$W_t \perp (Z_t, A_t) \mid S_t, \qquad Z_t \perp (W_t, R_t) \mid (S_t, A_t).$$
- Completeness (Nonparametric IV Condition): For all $t$, if $\mathbb{E}[g(S_t) \mid Z_t, A_t] = 0$ almost surely then $g(S_t) = 0$ almost surely, and similarly for $W_t$.
- Support (Overlap): $\pi_b(a \mid S_t) > 0$ whenever $\pi_e(a \mid W_t) > 0$ (Miao et al., 2022, Bennett et al., 2021).
These conditions collectively ensure that the relevant integral equations admit unique solutions for the bridge functions and are empirically estimable.
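For finite proxy spaces, the completeness condition reduces to a rank condition on the conditional probability matrix of the proxies, which can be checked directly. A minimal sketch (the matrices are hypothetical):

```python
import numpy as np

def completeness_holds(P_WgZ):
    """Discrete analogue of the completeness condition for one action.

    P_WgZ : (|Z|, |W|) matrix with entries P(W = w | Z = z, A = a).
    Completeness requires that E[g(W) | Z = z, A = a] = 0 for all z
    forces g = 0, i.e., the matrix has full column rank (trivial
    null space as a linear map acting on g).
    """
    return np.linalg.matrix_rank(P_WgZ) == P_WgZ.shape[1]

P_good = np.array([[0.8, 0.2],
                   [0.3, 0.7]])   # informative proxies: full column rank
P_bad = np.array([[0.5, 0.5],
                  [0.5, 0.5]])    # proxies carry no signal: rank 1
```

When the rank condition fails, the bridge equations have multiple (or no) solutions and the policy value is not identified from the observed data alone.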
For infinite-horizon settings with i.i.d. confounders, additional stationarity and ergodicity conditions are imposed, and proxies are used to define observable moment equations for the stationary density ratios (Bennett et al., 2020).
4. Estimation Algorithms and Computational Aspects
Estimation proceeds recursively, typically employing a fitted-$Q$-evaluation-type approach in the episodic POMDP case:
- Backward Recursion: For $t = T, T-1, \dots, 1$, estimate $\widehat{b}_Q^{(t)}$ via a nonparametric instrumental variable (NPIV) regression by solving empirical analogues of the bridge equations, e.g.,
$$\widehat{b}_Q^{(t)} \in \arg\min_{b \in \mathcal{B}}\, \sup_{f \in \mathcal{F}}\, \frac{1}{n}\sum_{i=1}^{n} \big(Y_t^{(i)} - b(W_t^{(i)}, A_t^{(i)})\big)\, f(Z_t^{(i)}, A_t^{(i)}) - \lambda \|f\|_{\mathcal{F}}^2,$$
where $Y_t = R_t + \widehat{b}_V^{(t+1)}(W_{t+1})$ is the pseudo-outcome built from the previous step.
- Function Classes: $\mathcal{B}$ and $\mathcal{F}$ are chosen as RKHS balls (Gaussian/Sobolev kernels) for nonparametric flexibility; finite polynomial bases are suitable for small or discrete settings.
- Regularization: Tuning parameters are selected via cross-validation or Lepskii's method.
- Computational Complexity: The bottleneck is often the $O(n^3)$ kernel linear solve per recursion step in naive kernel methods, mitigable with random features or Nyström approximations.
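As a sketch of a single backward-recursion step: with finite-dimensional (here linear) feature maps, the NPIV solve reduces to regularized two-stage least squares. The demo below recovers a bridge coefficient in a toy confounded linear model; the variable names and the data-generating process are illustrative assumptions, not the cited papers' setup.

```python
import numpy as np

def npiv_2sls(phi, psi, y, reg=1e-3):
    """One NPIV step as regularized two-stage least squares.

    phi : (n, p) features of (W_t, A_t) spanning the bridge-function class.
    psi : (n, q) instrument features of (Z_t, A_t).
    y   : (n,) pseudo-outcomes Y_t = R_t + b_hat_{t+1}(W_{t+1}).
    Returns beta such that b_hat(w, a) = phi(w, a) @ beta.
    """
    # Stage 1: project bridge features onto the span of the instruments.
    gram = psi.T @ psi + reg * np.eye(psi.shape[1])
    phi_proj = psi @ np.linalg.solve(gram, psi.T @ phi)
    # Stage 2: regress the pseudo-outcome on the projected features.
    A = phi_proj.T @ phi + reg * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi_proj.T @ y)

# Toy confounded model: u confounds w and y; z is a valid instrument.
rng = np.random.default_rng(1)
n = 20_000
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # action-inducing proxy
w = z + u + 0.1 * rng.normal(size=n)         # reward-inducing proxy
y = 2.0 * w - u + 0.1 * rng.normal(size=n)   # true bridge slope is 2
beta = npiv_2sls(w[:, None], z[:, None], y)
```

Here a naive least-squares regression of `y` on `w` is pulled toward roughly 1.5 by the confounder, while the instrumented solve recovers a slope near 2; in the full algorithm this solve is repeated once per time step with the fitted bridge from step $t+1$ feeding the pseudo-outcome at step $t$.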
For infinite-horizon confounded MDPs, two-stage procedures are used: first, estimate stationary density ratios via proxy-based moment equations; then, employ optimal balancing via min–max estimation over weights and potential reward functions (Bennett et al., 2020).
A concrete pseudocode summary for the fitted-$Q$-evaluation NPIV procedure is provided in (Miao et al., 2022), and for the balancing-weight approach in (Bennett et al., 2020).
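To illustrate the balancing-weight idea (a simplified sketch, not the exact procedure of Bennett et al., 2020): when the inner maximization runs over an RKHS ball, the worst-case moment imbalance is a quadratic in the weights, so the outer minimization reduces to a single linear solve. A toy version with a linear kernel:

```python
import numpy as np

def balancing_weights(K, k_target, reg=1e-2):
    """Closed-form minimizer of the worst-case RKHS moment imbalance.

    For weights q, sup_{||f|| <= 1} (sum_i q_i f(x_i) - E_target[f])^2
    expands to q' K q - 2 q' k_target + const, so the regularized
    minimizer solves (K + reg * I) q = k_target.
    """
    return np.linalg.solve(K + reg * np.eye(K.shape[0]), k_target)

# Toy check with a linear kernel: reweight samples from N(0, 1) so the
# weighted moments match a hypothetical target with total mass 1, mean 1.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
F = np.column_stack([np.ones_like(x), x])   # linear-kernel feature map
K = F @ F.T
k_target = F @ np.array([1.0, 1.0])         # target moments: mass 1, mean 1
q = balancing_weights(K, k_target)
```

After the solve, the weighted sample moments (`q.sum()`, `q @ x`) match the target moments up to the regularization; in the full method the kernel encodes the stationary density-ratio moment equations and the weights are paired with candidate reward functions.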
5. Theoretical Guarantees and Error Analysis
Proximal OPE methods offer sharp, finite-sample statistical guarantees:
- Finite-Sample Error Bounds (Episodic POMDP): the policy value estimator admits bounds of the form
$$\big|\widehat{v}^{\pi_e} - v^{\pi_e}\big| \lesssim \tau_n\, \kappa_n\, n^{-c(\beta)},$$
where $\tau_n$ and $\kappa_n$ are measures of local and transition ill-posedness, the exponent $c(\beta)$ is determined by the RKHS eigenvalue-decay parameter $\beta$, and $n$ is the sample size (Miao et al., 2022).
- Semiparametric Efficiency (Bridge-based PRL): Proximal RL estimators attain the semiparametric efficiency bound and are $\sqrt{n}$-regular and asymptotically normal under mild nuisance estimation rates (Bennett et al., 2021).
- Consistency (Infinite Horizon): Under assumptions including ergodicity and completeness of the function classes, the bias and variance of the policy value estimator vanish as the sample size grows (Bennett et al., 2020).
These results highlight the robustness of the proximal approach to latent confounding and its optimality under appropriate regularity.
6. Practical Recommendations and Observed Performance
- Proxy Construction: The utility of the approach depends critically on the availability of action- and reward-inducing proxies ($Z_t$, $W_t$), chosen based on negative control principles or domain knowledge.
- Kernel and Function Class Selection: RKHS-based methods with adaptive kernel width and radius are preferred for complex, high-dimensional data; finite-dimensional bases suffice in low dimensions.
- Regularization and Cross-Validation: Regularization in both the NPIV and balancing-weight phases is essential for stability; cross-validation is commonly used for tuning.
- Computational Tradeoffs: Proximal OPE is computationally more demanding than standard OPE in fully observed MDPs, requiring solution of sequential NPIV or min–max optimization problems; random features and advanced kernel approximations are often leveraged for scalability.
- Empirical Findings: Simulation experiments and real-world case studies (e.g., sepsis management) demonstrate that proximal estimators substantially reduce bias relative to standard OPE, especially under moderate to severe unmeasured confounding. Performance is competitive with, and often superior to, existing methods under partial observability (Bennett et al., 2021, Bennett et al., 2020).
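On the scalability point above, random Fourier features are a standard way to sidestep cubic kernel solves: the Gaussian kernel is replaced by an explicit finite-dimensional feature map, so downstream NPIV or balancing solves scale in the feature dimension rather than the sample size. A brief sketch (bandwidth and sizes are arbitrary choices):

```python
import numpy as np

def random_fourier_features(X, n_features=200, bandwidth=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 h^2))
    by explicit features, so Gram-matrix solves of cost O(n^3) become
    regressions of cost roughly O(n * n_features^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check: the feature inner products approximate the exact kernel.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
Phi = random_fourier_features(X, n_features=2000)
K_approx = Phi @ Phi.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / 2.0)
```

Nyström approximations play the same role with a data-dependent low-rank factorization; either can be dropped into the NPIV recursion wherever a full Gram matrix would otherwise be formed.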
7. Connections and Distinctions within Off-Policy Evaluation
Proximal OPE diverges fundamentally from traditional importance sampling, direct modeling, and standard doubly robust methods by replacing direct observability or ignorability assumptions with proxy-based negative control and nonparametric completeness. It is distinct from "proximal policy optimization" (PPO) and related trust-region methods, which use the "proximal" term in the context of optimization regularization rather than identification or confounding adjustment. Proximal bridge-based approaches unify ideas from proximal causal inference, instrumental variable estimation, and kernel-based identification, and have inspired methodologies for both finite- and infinite-horizon RL in settings plagued by hidden confounders (Miao et al., 2022, Bennett et al., 2021, Bennett et al., 2020).