Structured Bottlenecks for Missing Data
- The paper introduces a structured bottleneck framework that compresses covariate blocks and preserves relevant information for robust treatment effect estimation.
- The proposed method employs block-specific information bottleneck objectives and differentiable encoders to effectively handle systematic missingness at test time.
- Empirical evaluations demonstrate state-of-the-art causal inference performance and estimation consistency under various missing data regimes.
Structured bottlenecks for missing data denote formal approaches that leverage explicit information-theoretic or statistical structures to enable robust estimation, causal inference, and model selection in the presence of patterned or block-wise missingness. Such bottlenecks are designed to compress observed covariates into discrete or low-dimensional codes while maximally retaining relevant information for downstream tasks, and to enable principled transfer of learned representations to prediction or inference when covariate blocks are systematically absent—especially during test or deployment. Key methodologies include deep information bottleneck objectives partitioned by block-wise missingness, discrete clustering, multi-source imputation, and optimal integration of block-specific estimating equations.
1. Structured Information Bottleneck Objectives
Structured bottlenecks arise from information bottleneck (IB) principles formalizing selective compression. In the Cause-Effect Deep Information Bottleneck (CEIB) approach (Parbhoo et al., 2018), the covariate vector $X$ is explicitly partitioned into two blocks: $X_1$ (available at train time, systematically missing at test time) and $X_2$ (always available). Discrete latent codes $Z_1$ and $Z_2$ are learned for $X_1$ and $X_2$, respectively; these are concatenated as $Z = (Z_1, Z_2)$. The structured IB objective optimally compresses each block while preserving information about the target $Y$:
$$\min \; I(X_1; Z_1) + I(X_2; Z_2) - \lambda\, I(Z; Y),$$
with block-specific compression terms bounded as
$$I(X_1; Z_1) \le \mathbb{E}_{p(x_1)}\big[ D_{\mathrm{KL}}\big( q(z_1 \mid x_1) \,\|\, p(z_1) \big) \big]$$
and the analogous bound for $I(X_2; Z_2)$. The relevance term $I(Z; Y)$ is lower-bounded by the expected decoder log-likelihood over sampled cluster assignments. This split structure ensures that information discard and retention are precisely controlled per observed covariate block.
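The objective above can be sketched numerically. The following is a minimal sketch, assuming categorical codes and abstracting the encoder and decoder networks into their output distributions; the helper names (`categorical_kl`, `structured_ib_loss`) are hypothetical, not from the CEIB paper.

```python
import numpy as np

def categorical_kl(q, p):
    """KL(q || p) between two categorical distributions (probability vectors)."""
    q = np.clip(q, 1e-12, 1.0)
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / p)))

def structured_ib_loss(q_z1, q_z2, prior_z1, prior_z2, decoder_loglik, lam=1.0):
    """Block-wise IB loss: one compression (KL) term per covariate block,
    minus lam times the relevance term (expected decoder log-likelihood)."""
    compression = categorical_kl(q_z1, prior_z1) + categorical_kl(q_z2, prior_z2)
    return compression - lam * decoder_loglik
```

Because each block contributes its own KL term, the two compression pressures can be monitored or weighted separately, which is what makes the bottleneck "structured" rather than monolithic.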
2. Encoder, Decoder, and Cluster Design
The CEIB method implements structured encoders for each block, $q(z_1 \mid x_1)$ and $q(z_2 \mid x_2)$, each mapping inputs to categorical logits, followed by sampling via the Gumbel-softmax reparameterization to achieve differentiability. The bottleneck $Z = (Z_1, Z_2)$ thus indexes a discrete grid of equivalence-class clusters. Decoders estimate $p(t \mid z)$ for treatment assignment (Bernoulli output) and $p(y \mid z, t)$ for outcomes (Gaussian with $t$-dependent mean heads). This structure ensures interpretability: each cluster encodes a distinct treatment-outcome profile under covariate compression. By segmenting cluster codes according to block-wise missingness patterns, structured bottlenecks facilitate prediction with incomplete test-time data (Parbhoo et al., 2018).
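The Gumbel-softmax step can be illustrated in a few lines. This is a generic sketch of the standard relaxation, not the CEIB implementation; the temperature `tau` and helper name are illustrative.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Relaxed categorical sample: perturb logits with Gumbel noise, then
    apply a temperature-scaled softmax over the K cluster codes."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0 - 1e-9, size=np.shape(logits))))
    y = (np.asarray(logits, dtype=float) + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()
```

As `tau` decreases, the samples approach one-hot cluster assignments while remaining differentiable with respect to the logits, which is what lets gradients flow through the discrete bottleneck during training.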
3. Handling Systematic Block-wise Missingness at Test Time
CEIB explicitly supports transfer of the learned cluster structure when critical covariate blocks are absent at deployment. When $X_1$ is missing, the encoder $q(z_2 \mid x_2)$ is used to assign the test case to its $Z_2$ cluster, while $Z_1$ is imputed via several strategies:
- Prior plug-in: Assign $z_1$ as the most probable code under the learned prior $p(z_1)$.
- Cluster averaging: Average predicted outcomes or cluster effects across possible $z_1$ values, weighted by $p(z_1)$.
- Mode of joint clusters: Select the $z_1$ maximizing joint cluster occupancy with the observed $z_2$ in training.
This maps each incomplete case to its most probable or averaged equivalence class, from which treatment effect estimates are read off. Aggregation over $Z_1$ allows recovery of a purely $Z_2$-dependent cluster effect. This structured test-time transfer yields reliable treatment effect estimation under systematically missing covariates, demonstrated to achieve state-of-the-art performance on causal inference benchmarks and a sepsis application (Parbhoo et al., 2018).
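The first two imputation strategies above can be sketched as table lookups over per-cluster effect estimates. The table layout and function names are hypothetical, assuming a $(K_1 \times K_2)$ grid of effects indexed by $(z_1, z_2)$.

```python
import numpy as np

def cluster_average(effect_table, prior_z1, z2):
    """Cluster averaging: with X1 missing, average the per-cluster effect
    estimates over z1, weighted by the learned prior p(z1)."""
    return float(prior_z1 @ effect_table[:, z2])

def prior_plugin(effect_table, prior_z1, z2):
    """Prior plug-in: read off the effect at the single most probable z1."""
    return float(effect_table[int(np.argmax(prior_z1)), z2])
```

Averaging marginalizes the missing block out entirely, which is what recovers a purely $z_2$-dependent effect; the plug-in variant instead commits to one equivalence class and keeps the full grid interpretation.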
4. Multi-source Block-wise Imputation via Estimating Equations
Alternatively, multi-source block-wise imputation (MBI) addresses missingness in model selection by generating multiple conditional imputations from both complete and partially observed blocks, forming the basis for efficient estimation (Xue et al., 2019). Here, the data are partitioned into disjoint groups according to missing pattern; for each missing-pattern group $r$ and each imputation source, conditional means of the missing block variables are fit from samples in which those blocks are observed. Completed covariate vectors $\hat{x}_i$ are constructed per imputation and enter group-specific estimating equations
$$g_r(\beta) = \frac{1}{n_r} \sum_{i \in \mathcal{A}_r} \hat{x}_i \big( y_i - \hat{x}_i^{\top} \beta \big).$$
Stacking these yields the full system $g_n(\beta) = \big( g_1(\beta)^{\top}, \ldots, g_R(\beta)^{\top} \big)^{\top}$, which is integrated via a penalized generalized method-of-moments (GMM) objective for joint estimation and variable selection:
$$\hat{\beta} = \arg\min_{\beta} \; g_n(\beta)^{\top} \widehat{W}\, g_n(\beta) + \sum_{j} p_{\lambda}(|\beta_j|),$$
where $p_{\lambda}$ denotes a nonconcave penalty, typically SCAD, and $\widehat{W}$ estimates the inverse covariance of the stacked equations. This structured imputation utilizes all available block sources and achieves asymptotic efficiency gains over single complete-case imputation (Xue et al., 2019).
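The stacked objective can be made concrete for a linear model. This is a sketch under simplifying assumptions (linear-model estimating equations, pre-imputed covariates, a fixed weight matrix); the helper names are illustrative, and the SCAD formula follows the standard Fan-Li parameterization.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty (standard Fan-Li form), evaluated coordinate-wise and summed."""
    b = np.abs(np.asarray(beta, dtype=float))
    vals = np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )
    return float(vals.sum())

def penalized_gmm_objective(groups, beta, W, lam):
    """Stack per-group linear-model estimating equations
    g_r = X_r^T (y_r - X_r beta) / n_r across missing-pattern groups,
    then return the penalized GMM quadratic form g^T W g + SCAD(beta)."""
    g = np.concatenate([X.T @ (y - X @ beta) / len(y) for X, y in groups])
    return float(g @ W @ g) + scad_penalty(beta, lam)
```

Each missing-pattern group contributes its own moment conditions, so no observation is discarded for being incomplete; the weight matrix then downweights noisier groups, which is the source of the efficiency gain over single-source imputation.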
5. Optimization and Hyperparameter Selection
Training structured bottleneck models involves differentiable Monte Carlo estimation of the decoder log-likelihood (for the relevance term) and closed-form computation of block-specific KL terms (for compression). Both the CEIB and MBI frameworks employ stochastic gradient descent, often Adam, on mini-batches of block-partitioned data (Parbhoo et al., 2018, Xue et al., 2019). For penalized GMM objectives, conjugate-gradient minimization is applied, with principal-component extraction to stabilize inversion of the sample covariance matrix. The compression–relevance trade-off is governed by the IB Lagrange multiplier, which is tuned via cross-validation on prediction or causal-effect error (e.g., ACE error for CEIB, a BIC-type criterion for MBI).
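The trade-off tuning described above amounts to a grid search over the multiplier against a held-out criterion. A minimal sketch, with a hypothetical `fit_and_score` callback standing in for a full train-and-validate run:

```python
def select_tradeoff(candidates, fit_and_score):
    """Grid-search the compression-relevance weight: fit_and_score(lam)
    should return held-out prediction or causal-effect error for that
    weight; return the candidate minimizing it."""
    scores = {lam: fit_and_score(lam) for lam in candidates}
    return min(scores, key=scores.get)
```

In practice `fit_and_score` would retrain the bottleneck (or refit the penalized GMM) at each candidate value and report the cross-validated ACE or BIC-type error.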
6. Practical Impact and Empirical Evaluation
Structured bottlenecks demonstrate practical effectiveness across simulation regimes and real-world applications. CEIB attains reliable and interpretable treatment effect estimates in incomplete covariate settings without sacrificing benchmark performance relative to competing approaches (Parbhoo et al., 2018). MBI achieves estimation and model-selection consistency, selection sparsity, and asymptotic normality under both fixed- and high-dimensional regimes (Xue et al., 2019). In biomedical applications, MBI selects biomarkers corroborated by external studies, with test-RMSE reductions of 20–25% over single-imputation and competing methods, and is robust to Missing-At-Random, Missing-Completely-At-Random, and informative missingness (Xue et al., 2019). This suggests that exploiting missingness structure via bottlenecks or multi-source imputation provides substantial efficiency and accuracy gains.
7. Theoretical Guarantees and Limitations
Both the CEIB and MBI frameworks offer theoretical guarantees on consistency, efficiency, and recovery of sparsity. In fixed-dimensional settings, root-$n$ estimation error is achieved, with improved asymptotic covariance bounds compared to single imputation. For diverging dimensions, rate conditions ensure local minimizer existence and selection sparsity. A plausible implication is that strict block-wise partitioning and exploitation of all informative subsources are key to optimal statistical power in high-missingness regimes (Xue et al., 2019). Limitations include the need for suitable regularity in the imputation models and identifiable covariance structures, as well as trade-offs between cluster granularity and interpretability depending on the compression parameterization.
Structured bottlenecks constitute a principled, theoretically supported approach to systematically missing data, enabling robust causal inference and model selection through explicit exploitation and transfer of data structure.