Power-Sampling Methods: Theory and Applications
- Power-sampling methods are techniques that select samples based on statistical power or energy efficiency, enhancing estimation and reconstruction even in rare event scenarios.
- They employ frameworks like mixture-importance sampling, adaptive control algorithms, and D-optimal experimental design to achieve low variance and computational efficiency.
- Applications include grid reliability, embedded control, compressive spectrum estimation, and ADC energy harvesting, delivering significant improvements in performance and cost reduction.
Power-sampling methods encompass a range of sampling frameworks, estimators, and optimization strategies in which the statistical power, energetic efficiency, or signal power considerations fundamentally guide the sample selection, acquisition rate, or reconstruction algorithm. These techniques have broad application in signal processing, power system reliability, embedded control, compressive sensing, and statistical experimental design. Below, key frameworks and methodological advances from the literature are synthesized, with an emphasis on the probabilistic, algorithmic, and physical principles underpinning power-sampling.
1. Rare Event Importance Sampling and Power System Reliability
A prominent application of power-sampling arises in the estimation of rare event probabilities in high-dimensional systems, such as reliability assessment in electric grids. The ALOE mixture-importance-sampling algorithm ("At-Least-One-rare-Event"), as formalized in Owen & Zhou (2019), offers an unbiased estimator for the probability of the union of rare events defined over a random variable (Owen et al., 2017).
Given event probabilities $P_j = \mathbb{P}(H_j)$ for rare events $H_1,\dots,H_J$ and union bound $\bar\mu = \sum_{j=1}^{J} P_j$, the mixture-sampling density is
$$q(x) \;=\; \sum_{j=1}^{J} \frac{P_j}{\bar\mu}\, p(x \mid H_j).$$
Each sample $x_i \sim q$ is weighted by $\bar\mu / S(x_i)$, where $S(x) = \sum_{j=1}^{J} \mathbf{1}\{x \in H_j\}$ is the number of active rare events at $x$, yielding the estimator $\hat\mu = \frac{\bar\mu}{n}\sum_{i=1}^{n} 1/S(x_i)$. Key properties include:
- Unbiasedness: $\mathbb{E}[\hat\mu] = \mu$ for the union probability $\mu = \mathbb{P}\big(\bigcup_j H_j\big)$.
- Sharp variance bound: $\operatorname{Var}(\hat\mu) \le \frac{(\bar\mu-\mu)(\mu-\bar\mu/J)}{n} \le \frac{\mu^2 (J-1)}{n}$.
- Relative error bounded by $\sqrt{(J-1)/n}$, uniformly in how rare the events are. In experiments, the method attains small empirical coefficients of variation with modest sample sizes on problems with thousands of constraints and event probabilities far too small for plain Monte Carlo.
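As an illustration, the ALOE estimator admits a short implementation for half-space events under a standard Gaussian. The sketch below is a minimal version assuming unit-norm directions $\omega_j$ (so $P_j = \Phi(-\tau_j)$) and uses inverse-CDF sampling for the truncated-normal component along $\omega_j$; the function and variable names are illustrative, not from the original paper's code.

```python
import numpy as np
from scipy import stats

def aloe_union_probability(omegas, taus, n_samples, rng=None):
    """ALOE estimate of P(union_j {omega_j^T x >= tau_j}) for x ~ N(0, I_d).
    Assumes unit-norm rows omega_j, so P_j = Phi(-tau_j)."""
    rng = np.random.default_rng(rng)
    omegas = np.asarray(omegas, dtype=float)     # (J, d)
    taus = np.asarray(taus, dtype=float)         # (J,)
    P = stats.norm.sf(taus)                      # individual event probabilities P_j
    mu_bar = P.sum()                             # union bound
    alphas = P / mu_bar                          # mixture weights alpha_j = P_j / mu_bar
    J, d = omegas.shape
    est = np.empty(n_samples)
    for i in range(n_samples):
        j = rng.choice(J, p=alphas)
        w = omegas[j]
        # Sample x ~ N(0, I) conditioned on {w^T x >= tau_j}: the component
        # along w is truncated normal (inverse-CDF trick); the rest is unchanged.
        z = stats.norm.isf(rng.uniform() * P[j])           # z >= tau_j by construction
        y = rng.standard_normal(d)
        x = y - (w @ y) * w + z * w
        S = np.count_nonzero(omegas @ x >= taus - 1e-12)   # number of active events
        est[i] = mu_bar / S                                # ALOE weight mu_bar / S(x)
    return est.mean(), est.std(ddof=1) / np.sqrt(n_samples)
```

For two orthogonal constraints the estimate agrees with the inclusion-exclusion value $2P - P^2$ to Monte Carlo accuracy, with a standard error far below that of naive sampling.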
These power-sampling schemes underpin adaptive and scenario-based formulations in chance-constrained DC-OPF for grid security (Lukashevich et al., 2021), where sample-efficient scenario generation is achieved by mixture importance sampling conditional on constraint violations.
2. Power-aware Adaptive Sampling for Embedded Control
In embedded control, sampling rate regulation is critically linked to control performance and power consumption. The adaptive regulation algorithms developed by Naskar et al. pursue online selection of control task sampling periods to minimize quadratic LQG cost under an energy budget (Raha, 2018):
- Mathematical model: a sampling period $h_k$ is assigned to each disturbance/noise level $k$; the selection trades off total energy consumption against average quadratic control cost over the operating horizon.
- Dominance pruning and greedy-knapsack algorithms select a mapping from noise levels to sampling periods under the energy budget.
- Optimality guarantees are established for the pruning step; the greedy heuristic trades formal guarantees for low computational complexity.
- Quantitative results: 10–30% cost improvement over fixed-rate controllers, up to 50% power reduction, and substantial battery-life extension (up to $17$ months).
Assumptions include linear time-invariant dynamics and constant per-sample energy; limitations of the current approaches include limited scalability to non-linear plants, absence of delay modeling, and the lack of formal worst-case guarantees for the greedy heuristics.
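The upgrade logic above can be sketched as a greedy knapsack-style routine. The code below is a simplified stand-in for the dominance-pruning and greedy algorithms of Naskar et al., not their exact method: it starts every noise level at its cheapest-energy option, then repeatedly buys the upgrade with the best cost reduction per extra unit of energy that still fits the budget. The option tuples `(energy, lqg_cost, period)` are illustrative.

```python
def assign_periods(levels, energy_budget):
    """Greedy sampling-period assignment under an energy budget.

    levels: dict mapping noise level -> list of (energy, lqg_cost, period)
            candidate operating points for that level.
    Returns the chosen option per level and the total energy spent.
    """
    choice, spent = {}, 0.0
    for lvl, opts in levels.items():
        opts.sort()                       # cheapest-energy option first
        choice[lvl] = opts[0]
        spent += opts[0][0]
    improved = True
    while improved:
        improved = False
        best = None                       # (cost_drop / energy_rise, level, option)
        for lvl, opts in levels.items():
            e0, c0, _ = choice[lvl]
            for e, c, p in opts:
                de, dc = e - e0, c0 - c   # extra energy, cost reduction
                if de > 0 and dc > 0 and spent + de <= energy_budget:
                    ratio = dc / de
                    if best is None or ratio > best[0]:
                        best = (ratio, lvl, (e, c, p))
        if best is not None:
            _, lvl, opt = best
            spent += opt[0] - choice[lvl][0]
            choice[lvl] = opt
            improved = True
    return choice, spent
```

Dominance pruning would be applied first (discarding any option that another option beats on both energy and cost); the greedy loop then only ever sees Pareto-efficient candidates.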
3. Compressive and Noncompressive Power Spectrum Sampling
Blind and non-blind sub-Nyquist power-spectrum estimation is a key area for power-sampling, notably in cognitive radio and wideband spectrum sensing (Cohen et al., 2013, Lexa et al., 2011). In compressive spectral estimation, sampling patterns are chosen to optimize recovery guarantees or minimize system complexity:
- Multi-coset (periodic nonuniform) sampling: a piecewise-constant PSD is estimated from $q$ of $L$ uniform-grid cosets, giving an average rate of $q/(LT)$, well below the Nyquist rate $1/T$.
- Under sparsity, compressive estimators (e.g., $\ell_1$-regularized or nonnegative least squares) exploit measurement matrices with subsampled DFT structure; noncompressive (least squares) estimators rely on full-rank system matrices.
- The minimal sampling rate for exact power-spectrum reconstruction varies with prior knowledge:
- Non-sparse: half the Nyquist rate.
- Sparse, support known: half the Landau rate.
- Sparse, support unknown (blind): the Landau rate.
- NNLS is particularly efficient if the power spectrum is nonnegative and sparse. Algorithmic complexity, recovery error bounds, and trade-offs in resolution versus sampling rate are extensively characterized.
Recent advances in generalized coprime sampling reconstruct autocorrelation and power spectrum from sub-Nyquist samples via FFT-based algorithms. These methods maintain performance and speed for high-dimensional, distributed, or real-time applications (Jiang et al., 2023).
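A minimal NNLS recovery step can be illustrated as follows. This toy sketch uses a random Gaussian measurement matrix in place of the actual subsampled-DFT autocorrelation structure of a multi-coset front end; the dimensions and names are illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

def recover_sparse_psd(A, r):
    """Recover a nonnegative, sparse piecewise-constant PSD s from
    compressive autocorrelation measurements r ~= A @ s via NNLS."""
    s_hat, _residual_norm = nnls(A, r)
    return s_hat

# Toy demo: 64 spectral bins observed through 24 sub-Nyquist measurements.
rng = np.random.default_rng(1)
K, M = 64, 24
A = rng.standard_normal((M, K))           # stand-in measurement matrix
s_true = np.zeros(K)
s_true[[5, 20, 41]] = [2.0, 1.0, 3.0]     # sparse nonnegative PSD
r = A @ s_true                            # noiseless measurements
s_hat = recover_sparse_psd(A, r)
```

Because the true PSD is nonnegative and sparse, NNLS attains (near-)zero residual without any explicit regularization parameter, which is what makes it attractive in this setting.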
4. Statistical Experimental Design: Power in Sample Selection
Optimal sampling design for regression and time series under computational or energy constraints leverages the criterion of statistical power—in this context, the information matrix determinant ("D-optimality"). Streaming leverage-score sampling optimizes information acquisition under a sampling rate constraint (Xie et al., 2023):
- Sampling probabilities proportional to (thresholded) leverage scores enforce approximate D-optimality for i.i.d. elliptical covariates.
- Mixture rules combine hard thresholding with a Bernoulli baseline for robustness.
- Empirically, leverage-score and relaxed-leverage sampling halve estimation error and prediction RMSE vis-à-vis baseline random sampling, with roughly a tenfold reduction in floating-point cost.
This methodology is shown to outperform uniform random sampling in online grid load estimation/prediction, and extensions to heavy-tailed or heteroscedastic settings are empirically validated.
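The streaming idea can be sketched compactly. The code below is a simplified illustration, not the exact rule of Xie et al.: it maintains a running second-moment matrix, scores each arriving covariate by its approximate leverage, and keeps it with probability scaled to hit a target sampling rate on average (leverage scores sum to the dimension $d$, which motivates the normalization).

```python
import numpy as np

def leverage_score_stream(X, rate, ridge=1e-6, rng=None):
    """Keep roughly a `rate` fraction of streaming rows of X, preferring
    high-leverage (informative) covariates.  Approximate sketch."""
    rng = np.random.default_rng(rng)
    n_rows, d = X.shape
    Sigma = ridge * np.eye(d)              # running sum of outer products
    kept, n_seen = [], 0
    for x in X:
        n_seen += 1
        Sigma += np.outer(x, x)
        h = x @ np.linalg.solve(Sigma, x)  # approximate leverage score of x
        # leverage scores sum to ~d, so rate * n * h / d targets rate * n keeps
        p = min(1.0, rate * n_seen * h / d)
        if rng.uniform() < p:
            kept.append(x)
    return np.array(kept)
```

A hard-thresholding variant would deterministically keep rows with $p \ge 1$ and mix in a small Bernoulli baseline for robustness, as described above.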
5. Physical Power and Sampling: ADC Energy Harvesting
The "eSampling" framework establishes a direct link between signal power, sampling rate, and energy harvest/consumption in ADC architectures (Jain et al., 2020). Key results include:
- For bandlimited WSS inputs, sampling at the Nyquist rate with SAR ADCs of up to $12$ bits allows operation in net-zero or even energy-positive regimes.
- Analytic fidelity–harvest tradeoffs characterize sampled-signal distortion (NMSE) against harvested energy and conversion cost as functions of the sampling rate and quantizer resolution.
- A circuit implementation in 65 nm CMOS corroborates the theory, with measured harvested-to-consumed energy ratios at 8-bit resolution and 40 MHz sampling. Guidelines are provided for selecting the sampling rate and quantization settings under joint fidelity and energy-harvest constraints.
6. Sampling Strategies, Correlation Structures, and Coverage
Power-sampling is applied in the context of training data generation for power grid surrogates (Balduin et al., 2022):
- The Correlation Sampling algorithm enforces strong dependencies in the sample generation space by adjusting samples according to partial correlation matrices computed from historical data.
- This approach achieves superior coverage of feasible operational states, covering a larger fraction of the convex hull of real data than copula-based methods.
- It differentiates itself from SRS and LHS by retaining space-filling properties and realistic dependencies while being less restrictive than full joint-copula sampling.
Empirical evaluation highlights the benefits in ML surrogate training, with improved generalization and reduced coverage bias.
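A minimal way to impose a target dependence structure on generated samples is a Cholesky transform of independent draws. This is a simplified Gaussian stand-in for the Correlation Sampling algorithm of Balduin et al. (which adjusts samples via partial correlation matrices from historical data), shown here only to make the mechanism concrete.

```python
import numpy as np

def correlated_samples(n, corr, rng=None):
    """Draw n standard-normal sample vectors whose empirical correlation
    matches the target correlation matrix `corr` (Cholesky transform)."""
    rng = np.random.default_rng(rng)
    corr = np.asarray(corr, dtype=float)
    L = np.linalg.cholesky(corr)          # corr = L @ L.T
    Z = rng.standard_normal((n, corr.shape[0]))
    return Z @ L.T                        # rows now carry the target correlation
```

In the grid-surrogate setting the same transform would be applied in a copula-style fashion to realistic marginal distributions (load levels, generation setpoints) rather than to raw normals.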
7. Advanced Statistical Power Estimation: Bioequivalence Simulation
Segment-based simulation for bioequivalence test power curves exploits inverse-CDF mapping to focus simulation effort efficiently (Hagar et al., 2023):
- Power as a tail probability in the sampling distribution is approximated by mapping test statistics onto rejection regions, then root-finding sample-size boundaries per simulation draw.
- The approach achieves unbiased sample-size recommendations and substantial speedups over brute-force simulation, applicable across a variety of clinical and nonparametric designs.
The combination of quasi-Monte Carlo sampling and efficient root-finding ensures both statistical validity and computational tractability.
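The per-draw root-finding idea can be made concrete with a toy one-sided z-test (a stand-in for the bioequivalence TOST setting of Hagar et al.): each stratified quasi-random draw fixes the noise realization, the smallest rejecting sample size is solved in closed form, and the entire power curve is then read off the sorted boundaries.

```python
import numpy as np
from scipy import stats

def power_curve(delta, alpha, n_grid, n_draws=4096):
    """Power of a one-sided z-test (H0: effect <= 0) at each sample size
    in n_grid, via per-draw sample-size boundaries.

    Reject iff sqrt(n)*delta + Z >= z_{1-alpha}, Z ~ N(0,1), so for each
    draw the boundary is n* = max((z_{1-alpha} - Z)/delta, 0)^2 and
    power(n) = P(n* <= n).
    """
    u = (np.arange(n_draws) + 0.5) / n_draws     # stratified uniforms (quasi-MC)
    z = stats.norm.ppf(u)                        # noise quantiles, one per draw
    za = stats.norm.ppf(1.0 - alpha)
    boundary = np.maximum((za - z) / delta, 0.0) ** 2
    boundary.sort()
    # fraction of draws whose boundary lies at or below each n in n_grid
    return np.searchsorted(boundary, n_grid, side="right") / n_draws
```

Sorting the boundaries once yields power at every sample size simultaneously, which is exactly the economy over brute-force simulation (one full simulation per candidate $n$) that the segment-based approach exploits.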
Power-sampling methods define a geometric and probabilistic framework where sample selection is dictated not only by classical statistical efficiency, but also by physical power constraints, rare event structure, spectral energy densities, and correlation topology. Across disciplines—statistical inference, control, grid reliability, spectrum sensing, and hardware design—these principles enable scalable, energy-efficient, statistically powerful sampling strategies.