
Progressive Hybrid Censoring

Updated 18 January 2026
  • Progressive Hybrid Censoring (PHC) is a flexible framework that generalizes Type-I, Type-II, and progressive censoring by allowing both pre-set failure targets and time limits with controlled removals.
  • It employs tailored likelihood functions with contributions from observed failures and removals, integrating frequentist and Bayesian inference methods like MCMC for precise parameter estimation.
  • PHC is widely applied in accelerated life testing and competing risks analysis, with simulation studies confirming its efficiency, reduced bias, and optimal design under complex experimental constraints.

Progressive Hybrid Censoring (PHC) is a class of censoring schemes that generalizes traditional Type-I, Type-II, and progressive censoring by permitting both a pre-specified failure target and a fixed time cap, while optionally allowing removals of surviving units at intermediate failures. PHC schemes offer a flexible framework for reliability, survival, and accelerated life testing under complex experimental constraints, supporting both single- and multi-cause (competing risks) lifetime models and admitting both frequentist and Bayesian inference approaches (Dutta et al., 2023, Konar et al., 11 Jan 2026, Asar et al., 2019, Koley et al., 2017).

1. Formal Definition and Variants

The canonical PHC design begins with $n$ test units, with the experiment progressing until either (a) $m$ failures have been observed or (b) a pre-set censoring time $T$ is reached; removals of survivors occur according to a user-defined removal vector $R = (R_1, \ldots, R_m)$ satisfying structural constraints (e.g., $\sum_{i=1}^m R_i = n - m$ or $\le n - m$, depending on the variant). There are several closely related schemes:

  • Progressive Type-II Hybrid Censoring: At each observed failure $i$, $R_i$ survivors are randomly censored; stopping occurs at $\min\{X_{m:n},\, T\}$, where $X_{m:n}$ is the $m$-th failure time. If $X_{m:n} < T$, precisely $m$ failures are observed; otherwise, only failures before $T$ contribute (Koley et al., 2017, Asar et al., 2019).
  • Adaptive Type-II Progressive Hybrid Censoring (AT-II PHC): This scheme adapts the removals and stopping rules after the time threshold $T$ is passed but before $m$ failures are observed. The removal vector $R$ is completed by setting $R_{d+1} = \cdots = R_{m-1} = 0$ and $R_m = n - m - \sum_{i=1}^d R_i$, where $d$ is such that $X_{d:m:n} < T < X_{d+1:m:n}$, thereby ensuring exactly $m$ failures in all data realizations (Dutta et al., 2023).
  • Generalized PHC: Incorporates an additional parameter $k$ guaranteeing at least $k$ failures before termination, combining the minimum-failure, maximum-failure, and time-out policies (Koley et al., 2017).

This structural flexibility enables PHC to recover classical censoring as special cases (Type-I: $m = n$, $R_i = 0$; Type-II: $T \to \infty$, $R_i = 0$) and is crucial for balancing experimental efficiency and statistical power.
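To make the design concrete, the stopping and removal rules above can be simulated directly. The sketch below assumes exponential lifetimes and a hypothetical removal plan; it illustrates the sampling mechanism only and is not an implementation from the cited papers.

```python
import random

def phc_sample(n, m, T, R, rate=1.0, seed=0):
    """Simulate a progressive Type-II hybrid censored sample.

    Exponential(rate) lifetimes are an illustrative choice. At the i-th
    failure, R[i] random survivors are withdrawn; the test stops at the
    m-th failure or at time T, whichever comes first.
    """
    rng = random.Random(seed)
    alive = [rng.expovariate(rate) for _ in range(n)]  # latent lifetimes
    failures, removals = [], []
    for i in range(m):
        if not alive:
            break
        t = min(alive)            # next failure among surviving units
        if t > T:                 # time cap reached first: stop at T
            break
        alive.remove(t)
        failures.append(t)
        k = min(R[i], len(alive))  # withdraw R[i] random survivors
        for _ in range(k):
            alive.remove(rng.choice(alive))
        removals.append(k)
    return failures, removals

obs, rem = phc_sample(n=20, m=8, T=2.0, R=[1, 1, 1, 1, 1, 1, 1, 5])
print(len(obs), rem)
```

Because the latent lifetimes of withdrawn units all exceed the current failure time, removing random survivors reproduces the progressive-censoring mechanism exactly, for any lifetime distribution.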

2. Likelihood Structure and Parameter Estimation

The likelihood under PHC incorporates contributions from observed failures, progressively censored removals, and possible random termination at $T$:

$$L(\theta; \mathrm{data}) = \left[\prod_{j=1}^{J} f(x_{(j)};\theta)\right] \left[S(x_{(J)};\theta)\right]^{n-J-\sum_{i=1}^{J} R_i} \prod_{i=1}^{J} \left[S(x_{(i)};\theta)\right]^{R_i}$$

where $f(\cdot;\theta)$ is the density, $S(\cdot;\theta)$ the survivor function, $J$ the number of observed failures ($J \le m$), and $R_i$ the number of removals at each failure (Koley et al., 2017).
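This likelihood is straightforward to code once `logpdf` and `logsurv` are supplied. A minimal sketch, using an exponential working model with hypothetical data as the example:

```python
import math

def phc_loglik(failures, removals, n, logpdf, logsurv):
    """Log-likelihood of the displayed PHC form: observed-failure
    densities, survivor terms for removals at each failure, and a
    survivor term at the last failure for the remaining units."""
    J = len(failures)
    ll = sum(logpdf(x) for x in failures)
    ll += sum(r * logsurv(x) for x, r in zip(failures, removals))
    ll += (n - J - sum(removals)) * logsurv(failures[-1])
    return ll

# hypothetical exponential working model with rate lam = 1.2
lam = 1.2
ll = phc_loglik(
    failures=[0.1, 0.4, 0.9], removals=[1, 0, 2], n=10,
    logpdf=lambda x: math.log(lam) - lam * x,
    logsurv=lambda x: -lam * x,
)
print(round(ll, 4))
```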

For Weibull models the likelihood simplifies, and the log-likelihood for the two-parameter Weibull under PHC is

$$l(\alpha,\beta) = r\ln\alpha + r\ln\beta + (\alpha-1)\sum_{i=1}^{r}\ln X_{(i)} - \beta\left[\sum_{i=1}^{r}(1+R_i)X_{(i)}^{\alpha} + R_T C^{\alpha}\right]$$

where $r$ is the number of observed failures, $C$ the censoring time, and $R_T$ the number of surviving units removed at $C$; closed-form updates are available for $\beta$ given $\alpha$, and vice versa (Asar et al., 2019, Konar et al., 11 Jan 2026).
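A direct transcription of this log-likelihood, together with the closed-form $\beta$-update obtained by setting $\partial l/\partial\beta = 0$, might look as follows (the data, `RT`, and `C` values are hypothetical):

```python
import math

def weibull_phc_loglik(alpha, beta, X, R, RT=0, C=0.0):
    """Two-parameter Weibull log-likelihood under PHC, as displayed
    above: r failures X with removal counts R, plus RT survivors
    censored at time C."""
    r = len(X)
    A = sum((1 + Ri) * x ** alpha for x, Ri in zip(X, R)) + RT * C ** alpha
    return (r * math.log(alpha) + r * math.log(beta)
            + (alpha - 1) * sum(math.log(x) for x in X) - beta * A)

def beta_hat(alpha, X, R, RT=0, C=0.0):
    """Closed-form maximizer of beta given alpha: dl/dbeta = r/beta - A
    vanishes at beta = r / A(alpha)."""
    A = sum((1 + Ri) * x ** alpha for x, Ri in zip(X, R)) + RT * C ** alpha
    return len(X) / A

X, R = [0.3, 0.7, 1.1], [1, 0, 2]   # hypothetical censored sample
b = beta_hat(1.5, X, R, RT=2, C=1.4)
print(round(b, 4))
```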

Score equations are solved via Newton–Raphson or EM-type algorithms; in the presence of missing data (progressively censored lifetimes), the EM and Stochastic EM (SEM) algorithms offer increased stability (Asar et al., 2019).
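One robust way to solve the score equation in $\alpha$ is to profile out $\beta$ via its closed-form update and root-find on a numerical profile score. The sketch below uses bisection as a simple stand-in for the Newton–Raphson step described above (illustrative data, not from the cited papers):

```python
import math

def profile_loglik(alpha, X, R, RT=0, C=0.0):
    """Weibull PHC profile log-likelihood of alpha, with beta replaced
    by its closed-form maximizer r / A(alpha)."""
    r = len(X)
    A = sum((1 + Ri) * x ** alpha for x, Ri in zip(X, R)) + RT * C ** alpha
    beta = r / A
    return (r * math.log(alpha) + r * math.log(beta)
            + (alpha - 1) * sum(math.log(x) for x in X) - beta * A)

def solve_alpha(X, R, RT=0, C=0.0, lo=0.05, hi=20.0):
    """Bisection on a central-difference profile score; the profile
    log-likelihood tends to -inf at both ends of the bracket, so the
    score changes sign on [lo, hi]."""
    h = 1e-6
    score = lambda a: (profile_loglik(a + h, X, R, RT, C)
                       - profile_loglik(a - h, X, R, RT, C)) / (2 * h)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_hat = solve_alpha([0.3, 0.7, 1.1, 1.6], [1, 0, 0, 2], RT=2, C=1.8)
print(round(a_hat, 3))
```

Real implementations use analytic scores and Hessians; the numerical derivative here keeps the sketch short.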

Under competing risks, the likelihood incorporates both failure times and cause indicators, as in Marshall–Olkin bivariate Weibull (MOBW) settings (Dutta et al., 2023):

$$L(\lambda_0, \lambda_1, \lambda_2, \alpha) \propto \alpha^{m} \lambda_0^{m_0} \lambda_1^{m_1} \lambda_2^{m_2} \lambda_{012}^{m_3} \prod_{i=1}^{m} y_i^{\alpha-1} \exp\left[-\lambda_{012} A(\alpha)\right]$$

where $m_j$ is the count of failures of cause $j$, and $A(\alpha)$ captures the cumulative effect of failure and censoring times (Dutta et al., 2023).

3. Bayesian Inference and Posterior Sampling

Bayesian analysis under PHC leverages conjugate/matching priors such as Gamma (for Weibull), and Beta–Gamma or Gamma–Dirichlet (for competing risk parameters):

  • For Weibull, independent Gamma priors on $(\alpha, \beta)$ yield tractable but nonstandard posteriors; estimators under squared error, LINEX, and generalized entropy losses are computed via multidimensional integration or MCMC (e.g., Metropolis–Hastings) (Asar et al., 2019).
  • For competing risks (MOBW or exponential), the Gamma–Dirichlet or Beta–Gamma prior yields conditionally tractable posteriors, with full-joint and marginal densities available for Gibbs or adaptive rejection sampling (Dutta et al., 2023, Koley et al., 2017).
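As a concrete (and deliberately simple) instance of such posterior sampling, the sketch below runs a random-walk Metropolis–Hastings chain for an exponential rate under PHC with a Gamma prior; the prior parameters and data are hypothetical:

```python
import math
import random

def log_post(lam, failures, removals, n, a0=2.0, b0=1.0):
    """Log-posterior (up to a constant) of an exponential rate under
    PHC with a Gamma(a0, b0) prior; remaining units are censored at the
    last observed failure (illustrative working model)."""
    if lam <= 0:
        return -math.inf
    J = len(failures)
    ttt = sum((1 + r) * x for x, r in zip(failures, removals)) \
        + (n - J - sum(removals)) * failures[-1]
    return (a0 + J - 1) * math.log(lam) - (b0 + ttt) * lam

def mh_chain(failures, removals, n, iters=20000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings on lambda; first half discarded
    as burn-in. A minimal sketch, not a tuned production sampler."""
    rng = random.Random(seed)
    lam = 1.0
    lp = log_post(lam, failures, removals, n)
    out = []
    for _ in range(iters):
        prop = lam + rng.gauss(0.0, step)
        lp_prop = log_post(prop, failures, removals, n)
        if math.log(rng.random()) < lp_prop - lp:
            lam, lp = prop, lp_prop
        out.append(lam)
    return out[iters // 2:]

draws = mh_chain([0.1, 0.4, 0.9], [1, 0, 2], n=10)
print(round(sum(draws) / len(draws), 2))
```

In this conjugate exponential case the exact posterior is Gamma, so the chain's mean can be checked against the analytic posterior mean; the MH machinery carries over unchanged to the nonconjugate Weibull and MOBW posteriors.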

Bayes estimators under PHC are typically computed as posterior means or as transformations thereof for alternative loss functions. Highest posterior density (HPD) intervals are constructed by sorting marginal MCMC samples and finding the shortest interval of the desired posterior mass (Dutta et al., 2023, Asar et al., 2019). Posterior convergence is routinely checked via multivariate Gelman–Rubin diagnostics (Dutta et al., 2023).
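The sort-and-scan HPD construction can be written in a few lines; this sketch assumes a scalar parameter with a unimodal marginal posterior, and the draws shown are a toy right-skewed sample:

```python
import math

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the draws: sort, then
    scan every window of the required size and keep the narrowest."""
    s = sorted(samples)
    k = int(math.ceil(mass * len(s)))
    i = min(range(len(s) - k + 1), key=lambda j: s[j + k - 1] - s[j])
    return s[i], s[i + k - 1]

# toy right-skewed draws standing in for marginal MCMC output
draws = [0.1 * j ** 2 for j in range(1, 101)]
lo, hi = hpd_interval(draws, mass=0.90)
print(lo, hi)
```

For a skewed posterior the HPD interval hugs the high-density side, which is why it is typically shorter than an equal-tailed or asymptotic interval.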

4. Properties, Optimal Design, and Large-Sample Theory

Maximum likelihood estimators (MLEs) and Bayes estimators under PHC are consistent and asymptotically normal under regularity conditions (fixed removal plan with $r \to \infty$, removals not dominating sample size) (Konar et al., 11 Jan 2026). The observed Fisher information is computable in closed form for Weibull and competing-risks settings and is essential for constructing asymptotic confidence intervals and quantifying estimation precision (Dutta et al., 2023, Konar et al., 11 Jan 2026).

Optimal design of PHC schemes uses information-theoretic criteria calculated from the observed information matrix $I(\hat\Theta)$ evaluated at plug-in estimates:

  • A-optimality: minimize $\operatorname{tr}(I^{-1})$, the sum of parameter variances.
  • D-optimality: minimize $\det(I^{-1})$, the generalized variance.
  • F-optimality: maximize $\operatorname{tr}(I)$, the total observed Fisher information.

Selection of $(T, R)$ to optimize these criteria yields schemes balancing efficiency, cost, and inferential quality (Dutta et al., 2023).
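Given an observed information matrix at the plug-in estimate (here a hypothetical 2×2 example for a two-parameter model), the three criteria reduce to a trace, a determinant, and a trace of the inverse:

```python
def phc_design_criteria(I):
    """A-, D-, and F-optimality criteria from a 2x2 observed
    information matrix I (list of lists); small enough to invert by
    hand via the adjugate formula."""
    det = I[0][0] * I[1][1] - I[0][1] * I[1][0]
    tr_inv = (I[0][0] + I[1][1]) / det   # tr(I^{-1}) for a 2x2 matrix
    return {
        "A": tr_inv,             # minimize: sum of asymptotic variances
        "D": 1.0 / det,          # minimize: det(I^{-1}), generalized variance
        "F": I[0][0] + I[1][1],  # maximize: total observed information
    }

# hypothetical observed information at the plug-in estimate
crit = phc_design_criteria([[4.0, 1.0], [1.0, 3.0]])
print(crit)
```

Comparing candidate $(T, R)$ plans then amounts to evaluating these numbers for each plan and choosing the best under the chosen criterion.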

5. Simulation Studies and Empirical Performance

Monte Carlo studies consistently indicate the following (Dutta et al., 2023, Konar et al., 11 Jan 2026, Asar et al., 2019, Koley et al., 2017):

  • Both PHC and adaptive PHC (APHC) MLEs are nearly unbiased with decreasing MSE as sample size and number of failures increase.
  • EM-based MLEs outperform Newton–Raphson or SEM-based variants in terms of bias and MSE for Weibull models.
  • Bayes estimators outperform MLEs with gains amplified when informative or matching priors are used; LINEX and entropy-loss Bayes estimates exhibit reduced bias/MSE compared to squared-error Bayes.
  • HPD intervals are typically narrower than asymptotic intervals with comparable coverage; bootstrap intervals (when available) generally achieve shorter lengths and nominal coverage (Koley et al., 2017).
  • The performance of all estimators deteriorates as the proportion of missing or unknown causes increases.
  • APHC achieves slightly but consistently lower MSE than PHC for a fixed sample size and removal plan.

Key finite-sample metrics include average bias, mean squared error, average interval width, and coverage probability, all reinforcing the statistical reliability and robustness of PHC designs under moderate to large sample sizes (Dutta et al., 2023, Konar et al., 11 Jan 2026, Asar et al., 2019, Koley et al., 2017).
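A minimal Monte Carlo of this kind, for the exponential-rate MLE under a progressive removal plan, estimates the finite-sample bias and MSE (an illustrative design with hypothetical settings; the cited studies use Weibull and competing-risks models, and here the $m$-th failure is assumed to precede the time cap):

```python
import random

def mc_study(lam_true=1.0, n=30, m=15, reps=2000, seed=7):
    """Monte Carlo bias and MSE of the exponential-rate MLE under a
    progressive plan with one removal at each of the first five
    failures; survivors are censored at the m-th failure."""
    rng = random.Random(seed)
    R = [1] * 5 + [0] * (m - 5)
    est = []
    for _ in range(reps):
        alive = [rng.expovariate(lam_true) for _ in range(n)]
        fails, rem = [], []
        for i in range(m):
            t = min(alive)
            alive.remove(t)
            fails.append(t)
            k = min(R[i], len(alive))
            for _ in range(k):
                alive.remove(rng.choice(alive))
            rem.append(k)
        # total time on test: failures, removals, survivors at X_(m)
        ttt = sum((1 + r) * x for x, r in zip(fails, rem)) \
            + len(alive) * fails[-1]
        est.append(m / ttt)   # MLE of the exponential rate
    bias = sum(est) / reps - lam_true
    mse = sum((e - lam_true) ** 2 for e in est) / reps
    return bias, mse

bias, mse = mc_study()
print(round(bias, 3), round(mse, 3))
```

Raising `n` and `m` in this sketch shrinks both metrics, matching the qualitative pattern reported in the simulation studies above.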

6. Applications to Accelerated Life Testing and Competing Risks

PHC is extensively used in accelerated life testing (ALT) with Weibull lifetime distributions and covariate-dependent (stress-dependent) parameterizations. A two-step estimation framework is standard: MLEs of Weibull parameters are obtained by PHC likelihood, and then regressed on stress covariates to estimate structural coefficients (e.g., via OLS with Murphy–Topel variance correction) (Konar et al., 11 Jan 2026).
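The two-step idea can be sketched as follows, using an exponential working model with log-rate linear in stress; the coefficients, stress levels, and censoring plan are hypothetical, and the Murphy–Topel variance correction is omitted:

```python
import math
import random

def fit_rate(lifetimes, m, R):
    """Step 1 (sketch): exponential-rate MLE from a progressive
    Type-II censored sample at one stress level; R[i] random survivors
    are withdrawn at the i-th failure, the rest censored at X_(m)."""
    alive = list(lifetimes)
    fails, rem = [], []
    for i in range(m):
        t = min(alive)
        alive.remove(t)
        fails.append(t)
        k = min(R[i], len(alive))
        for _ in range(k):
            alive.remove(random.choice(alive))
        rem.append(k)
    ttt = sum((1 + r) * x for x, r in zip(fails, rem)) \
        + len(alive) * fails[-1]
    return len(fails) / ttt

random.seed(3)
stresses = [1.0, 2.0, 3.0]
b0_true, b1_true = -1.0, 0.8          # hypothetical: log-rate = b0 + b1 * s
logrates = []
for s in stresses:
    lam = math.exp(b0_true + b1_true * s)
    data = [random.expovariate(lam) for _ in range(200)]
    logrates.append(math.log(fit_rate(data, m=100, R=[1] * 50 + [0] * 50)))

# Step 2: OLS of fitted log-rates on stress
sbar = sum(stresses) / len(stresses)
ybar = sum(logrates) / len(logrates)
b1 = sum((s - sbar) * (y - ybar) for s, y in zip(stresses, logrates)) \
   / sum((s - sbar) ** 2 for s in stresses)
b0 = ybar - b1 * sbar
print(round(b0, 2), round(b1, 2))
```

The second-stage regression treats the first-stage MLEs as data, which is exactly why the Murphy–Topel correction is needed in practice to get honest standard errors for the structural coefficients.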

In competing risks, PHC supports both independent and dependent cause models, as in Marshall–Olkin bivariate Weibull structures. The progressive-removal and hybrid stopping rules allow for flexible, efficient inference in multi-cause reliability studies and are particularly suited to experimental designs constrained by cost, time, or unit attrition (Dutta et al., 2023, Koley et al., 2017).

Practical analyses demonstrate the adequacy of PHC in real-world reliability studies, including soccer game event timing and traditional materials testing, with both MLE and Bayes point/interval estimates available and optimal censoring plans accurately identified via information criteria (Dutta et al., 2023, Konar et al., 11 Jan 2026, Asar et al., 2019, Koley et al., 2017).

7. Summary Table: Key PHC Elements

| Aspect | PHC Feature | Reference |
| --- | --- | --- |
| Stopping criterion | $\min\{X_{m:n}, T\}$; observed failures $J \le m$ | Koley et al., 2017; Konar et al., 11 Jan 2026 |
| Progressive removals | $R_i$ removals after $i$-th failure; $\sum R_i \le n - m$ | Koley et al., 2017; Dutta et al., 2023 |
| Likelihood structure | Failure, removal, and survivor contributions; see Section 2 | Koley et al., 2017; Asar et al., 2019 |
| Bayesian priors | Gamma, Beta–Gamma, Gamma–Dirichlet (model-dependent) | Dutta et al., 2023; Koley et al., 2017 |
| Estimation methods | NR/EM/SEM for ML; MCMC/Laplace for Bayes; HPD intervals | Asar et al., 2019; Dutta et al., 2023 |
| Optimality criteria | A-, D-, F-optimality using observed Fisher information | Dutta et al., 2023 |
| Empirical validation | MC bias/MSE/interval/coverage; real data analyses | Dutta et al., 2023; Konar et al., 11 Jan 2026 |

Comprehensive consideration of PHC and its variants enables applied researchers to design optimally informative life tests and failure experiments under realistic physical and economic constraints, leveraging advanced frequentist and Bayesian inference tailored to complex experimental protocols (Dutta et al., 2023, Konar et al., 11 Jan 2026, Asar et al., 2019, Koley et al., 2017).
