ML-Based Feasibility Assessment
- ML-based Feasibility Assessment is a technique that uses machine learning models to predict and guarantee solution feasibility within physical, computational, and economic constraints.
- It employs classification, regression, and decision-focused learning to quantify feasibility and suggest corrective actions in diverse domains such as power systems and process design.
- The methodology leverages explicit metrics like accuracy, MAE, recovery rates, and reliability scores to ensure trustworthiness and guide efficient decision-making.
Machine learning–based feasibility assessment refers to the systematic evaluation, prediction, and guarantee of feasibility—defined with respect to physical, computational, economic, or operational constraints—using ML models as decision aids, surrogate predictors, or direct components of optimization and decision-making pipelines. Applications span power systems security, process design, constrained optimization, risk assessment, site selection, insurance ratemaking, and adversarial threat modeling. The technical literature demonstrates a range of rigorous methodologies for (i) predicting feasibility of candidate solutions, (ii) suggesting corrective actions or minimal repairs, (iii) quantifying trustworthiness of ML predictions, and (iv) integrating feasibility awareness into training and deployment workflows.
1. Mathematical and Algorithmic Formulations
Across diverse application domains, ML-based feasibility assessment is formalized as (i) a binary or multiclass classification problem (feasible/infeasible or categorical labeling), (ii) a constrained regression problem where the output must obey pre-defined feasibility domains, or (iii) a decision-focused learning setting where model parameters are directly optimized to maximize feasibility under downstream constraints.
In power system operation, for example, feasibility of an operating point (state x) is defined by physical constraints: x is feasible if and only if the equality (e.g., load flow) and inequality (e.g., voltage, line rating) constraints are satisfied. ML models serve as feasibility classifiers or as regressors of full solution vectors, trained on labeled (feasible/infeasible) data or on full simulation data (Schaefer et al., 2020, Mohammadian et al., 8 Apr 2025). In constrained optimization, when parameters θ in the constraints must be predicted from context, decision-focused learning (DFL) optimizes the prediction θ̂ so that the resulting solution is feasible with high probability under the true unknown θ (Mandi et al., 6 Oct 2025).
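As a minimal illustration of this framing, the sketch below labels a candidate operating point feasible only when toy equality and inequality constraints hold; the specific constraints are invented stand-ins for load-flow and voltage-band conditions, not taken from the cited works:

```python
def is_feasible(x, tol=1e-6):
    """Label a candidate operating point x = (p, v) feasible iff all
    toy constraints hold (illustrative, not an actual power-flow model)."""
    p, v = x
    # Equality constraint (stand-in for a load-flow equation): p = v**2
    eq_violation = abs(p - v ** 2)
    # Inequality constraints (stand-in for a voltage band): 0.95 <= v <= 1.05
    ineq_violation = max(0.0, 0.95 - v) + max(0.0, v - 1.05)
    return eq_violation <= tol and ineq_violation <= tol
```

A trained classifier then approximates this indicator from labeled samples, avoiding the expensive physics-based evaluation at query time.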
In general, ML-based feasibility assessment involves:
- Sampling or collecting input–output pairs (x, y), where y is a binary feasibility label or a continuous measure of constraint violation.
- Training a classifier, regressor, or surrogate model mapping the input x to the feasibility label/score or to an entire feasible solution.
- Evaluating performance using explicit metrics (accuracy, recall, regret, infeasibility rate, mean error).
Advanced algorithms additionally provide counterfactuals or minimal repairs if an input is labeled infeasible, or employ an explicit feasibility layer enforcing hard constraints post-hoc (Mohammadian et al., 8 Apr 2025, Ramesh et al., 2022).
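This general workflow can be sketched end to end on synthetic data; the unit-disc feasibility region, the 1-NN classifier, and the train/test split below are all illustrative choices, not drawn from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: x in R^2, feasible iff inside the unit disc
# (a stand-in for "all physical constraints satisfied").
X = rng.uniform(-2, 2, size=(500, 2))
y = (np.linalg.norm(X, axis=1) <= 1.0).astype(int)  # 1 = feasible

X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

def predict_1nn(x):
    """Label a query point with the feasibility label of its nearest neighbor."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

y_pred = np.array([predict_1nn(x) for x in X_test])
accuracy = (y_pred == y_test).mean()
# Recall on the infeasible class: fraction of truly infeasible points caught;
# an infeasible case predicted as feasible is the costly, safety-critical error.
infeasible = y_test == 0
recall_infeasible = (y_pred[infeasible] == 0).mean()
```

In practice the surrogate is trained on simulation or historical data and evaluated with exactly these kinds of metrics before deployment.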
2. Feasibility Metrics, Performance Measures, and Evaluation Protocols
Quantitative evaluation of ML-based feasibility assessment employs a core set of metrics including:
- Classification accuracy and recall: Fraction of correctly predicted feasibility labels; critical to minimize false negatives (infeasible cases predicted as feasible) in safety-critical environments (Schaefer et al., 2020, Mohammadian et al., 8 Apr 2025).
- Mean Absolute Error (MAE) / Root Mean Squared Error (RMSE): Used when feasibility is associated with continuous outputs (e.g., power flows, thinning fields) (Attar et al., 2021).
- Feasibility Recovery Rate: Percentage of infeasible cases for which the proposed ML-based corrective actions restore feasibility (Mohammadian et al., 8 Apr 2025).
- Regret and infeasibility trade-off: In constrained optimization with predicted constraint parameters, trade-off curves are constructed by tuning a scalar weight to balance regret (objective suboptimality) against the infeasibility rate (Mandi et al., 6 Oct 2025).
- Resource consumption metrics: Memory, latency, energy profile of feasibility evaluation (critical for edge and real-time systems) (Wilhelmi et al., 2023).
- Calibration and reliability: Model-agnostic reliability indicators such as the LADDR score give a normalized measure of trustworthiness based on the distance to the training support (Chen et al., 2023).
- Economic performance: In insurance or ratemaking contexts, metrics such as multiannual balance, volatility, premium fairness, and affordability are explicitly computed to confirm the economic feasibility of ML predictions (Biagini, 2022).
Tables summarizing such metrics (e.g., model accuracy, error rates, and feasibility guarantee rates) are standard.
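A minimal sketch of the classification-side metrics above, computed on hypothetical label arrays (1 = feasible, 0 = infeasible):

```python
import numpy as np

def feasibility_metrics(y_true, y_pred):
    """Accuracy plus recall on the infeasible class for a binary
    feasibility predictor (1 = feasible, 0 = infeasible)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = (y_true == y_pred).mean()
    # Recall on the infeasible class: fraction of truly infeasible cases
    # flagged as such (false negatives here are the dangerous errors).
    infeasible = y_true == 0
    recall_infeasible = (y_pred[infeasible] == 0).mean()
    return accuracy, recall_infeasible

def recovery_rate(restored_flags):
    """Feasibility recovery rate: share of infeasible cases for which the
    suggested corrective action restored feasibility."""
    return float(np.mean(restored_flags))

acc, rec = feasibility_metrics([1, 0, 0, 1, 0], [1, 0, 1, 1, 0])
rate = recovery_rate([True, True, False, True])
```

On these toy arrays, one of three infeasible cases is mislabeled feasible, which is exactly the error mode the recall metric isolates.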
3. Domain-Specific Approaches
The implementation details of ML-based feasibility assessment vary by application:
- Power systems contingency and OPF: Multilayer perceptrons (MLPs), decision trees, or boosting models map grid state vectors to feasibility labels or to full system outputs. Fast feasibility screening enables drastic reduction (by >90%) in the number of expensive physics-based simulations during planning and real-time operation. For infeasible states, counterfactual generation frameworks find minimal, sparse perturbations to restore feasibility, validated by re-solving the physics-based model (Schaefer et al., 2020, Mohammadian et al., 8 Apr 2025).
- Design and process manufacturing: Convolutional surrogates trained on simulation data predict process feasibility (e.g., material thinning) in near real time, enabling rapid assessment at the onset of new design cycles (Attar et al., 2021).
- Resource scheduling and MILP reduction: Feasibility layers—small repair MILPs—enforce hard constraints (e.g., min up/down times) on ML-predicted schedules, quantitatively removing infeasible solutions and maintaining high speedup and solution quality (Ramesh et al., 2022).
- Site selection (MCDM integration): Feature-importance scores from ensemble classifiers replace subjective weights in multi-criteria decision frameworks, providing objective, data-driven feasibility maps (Ahmed et al., 5 Apr 2025).
- Label quality and intrinsic learnability: Bayes error rate (irreducible error) is estimated from data, allowing practitioners to pre-screen the feasibility of achieving target accuracies (Renggli et al., 2020).
- Model-agnostic reliability and out-of-distribution detection: LADDR computes a per-sample reliability score via Laplacian decay from training data, allowing feasibility assessment relative to the support of observed data (Chen et al., 2023).
- Adversarial ML risk assessment: Frameworks such as FRAME compute composite feasibility scores for attacks by aggregating system, attack, and empirical success-rate features via rule-based and empirical models (Shapira et al., 24 Aug 2025).
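The Bayes-error pre-screening idea can be sketched with a leave-one-out 1-NN error and the classical Cover–Hart lower bound; the Gaussian toy data and the plain Euclidean embedding are assumptions here, not the Snoopy implementation:

```python
import numpy as np

def one_nn_error(X, y):
    """Leave-one-out 1-NN error rate over embeddings X with labels y."""
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the query point itself
        errors += y[np.argmin(d)] != y[i]
    return errors / len(X)

rng = np.random.default_rng(1)
# Two overlapping Gaussian classes: irreducible error by construction.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

err_1nn = one_nn_error(X, y)
# Cover-Hart bound (binary case): the Bayes error is at least err_1nn / 2,
# so a target accuracy above 1 - err_1nn / 2 is likely infeasible on this data.
bayes_lower = err_1nn / 2
```

Comparing a stakeholder's target accuracy against this lower bound yields the interpretable go/no-go signal described above.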
4. Feasibility Restoration and Counterfactual Generation
For constraint-violating inputs, feasibility-aware ML systems can recommend minimal corrective actions that restore feasibility while preserving solution quality when possible. Methods include:
- Counterfactual explanation optimization: For an infeasible input x, solve for a counterfactual x′ minimizing a combination of a hinge loss on the model output and a distance metric to x, optionally with a diversity-promoting term (e.g., a determinantal point process). Postprocessing enforces sparsity, perturbing as few features as possible. The resulting counterfactuals are validated by rerunning the original constraint model (Mohammadian et al., 8 Apr 2025).
- Decision-focused loss balancing: Loss functions are constructed to penalize both infeasibility under the true parameters and exclusion of the true optimal solution from the predicted feasible set. Adjustable scalar parameters govern the balance between suboptimality and infeasibility (Mandi et al., 6 Oct 2025).
- Feasibility layers with postprocess repair: In complex integer programs, a lightweight postprocessing MILP corrects ML-predicted binaries to satisfy combinatorial constraints, followed by partial variable fixing to reduce problem size without loss of feasibility (Ramesh et al., 2022).
Quantitative results in these domains consistently report 100% feasibility recovery rates when post-processing is used, and permit explicit trade-offs between cost/optimality and feasibility by tuning model parameters.
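A toy sketch of sparse feasibility repair, with invented box and budget constraints standing in for the domain-specific constraint models used in the cited works:

```python
import numpy as np

def is_feasible(x, limit=1.0):
    """Toy constraints: every component in [0, limit], total within 2 * limit."""
    return bool(np.all(x >= 0) and np.all(x <= limit) and x.sum() <= 2 * limit)

def sparse_repair(x, limit=1.0):
    """Greedy counterfactual repair: clip box violations componentwise,
    then reduce the largest entries until the budget constraint holds,
    touching as few components as possible."""
    x = np.clip(x, 0.0, limit)          # repair box violations in place
    excess = x.sum() - 2 * limit
    for i in np.argsort(-x):            # largest components first -> sparsity
        if excess <= 0:
            break
        cut = min(x[i], excess)
        x[i] -= cut
        excess -= cut
    return x

x_bad = np.array([1.4, 0.9, 0.8])       # violates both box and budget constraints
x_fix = sparse_repair(x_bad.copy())     # repaired by perturbing a single entry
```

Validating `x_fix` against the original constraint model, as the counterfactual frameworks above require, confirms that the minimal perturbation restored feasibility.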
5. Reliability, Trustworthiness, and Data Quality in Feasibility Assessment
Feasibility assessment in ML does not reduce solely to classifier or regressor accuracy; it critically depends on the trustworthiness and operational validity of the predictions:
- Out-of-distribution (OOD) detection: LADDR computes a reliability metric by comparing each new input against the training data manifold, flagging an input as untrustworthy/extrapolative if its score falls below a stakeholder-defined threshold. Statistical robustness and parameterization via an extrapolation diameter allow explicit control of acceptance rates and associated risks (Chen et al., 2023).
- Minimum Bayes risk estimation: The Snoopy algorithm estimates the intrinsic Bayes error rate using 1-NN classifiers over a broad set of feature transformations, producing an interpretable feasibility “go/no-go” binary and margin signal, robust to label noise and data limitations (Renggli et al., 2020).
- Adversarial risk quantification: Automated frameworks aggregate empirical attack success rates, system profiling, and execution-mode–dependent feasibility to produce composite risk scores for practical AML defense prioritization (Shapira et al., 24 Aug 2025).
In each case, the feasibility assessment is only as strong as the underlying data representativeness and the explicit characterization or control of extrapolation, distribution shift, and stochastic uncertainty.
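A LADDR-style reliability score can be sketched as a Laplacian decay in the distance to the nearest training sample; the kernel width `tau` and the acceptance threshold here are illustrative stand-ins for the stakeholder-defined parameters described above:

```python
import numpy as np

def reliability_score(x, X_train, tau=1.0):
    """Laplacian-decay reliability in (0, 1]: 1 on the training support,
    decaying with distance to the nearest training sample (a LADDR-style
    sketch; tau plays the role of the extrapolation diameter)."""
    d_min = np.min(np.linalg.norm(X_train - x, axis=1))
    return float(np.exp(-d_min / tau))

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
in_support = reliability_score(np.array([0.0, 0.0]), X_train)  # on a sample
far_away = reliability_score(np.array([10.0, 10.0]), X_train)  # extrapolative

threshold = 0.5            # stakeholder-defined acceptance threshold
trusted = far_away >= threshold  # extrapolative input is rejected
```

Inputs rejected this way are handed to the fallback logic rather than acted on by the ML component, which is the safe-actuation pattern referenced in Section 6.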
6. Decision Support, Economic, and Engineering Impact
Rigorous feasibility assessment via ML supports data-driven decision support in several high-impact domains.
- Power system operations: Sub-second feasibility screening and prescriptive repair options support real-time operator-in-the-loop or automated grid control, reducing both operational costs and system risk (Schaefer et al., 2020, Mohammadian et al., 8 Apr 2025).
- Agricultural insurance ratemaking: ML-derived premium rules using boosting and LASSO produce sustained economic feasibility—affordable, fair, and stable premiums—while maintaining multiannual solvency and limiting volatility (Biagini, 2022).
- Site selection: Automated, objective weighting and selection in multi-criteria frameworks ensure decisions are robust, transparent, and generalizable to new or unseen domains (Ahmed et al., 5 Apr 2025).
- Engineering safety and automation: Model-agnostic, real-time reliability measures (e.g., LADDR) ensure ML-based controllers or monitoring systems actuate only in trustworthy regimes, safely deferring untrusted regions to design-time deterministic logic (Chen et al., 2023).
- Cyber-physical and networked systems: Resource- and accuracy-aware feasibility assessment enables implementers to reliably deploy ML solutions on hardware-constrained edge devices or in adversarial threat models (Wilhelmi et al., 2023, Shapira et al., 24 Aug 2025).
These results demonstrate that ML-based feasibility assessment frameworks, when developed with explicit metrics, trustworthiness measures, and process integration, substantially advance both the practical applicability and operational reliability of data-driven decision making across disciplines.
7. Limitations, Open Challenges, and Future Directions
Despite demonstrated efficacy, limitations remain:
- Distribution shift and generalization: Most frameworks are reliable only within the convex hull of observed data; extrapolation or unseen regimes require explicit OOD filtering or retraining (Chen et al., 2023).
- Nonconvex, high-dimensional, or dynamic constraints: Current ML-based feasibility restoration primarily addresses convex problems (e.g., DC-OPF). Extending to full nonconvex models, large-scale combinatorial domains, or time-varying constraints is an active area of research (Mohammadian et al., 8 Apr 2025, Mandi et al., 6 Oct 2025).
- Meta-feasibility and intrinsic learnability: Estimation of minimum achievable error or feasibility given data quality and labeling is not trivial for regression or structured-output problems and is limited by current tools to classification and embedding-computable scenarios (Renggli et al., 2020).
- Interpretable weighting and feature selection: While tree-based feature importances are standard, unobservable or domain-extrinsic factors may still confound automated site selection or risk ranking (Ahmed et al., 5 Apr 2025, Shapira et al., 24 Aug 2025).
- Operational risk and adversarial threats: Integrating empirical risk estimation with formal robustness guarantees or certified adversarial training is ongoing (Shapira et al., 24 Aug 2025).
Future progress is likely to capitalize on adaptive, dynamically updated feasibility assessments (including continual learning), end-to-end differentiable pipelines, and seamless integration with human- or operator-in-the-loop systems for optimal tradeoffs in safety, efficiency, and performance.