- The paper introduces a novel heuristic integrating low-precision first-order methods into fix-and-propagate frameworks for large-scale MIP solving.
- It demonstrates that PDLP-guided fix-and-propagate achieves an average optimality gap reduction of over 20% versus uninformed heuristics, while reduced LP solution accuracy yields a 2–3x speedup.
- Empirical results on MIPLIB benchmarks and energy system models validate the method’s scalability and effectiveness in GPU-accelerated environments.
Low-Precision First-Order Method-Based Fix-and-Propagate Heuristics for Large-Scale MIP
Problem Statement and Motivation
The study centers on advancing heuristic techniques for mixed-integer linear programs (MIPs) by leveraging first-order methods (FOMs) to obtain low-precision LP relaxations and integrating these within fix-and-propagate (FP) frameworks. The motivation is grounded in the limitations of classical LP solvers (Simplex and interior-point methods, IPMs) in both speed and scalability, particularly for large-scale instances arising in energy system optimization models (ESOMs) with millions of variables and constraints. Since real-world problem data often contain uncertainties and modest optimality gaps are acceptable, the paper questions whether high-precision LP solutions are necessary for effective MIP heuristics and explores FOMs as a computationally advantageous substitute, especially in the context of GPU acceleration.
First-Order Methods and LP Relaxations
FOMs, specifically the Primal-Dual Hybrid Gradient (PDHG) method as implemented in PDLP [ApplegatePDLP], are characterized by their exclusive reliance on matrix-vector products and their suitability for GPU hardware. Unlike IPMs, they do not require matrix factorizations, making them highly scalable. However, they are typically only capable of low-to-moderate accuracy. The paper demonstrates the feasibility of employing PDLP for solving the LP relaxation of MIPs at low precision, resulting in significant computational savings when compared to classical methods.
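To illustrate why PDHG-type methods map well to GPUs, the sketch below solves a tiny equality-form LP (min cᵀx s.t. Ax = b, x ≥ 0) using only matrix-vector products. This is a minimal vanilla PDHG, not PDLP itself: the fixed step-size rule and iteration budget are illustrative assumptions, whereas PDLP adds restarts, adaptive step sizes, and preconditioning.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=5000):
    """Vanilla PDHG for  min c^T x  s.t.  Ax = b, x >= 0.
    Uses only matrix-vector products (no factorizations), the
    property that makes PDLP-style solvers GPU-friendly."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # fixed step sizes with tau * sigma * ||A||^2 < 1 (simple rule;
    # PDLP itself uses adaptive step sizes and restarts)
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    for _ in range(iters):
        x_new = np.maximum(0.0, x - tau * (c - A.T @ y))  # primal step + projection
        y = y + sigma * (b - A @ (2 * x_new - x))         # dual step on extrapolated point
        x = x_new
    return x, y

# toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  optimum x* = (1, 0)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
```

Stopping early (fewer iterations, looser residual tolerances) gives exactly the kind of low-precision primal-dual pair the FP heuristics consume.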
Fix-and-Propagate Heuristic Framework
The FP heuristic operates by iteratively fixing integer variables and applying domain propagation to reduce the search space, potentially detecting infeasibility early or tightening the bounds of other unfixed variables. The heuristic is initialized by solving the LP relaxation of the MIP to obtain primal and dual solutions, which are then used to inform variable selection and fixing strategies via four LP-based FP variants:
- Type: Orders variables by domain/type, requiring minimal LP information.
- Frac: Leverages fractionality in the LP solution as a selection criterion.
- RedCost: Exploits reduced cost information to prioritize fixings.
- Dual: Incorporates dual variable and constraint activity into variable selection.
The fixing strategies, selection mechanics, backtracking, and propagation are agnostic to the underlying LP solver but benefit from information extracted via (approximate) LP relaxations.
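A minimal sketch of the Frac variant's control flow, under simplifying assumptions not taken from the paper: binary variables only, constraints given as ≤-rows, a single propagation sweep per fixing, and one-level backtracking to the opposite rounding. The paper's actual implementation is solver-grade and considerably more elaborate.

```python
import math

def propagate(bounds, rows):
    """One sweep of bound propagation over rows sum_j a_j*x_j <= rhs:
    tighten each variable's bounds via the row's minimal activity.
    Returns tightened bounds, or None on detected infeasibility."""
    for coeffs, rhs in rows:
        min_act = sum(a * (bounds[j][0] if a > 0 else bounds[j][1])
                      for j, a in coeffs.items())
        if min_act > rhs + 1e-9:
            return None                      # row cannot be satisfied
        for j, a in coeffs.items():
            lo, hi = bounds[j]
            rest = min_act - a * (lo if a > 0 else hi)
            if a > 0:
                hi = min(hi, math.floor((rhs - rest) / a + 1e-9))
            else:
                lo = max(lo, math.ceil((rhs - rest) / a - 1e-9))
            if lo > hi:
                return None                  # domain emptied
            bounds[j] = (lo, hi)
    return bounds

def fix_and_propagate(lp_values, bounds, rows):
    """Frac variant: fix binaries in order of increasing LP
    fractionality, rounding to the nearest integer; on infeasibility,
    backtrack once to the opposite rounding."""
    order = sorted(lp_values, key=lambda j: abs(lp_values[j] - round(lp_values[j])))
    for j in order:
        if bounds[j][0] == bounds[j][1]:
            continue                         # already fixed by propagation
        near = round(lp_values[j])
        for val in (near, 1 - near):         # nearest rounding, then opposite
            trial = dict(bounds)
            trial[j] = (val, val)
            result = propagate(trial, rows)
            if result is not None:
                bounds = result
                break
        else:
            return None                      # both roundings pruned: abort dive
    return {j: lo for j, (lo, hi) in bounds.items()}

# toy model: binaries x0, x1, x2 with  x0 + x1 <= 1  and  x0 + x2 >= 1
rows = [({0: 1.0, 1: 1.0}, 1.0),             # x0 + x1 <= 1
        ({0: -1.0, 2: -1.0}, -1.0)]          # -(x0 + x2) <= -1
lp_values = {0: 0.9, 1: 0.2, 2: 0.1}        # (approximate) LP relaxation values
bounds = {0: (0, 1), 1: (0, 1), 2: (0, 1)}
assignment = fix_and_propagate(lp_values, bounds, rows)
```

Because the loop only consumes LP values (and, in the other variants, reduced costs or duals), the same skeleton works unchanged whether those quantities come from a high-precision IPM solve or a low-precision PDLP solve.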
Numerical Evaluation and Results
The study conducts extensive computational experiments on MIPLIB 2017 and large-scale ESOM instances from the REMix framework. Key empirical findings include:
- MIPLIB Benchmarks: FP heuristics guided by low-precision PDLP relaxations and by high-precision IPM relaxations exhibit comparable solution quality (optimality gaps), with an average gap reduction of over 20% versus uninformed heuristics. PDLP yields substantial time savings; lowering the target accuracy alone gives a 2–3x speedup.
- Unit Commitment Energy System Models: For models with up to 243 million nonzeros and 8 million variables, PDLP-based FP heuristics achieve gaps below 2% within 4 hours, whereas commercial MIP solvers (Gurobi, CPLEX, COPT) fail to produce feasible solutions within two days. Gaps remain robust to lower LP solution accuracy, matching those obtained with high-accuracy settings.
- LP Solver Comparison: PDLP is increasingly competitive as model size grows, often outperforming IPMs, especially when IPMs reach memory or compute bottlenecks.
These results support the claim that low-accuracy FOM-based LP relaxations are valuable for large-scale MIP heuristics, both in solution quality and computational efficiency, with particular benefits for GPU-enabled environments.
Practical and Theoretical Implications
The study establishes that FOMs, especially GPU-accelerated PDLP, can serve as the LP solver within primal heuristic frameworks for MIPs without observable degradation in feasible solution quality compared to classical implementations. This challenges the traditional assumption that effective MIP heuristics require high-precision LP relaxations. The fast, scalable computation opens up new tractable problem domains, notably real-world, high-dimensional energy system models.
Theoretically, this revises the guidance for embedding LP relaxations in MIP heuristics and motivates further exploration of FOMs for guiding search, diving, and neighborhood search. The compositional design of fix-and-propagate heuristics enables additional integration with dual information, reduced costs, and propagation mechanisms, reinforcing a versatile paradigm for heuristic MIP solving.
Future Prospects
The findings prompt further development in several directions:
- Fully GPU-native FP implementations, including domain propagation.
- Expansion of FOM-based heuristic schemes to other classes of combinatorial optimization beyond classical MIPs.
- Revisiting traditional MIP solver components (branching, diving, neighborhood search) with FOM guidance.
- Integration with machine learning-based approaches for variable selection and fixing [bengio_machine_2020, Gasse2019].
- Broad adoption in ESOMs, logistics, and industrial applications requiring rapid generation of high-quality feasible solutions.
Conclusion
The paper demonstrates that low-precision FOMs can successfully substitute classical LP solvers in FP heuristics for MIPs, allowing scalable treatment of ultra-large optimization instances while preserving solution quality. This marks a significant advance in both practical optimization for large-scale applications and theoretical understanding of LP solver requirements within heuristic frameworks. The work establishes a foundation for future efforts in leveraging FOMs and GPU acceleration in mixed-integer optimization (2503.10344).