Incremental Solving: Techniques and Applications
- Incremental solving is a method in constraint reasoning that reuses artifacts like learned clauses and partial models to efficiently solve evolving problem instances.
- It employs techniques such as push/pop frame stacks and selector literals to add or remove constraints without restarting the solving process from scratch.
- Empirical results show significant reductions in runtime, backtracks, and assignments, demonstrating scalability in diverse fields such as model checking and neural PDE simulation.
The incremental solving approach is a methodology in constraint reasoning, formal verification, optimization, planning, and learning where a solver processes a sequence of closely related problem instances by reusing previously computed information—such as learned clauses, partial models, or decomposed structures—across incremental changes to the problem. Rather than solving each instance from scratch, incremental solving accumulates or modifies constraints and retains key artifacts of the solver state through each step, thereby reducing computational redundancy and often yielding significant speedups.
1. Core Principles and General Workflow
Incremental solving is predicated on the observation that many applications—model checking, planning, optimization under changing objectives, SAT-based symbolic execution, and neural PDE solvers—naturally yield a sequence of related constraint systems where most constraints remain unchanged between steps. By maintaining a persistent solver state, such as internal clause databases, variable assignments, and learned lemmas, incremental solvers amortize the overhead of repeated preprocessing, clause learning, and search across these sequences.
A canonical workflow in an incremental SAT or QBF solver comprises:
- Initialization with a base set of variables and constraints.
- Iteratively adding (or sometimes removing) clauses, constraints, or variable domains to encode new problem instances.
- Solving each resulting incremental instance, often under new assumptions or with new objectives.
- Retaining the relevant solver-internal structures for reuse—e.g., clause databases, watch lists, learned cubes.
- Selectively disabling or retracting constraints using mechanisms such as frame-based stacks or selector literals when problem elements change.
This strategy applies equally to problems where the increment is an extension (e.g., relaxing bounds or planning horizons) or a tightening (e.g., constraining resource budgets or cardinality limits).
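The frame-stack mechanics above can be sketched in a few lines. The following toy Python solver (all names hypothetical; satisfiability is decided by brute-force enumeration, not real CDCL search) illustrates how push/pop groups clauses into retractable frames:

```python
from itertools import product

class FrameSolver:
    """Toy incremental solver: clauses are grouped into frames that can be
    pushed and popped, mimicking the push/pop API of incremental SAT/SMT
    solvers. Only the bookkeeping is realistic, not the search."""

    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.frames = [[]]          # stack of clause lists; frames[0] is the base

    def push(self):
        self.frames.append([])      # open a new retractable frame

    def pop(self):
        self.frames.pop()           # discard all clauses added since the last push

    def add_clause(self, clause):
        self.frames[-1].append(clause)   # clause lives in the current frame

    def solve(self):
        clauses = [c for frame in self.frames for c in frame]
        for bits in product([False, True], repeat=self.num_vars):
            assign = {i + 1: b for i, b in enumerate(bits)}
            if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
                return assign
        return None

# Base constraints stay; the tentative goal is swapped per step.
s = FrameSolver(num_vars=2)
s.add_clause([1, 2])          # base: x1 or x2
s.push()
s.add_clause([-1])            # tentative goal: not x1
print(s.solve() is not None)  # True: x2=True satisfies everything
s.add_clause([-2])
print(s.solve() is not None)  # False: goal frame is now unsatisfiable
s.pop()                       # retract the goal frame in one step
print(s.solve() is not None)  # True: the base is satisfiable again
```

All clauses added since the matching push disappear together on pop, which is exactly the pattern the planning workflow uses for the per-horizon goal.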
2. Incremental Solving in QBF, SAT, and SMT
The prototypical case of incremental solving is found in QBF and SAT-based planning and verification workflows. In QBF-based conformant planning (Egly et al., 2014), each candidate plan horizon k is encoded as a quantified Boolean formula Φ_k. As k increases, new variables and transition clauses are added, while existing variables and clauses are preserved. This is realized via a push/pop frame stack: only goal-related clauses must be added or removed at each horizon increment, while all variables and the transition structure are installed up front.
For SAT/SMT solving, incremental APIs (e.g., IPASIR) allow clauses and assumptions to be posted or retracted between calls. In symbolic execution (Chaudhary et al., 2019), every program path condition is built up as a series of logical fragments added incrementally as the executor explores each new branch or loop. Crucially, infeasible program states can be pruned immediately, and much of the SAT solver's clause database is reused through the search, reducing per-branch runtime.
Assumptions and selector variables are vital: temporary constraints are paired with dedicated literals so that their effects can be enabled or disabled in future steps—allowing for both monotonic model extension and efficient constraint retraction (Koçak et al., 2020, Lonsing et al., 2014). In all cases, the key is that the search history for portions of the problem that do not change is preserved, dramatically reducing the number of redundant backtracks, assignments, and BCP operations.
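A minimal sketch of the selector-literal idiom, assuming a toy brute-force solve-under-assumptions routine in place of a real incremental SAT backend: each temporary clause C is stored as (¬s ∨ C), so it is active only when s is assumed.

```python
from itertools import product

def solve(clauses, num_vars, assumptions=()):
    """Brute-force SAT check under assumptions (toy stand-in for an
    incremental solver's solve-under-assumptions call)."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if any(assign[abs(a)] != (a > 0) for a in assumptions):
            continue
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Variables 1..2 are problem variables; 3 is a selector literal s.
# The temporary clause (x1) is stored as (-s or x1): it only "fires"
# when we assume s=True, and is silently disabled otherwise.
clauses = [[1, 2], [-3, 1]]
print(solve(clauses, 3, assumptions=[3, -1]))   # False: s on, x1 forced off
print(solve(clauses, 3, assumptions=[-3, -1]))  # True: s off, clause retracted
clauses.append([-3])                            # unit -s retires the clause for good
print(solve(clauses, 3, assumptions=[-1]))      # True: no assumption on s needed
```

Because disabling happens through assumptions rather than clause deletion, learned clauses that mention ¬s remain sound across all future calls.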
3. Formal Structure and Examples
3.1 QBF-based Sequential Incrementality
Consider the conformant planning encoding in prenex CNF (PCNF),
Φ_k = Q₁B₁ Q₂B₂ … QₙBₙ. φ_k,
where the quantified variable blocks B_i and the propositional matrix φ_k grow or are modified incrementally. The algorithmic skeleton for this is:
1. Initialize the solver with all variables and the quantifier prefix.
2. Push one frame for the transition clauses (f0) and another for the current goal (f1).
3. For each horizon k:
   - Add transition/goal clauses as needed.
   - Solve.
   - If SAT: extract the plan from the model.
   - Else: pop f1 and install the goal clauses for horizon k+1.
Learned clauses from the transition and initial state are kept indefinitely, and only new goal clauses are changed at each increment.
3.2 Incremental SAT/SMT via Assumptions
The SAT assumption API lets constraints be associated with ephemeral literals. For an optimization sequence minimizing an objective f, one posts, for each candidate threshold k, a guarded bound of the form
a_k → (f ≤ k),
and calls solve({a_k}), retracting the old assumption as thresholds decrease (Koçak et al., 2020). Learned clauses unaffected by any assumption remain available throughout all instances.
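The decreasing-threshold loop can be sketched as follows, again with a brute-force stand-in for the SAT solver and a naive guarded cardinality encoding (every (k+1)-subset yields one blocking clause); all names are illustrative. The objective here is the number of true variables among x1..x3, subject to (x1 ∨ x2) and (x2 ∨ x3):

```python
from itertools import product, combinations

def solve(clauses, num_vars, assumptions=()):
    """Brute-force model search under assumptions (toy SAT stand-in)."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if any(assign[abs(a)] != (a > 0) for a in assumptions):
            continue
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

n = 3                                   # problem variables x1..x3
clauses = [[1, 2], [2, 3]]              # hard constraints
best, next_var = None, n + 1
for k in range(n, -1, -1):
    a_k = next_var                      # fresh selector for threshold k
    next_var += 1
    # a_k -> at most k of x1..xn are true: for every (k+1)-subset,
    # add the guarded blocking clause (-a_k or some member is false).
    for subset in combinations(range(1, n + 1), k + 1):
        clauses.append([-a_k] + [-v for v in subset])
    model = solve(clauses, next_var - 1, assumptions=[a_k])
    if model is None:
        break                           # threshold k is infeasible; k+1 was optimal
    best = sum(model[v] for v in range(1, n + 1))
print(best)   # 1: x2=True alone satisfies both constraints
```

Each iteration simply stops assuming the previous a_k, so the old bound is retracted without deleting any clauses, and all clause-database state survives into the next call.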
3.3 Symbolic and Model-Checking Contexts
In incremental symbolic execution (Chaudhary et al., 2019), two main strategies are evident:
- Partial Incremental (PI) Mode: A new solver instance is created per active path, each path is extended incrementally, but full context is discarded on backtrack.
- Full Incremental (FI) Mode: One global solver tracks all branches using activation literals; constraints are enabled or disabled by adjusting the assumption set.
DFS with PI mode is particularly memory efficient, as only the currently active path's context lives in memory.
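A compact sketch of PI-mode DFS, with a fresh brute-force feasibility check per path standing in for a per-path solver instance (all names illustrative): the path condition is extended one branch at a time and retracted on backtrack, and infeasible subtrees are pruned before being explored.

```python
from itertools import product

def feasible(path_condition, num_vars):
    """Fresh feasibility check per path: PI mode discards solver context
    on backtrack, so each path gets its own check."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in path_condition):
            return True
    return False

# Branch structure: at depth d the executor forks on variable d+1.
# The clause [1, 2] models an earlier program constraint (x1 or x2);
# the eager check prunes the x1=False, x2=False subtree immediately.
def dfs(path_condition, depth, num_vars, explored):
    if not feasible(path_condition, num_vars):
        return                                   # prune: no need to go deeper
    if depth == num_vars:
        explored.append(list(path_condition))    # one complete feasible path
        return
    for branch in (depth + 1, -(depth + 1)):     # then-branch, else-branch
        path_condition.append([branch])          # extend the path incrementally
        dfs(path_condition, depth + 1, num_vars, explored)
        path_condition.pop()                     # backtrack: retract the branch

paths = []
dfs([[1, 2]], 0, 2, paths)
print(len(paths))   # 3 feasible paths out of 4: (T,T), (T,F), (F,T)
```

In FI mode the same traversal would instead keep one global clause store and toggle per-branch activation literals in the assumption set, trading memory for reuse of learned clauses across paths.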
4. Optimization and Machine Learning Applications
Incremental solving extends beyond propositional and first-order logic. In large-scale convex or nonconvex optimization, methods such as HAMSI (Kaya et al., 2015) utilize incremental quadratic (quasi-Newton) updates across blocks of partially separable functions. For each block, a local quadratic proxy is formed and minimized, and the global Hessian approximation is only updated at the end of each cycle.
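The per-block proxy-minimization cycle can be illustrated on a tiny partially separable quadratic. This is a generic block-coordinate sketch under simplifying assumptions, not HAMSI itself (each block's quadratic is minimized exactly and there is no shared quasi-Newton Hessian):

```python
# f(x1, x2) = 0.5*(x1-1)^2 + 0.5*(x2-3)^2 + 0.5*(x1-x2)^2 is a sum of
# element functions, each touching only a small subset of the variables.
# Each cycle minimizes the local quadratic proxy of one block while the
# other block is held fixed, then moves on to the next block.
def minimize_blockwise(x1=0.0, x2=0.0, cycles=50):
    for _ in range(cycles):
        # Block 1: df/dx1 = (x1-1) + (x1-x2) = 0  =>  x1 = (1 + x2)/2
        x1 = (1.0 + x2) / 2.0
        # Block 2: df/dx2 = (x2-3) - (x1-x2) = 0  =>  x2 = (3 + x1)/2
        x2 = (3.0 + x1) / 2.0
    return x1, x2

x1, x2 = minimize_blockwise()
print(round(x1, 4), round(x2, 4))   # 1.6667 2.3333, i.e. (5/3, 7/3)
```

Because each update reuses the current iterate of every other block, progress made in one cycle is carried forward incrementally instead of restarting the minimization from scratch.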
Incremental approaches are also leveraged in neural operator learning for PDEs (George et al., 2022). Here, the Incremental Fourier Neural Operator (iFNO) curriculum progressively increases both frequency modes and input resolution. The training schedule:
- Trains on a coarse spatial grid and limited spectral capacity.
- Monitors "explained ratio" for frequency strengths, growing spectral capacity as needed.
- Increases spatial resolution in staged increments.
- Learns low-frequency, large-scale features early with low cost before refining model capacity.
This regime yields dramatically lower training and inference FLOP cost, as well as improved generalization.
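The "explained ratio" test can be sketched as a cumulative-energy criterion over per-mode strengths. The threshold value and the mode strengths below are made-up illustrations, not iFNO's actual schedule:

```python
# Keep the smallest number of leading frequency modes whose cumulative
# squared magnitude reaches a target fraction of the total energy, and
# grow spectral capacity when the current budget falls short.
def modes_needed(strengths, threshold=0.995):
    total = sum(s * s for s in strengths)
    acc = 0.0
    for k, s in enumerate(strengths, start=1):
        acc += s * s
        if acc / total >= threshold:
            return k
    return len(strengths)

strengths = [10.0, 5.0, 1.0, 0.5, 0.1]   # hypothetical per-mode magnitudes
budget = 2                                # current number of retained modes
needed = modes_needed(strengths)
if needed > budget:
    budget = needed                       # grow spectral capacity
print(budget)                             # 3
```

Early in training the spectrum is dominated by a few low-frequency modes, so the budget stays small and cheap; capacity grows only once higher modes begin to carry a non-negligible share of the energy.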
5. Formal Analysis, Correctness, and Limitations
A primary concern in incremental solving is ensuring soundness and completeness as constraints are added or retracted. In QBF, frame-based or selector-based tagging ensures that learned clauses and cubes remain valid only so long as their dependencies are still present (Lonsing et al., 2014). For clause addition, learned cubes must be revalidated; for clause deletion, selector-based disabling is sound provided clause dependencies are managed.
The correctness of the approach has been formally proven in various contexts: e.g., in incremental QBF, soundness, completeness, and worst-case amortized complexity are inherited from standard QBF complexity, but with greatly reduced empirical search effort (Lonsing et al., 2014, Egly et al., 2014).
Potential limitations include:
- Heuristic degradation: retained constraints and learned clauses can occasionally hurt solver heuristics (e.g., the search order may be skewed by stale clauses).
- Lack of synergy with aggressive preprocessing, especially in QBF, where the required up-front variable instantiation may rule out variable-elimination techniques.
- Memory bloat in full incremental modes when constraints or activation literals accumulate without occasional pruning (Chaudhary et al., 2019).
6. Empirical Impact and Performance
Incremental solving consistently yields substantial reductions in runtime, assignments, and backtracks across a range of domains. In conformant planning benchmarks, incremental QBF solving achieved a 40–50% reduction in solving time and a 50% reduction in backtracks and assignments over non-incremental workflows (Egly et al., 2014). In symbolic execution, aggressive eager infeasibility checks paired with incremental SAT queries enabled Pinaka to outperform several competitors by 4–10× in verification throughput (Chaudhary et al., 2019).
Tables and results from large-scale experiments demonstrate:
| Method | Solved Instances | Avg. Runtime (s) | Avg. Backtracks | Avg. Assignments |
|---|---|---|---|---|
| Non-Incremental | 168 | 24.40 | 2,210 | 501,706 |
| Incremental | 176 | 14.55 | 965 | 120,166 |
[Adapted from Table 1, (Egly et al., 2014)]
Quantitative gains accrue not only in speed, but in scalability, memory consumption, and, in some domains like pattern-mining and high-resolution PDE simulation, in learning higher-quality models from the same data (George et al., 2022, Koçak et al., 2020).
7. Generalizations and Application Domains
The incremental solving approach generalizes to any domain where a sequence of related problem instances can be encoded such that much of the structure remains constant or grows monotonically:
- Planning: Incremental QBF/SAT encoding for plan horizon or stepwise constraint addition.
- Model Checking: Layer-by-layer unrolling in bounded/unbounded model checking, abstraction-refinement (CEGAR) loops (Büning et al., 2018).
- Optimization: Threshold or dominance constraints incrementally refined.
- Symbolic Execution: Incremental path feasibility checks.
- Answer Set Programming: Multi-shot incremental solving for configuration and argumentation.
- Neural/Operator Learning: Incremental curricula in frequency or spatial grid.
The key criterion is the ability to accumulate, modify, or retract constraints while preserving or repurposing solver optimization artifacts, thereby improving efficiency and reducing computational waste (Lonsing et al., 2014, Koçak et al., 2020, George et al., 2022).
References
- "Conformant Planning as a Case Study of Incremental QBF Solving" (Egly et al., 2014)
- "Incremental QBF Solving" (Lonsing et al., 2014)
- "Pinaka: Symbolic Execution meets Incremental Solving" (Chaudhary et al., 2019)
- "Efficient Incremental Modelling and Solving" (Koçak et al., 2020)
- "Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs" (George et al., 2022)