Incremental Certificate Learning for Hybrid Neural Network Verification: A Solver Architecture for Piecewise-Linear Safety Queries
Abstract: Formal verification of deep neural networks is increasingly required in safety-critical domains, yet exact reasoning over piecewise-linear (PWL) activations such as ReLU suffers from a combinatorial explosion of activation patterns. This paper develops a solver-grade methodology centered on \emph{incremental certificate learning}: we maximize the work performed in a sound linear relaxation (LP propagation, convex-hull constraints, stabilization), and invoke exact PWL reasoning only through a selective \emph{exactness gate} when relaxations become inconclusive. Our architecture maintains a node-based search state together with a reusable global lemma store and a proof log. Learning occurs in two layers: (i) \emph{linear lemmas} (cuts) whose validity is justified by checkable certificates, and (ii) \emph{Boolean conflict clauses} extracted from infeasible guarded cores, enabling DPLL(T)-style pruning across nodes. We present an end-to-end algorithm (ICL-Verifier) and a companion hybrid pipeline (HSRV) combining relaxation pruning, exact checks, and branch-and-bound splitting. We prove soundness, and we state a conditional completeness result under exhaustive splitting for compact domains and PWL operators. Finally, we outline an experimental protocol against standardized benchmarks (VNN-LIB / VNN-COMP) to evaluate pruning effectiveness, learned-lemma reuse, and exact-gate efficiency.
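The relaxation-first control flow described in the abstract (cheap sound bounds, an exactness gate for stabilized activations, lemma reuse, and branch-and-bound splitting as a last resort) can be illustrated on a toy one-neuron ReLU network. This is a minimal sketch under stated assumptions, not the paper's ICL-Verifier or HSRV implementation: the network shape, the interval relaxation standing in for LP propagation, and all function names are illustrative.

```python
def relu_bounds(l, u):
    """Sound interval relaxation of ReLU given pre-activation bounds [l, u]."""
    return max(l, 0.0), max(u, 0.0)

def verify(lo, hi, w1, b1, w2, b2, eps=1e-6, lemma_store=None):
    """Check f(x) = w2*relu(w1*x + b1) + b2 >= 0 for all x in [lo, hi].

    Illustrative relaxation-first loop: interval bounds stand in for the
    paper's LP propagation; endpoint evaluation on a stabilized (sign-fixed)
    ReLU stands in for the exact PWL check behind the exactness gate.
    """
    if lemma_store is None:
        lemma_store = set()          # reusable "lemmas": boxes already certified
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if (a, b) in lemma_store:    # lemma reuse: skip already-certified boxes
            continue
        # Relaxation phase: cheap sound bounds first.
        pl, pu = sorted((w1 * a + b1, w1 * b + b1))
        rl, ru = relu_bounds(pl, pu)
        ol = min(w2 * rl + b2, w2 * ru + b2)   # lower output bound (linear in r)
        if ol >= 0:                  # relaxation conclusive: record a lemma
            lemma_store.add((a, b))
            continue
        # Exactness gate: if the ReLU is stable on this box, f is linear here,
        # so evaluating the two endpoints is an exact check.
        if pl >= 0 or pu <= 0:
            fa = w2 * max(w1 * a + b1, 0.0) + b2
            fb = w2 * max(w1 * b + b1, 0.0) + b2
            if min(fa, fb) < 0:
                return False         # genuine counterexample region found
            lemma_store.add((a, b))
            continue
        if b - a < eps:
            return False             # conservatively reject a tiny unresolved box
        m = 0.5 * (a + b)            # branch-and-bound split on the input
        stack.extend([(a, m), (m, b)])
    return True
```

For example, `verify(-1.0, 1.0, 1.0, 0.0, 1.0, 0.5)` certifies `relu(x) + 0.5 >= 0` directly at the relaxation phase, while `verify(-1.0, 1.0, 1.0, 0.0, 1.0, -0.5)` splits once and refutes the property exactly on the stabilized sub-box; the `lemma_store` set is a stand-in for the paper's reusable global lemma store across search nodes.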