
Coq Library of Undecidability Proofs

Updated 4 February 2026
  • Coq Library of Undecidability Proofs is a modular repository that mechanizes undecidability results using axiom-free, constructive type theory.
  • It employs explicit, compositional many-one reductions and synthetic methods to formalize decision problems across logic, type theory, and computation models.
  • The library offers reusable proof patterns and structured reduction chains to ensure transparent, certified verifications in mechanized metatheory.

The Coq Library of Undecidability Proofs is a modular, axiom-free repository of mechanized reductions and results in constructive type theory, focusing on the formalization and certification of undecidability theorems for decision problems in logic, type theory, combinatorics, automata theory, and algebra. Developed over multiple contributions and manuscripts, notably as documented in "Mechanized Undecidability of Higher-order beta-Matching" (Dudenhefner, 2 Feb 2026), this library provides not only a catalog of undecidable problems but also uniformly formalized many-one reductions, supporting infrastructure for encoding various models of computation, and reusable tactics and proof patterns. The library is distinguished by an emphasis on synthetic, constructive proof methods, consistently avoiding non-computational principles and extraneous axioms.

1. Scope and Content of the Library

The Coq Library of Undecidability Proofs targets the formalization, in Coq, of reductions and completeness theorems for classic undecidable problems, as well as the undecidability of more recently studied or structurally sophisticated problems in lambda calculus, term rewriting, type inhabitation, first-order logic, and beyond. It includes certified proofs of:

  • Undecidability of higher-order beta-matching, intersection type inhabitation, and lambda-definability via reductions from string rewriting systems (Dudenhefner, 2 Feb 2026).
  • Semi-unification RE-completeness under many-one reductions, via explicit reduction chains from Turing machine halting (Dudenhefner, 2022).
  • The Davis-Putnam-Robinson-Matiyasevich theorem, showing all r.e. sets are Diophantine, and the undecidability of Hilbert’s Tenth Problem (Larchey-Wendling et al., 2020).
  • Trakhtenbrot’s theorem for first-order logic: undecidability of finite satisfiability as soon as the signature contains a binary relation; decidability on monadic signatures (Kirst et al., 2020, Kirst et al., 2021).

The library maintains a rigorous, documented chain of reductions, leveraging a common predicate-reducibility and computability infrastructure. It systematically encodes decision problems as predicates over concrete syntax, builds explicit reduction functions, and mechanizes correctness proofs with fine control over arity, syntax, and finiteness constraints.

2. Methodology and Construction of Reductions

Central to the design is explicit and compositional many-one reduction machinery. Reductions proceed by:

  • Defining formal problem predicates, e.g., Halting, Semi-unification, Higher-order Beta-Matching, Intersection Type Inhabitation, FSAT.
  • Mechanizing the reduction as a computable function f between problem instances and establishing the equivalence P(x) ⟺ Q(f(x)) at the predicate level.
  • Assembling chains of reductions to propagate undecidability from a synthetic seed problem (e.g., PCP, Turing Machine Halting, Simple Semi-Thue Systems) up to the target.
  • Relying on certified intermediate encodings (e.g., encoding string rewriting as beta-matching (Dudenhefner, 2 Feb 2026), or counter machines as stack machines (Dudenhefner, 2022)) and packaging all steps axiom-freely in Coq.
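The reduction machinery described in the steps above can be sketched in a few lines of Coq. The definitions below follow common conventions for synthetic many-one reducibility; they are an illustrative sketch, not a verbatim excerpt from the library:

```coq
(* A reduction function f witnesses a many-one reduction from P to Q
   when it preserves and reflects membership: P x holds iff Q (f x) holds. *)
Definition reduction {X Y} (f : X -> Y) (P : X -> Prop) (Q : Y -> Prop) : Prop :=
  forall x, P x <-> Q (f x).

(* P many-one reduces to Q if some function is such a reduction.
   In axiom-free constructive Coq every definable function is
   computable, so a plain existential suffices. *)
Definition reduces {X Y} (P : X -> Prop) (Q : Y -> Prop) : Prop :=
  exists f : X -> Y, reduction f P Q.

Notation "P ⪯ Q" := (reduces P Q) (at level 70).

(* Reductions compose, so chains propagate undecidability from a
   seed problem (e.g., Turing machine halting) up to the target. *)
Lemma reduces_transitive {X Y Z} (P : X -> Prop) (Q : Y -> Prop) (R : Z -> Prop) :
  P ⪯ Q -> Q ⪯ R -> P ⪯ R.
Proof.
  intros [f Hf] [g Hg]. exists (fun x => g (f x)).
  intros x. split.
  - intros Px. apply Hg, Hf, Px.
  - intros Rx. apply Hf, Hg, Rx.
Qed.
```

Transitivity is exactly what makes the library's long reduction chains work: each file contributes one certified link, and composition is a one-line proof.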

Proofs routinely employ de Bruijn encodings, explicitly inductive syntax, and type-safe representations (e.g., simply-typed lambda-terms, explicitly-typed models of computation, and finite-type constraints). Tactics such as auto, arithmetic solvers (lia), and custom hint databases automate common reasoning steps, while all reduction constructions and correctness arguments remain fully transparent and inspectable.
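As an illustration of the de Bruijn style mentioned above, untyped λ-terms can be represented by a small inductive type in which a variable is simply its distance to the enclosing binder (a sketch; the constructor names are illustrative):

```coq
(* Untyped lambda-terms with de Bruijn indices: `var n` refers to the
   n-th enclosing lambda, so alpha-equivalent terms are syntactically
   identical and no freshness side conditions arise. *)
Inductive term : Type :=
  | var : nat -> term
  | app : term -> term -> term
  | lam : term -> term.

(* Example: the identity combinator \x. x ... *)
Definition I : term := lam (var 0).
(* ... and self-application \x. x x. *)
Definition omega : term := lam (app (var 0) (var 0)).
```

Because binding is positional, substitution and rewriting lemmas can be stated and proved by plain structural induction, which is what makes these encodings amenable to the automated tactics above.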

3. Design Patterns: Uniformity and Reuse

A core methodological feature is the uniform construction pattern instantiated across several domains (Dudenhefner, 2 Feb 2026):

  1. Shape-forcing sub-language ("ring"): Syntactic structures that enforce constraints on the "shape" of solutions, e.g., the sub-languages Q_m and R_m for constraining candidate terms.
  2. Semantic combinators ("G"): Specialized combinators implementing semantic checks or simulating system transitions, such as G_{ab⇒cd} and G_j^i to encode rewrite steps.
  3. Final wrapper constructions: Function families or terms that integrate the sub-language and combinators, exposing a single solution criterion linking the source and target problems.

This pattern appears in the reductions to higher-order beta-matching, intersection type inhabitation, and lambda-definability, yielding concise, modular, and reusable mechanization of undecidability via reductions from string rewriting systems.

4. File Structure and Integration

The library is organized into a clear folder and module structure enabling immediate navigation and re-use:

| Subsystem (Folder) | Primary Purpose | Example Files |
|---|---|---|
| LambdaCalculus/ | λ-calculus, HO-matching, types | HOMatching.v, Util/stlc_facts.v, term_facts.v |
| Reductions/ | Reduction scripts to/from core problems | SSTS01_to_HOMbeta.v, HOMatching_undec.v |
| StringRewriting/ | Rewriting systems, semi-Thue problems | SSTS.v, SSTS_undec.v |
| IntersectionTypes/ | Intersection type problems, reductions | SSTS01_to_CD_INH.v |
| Synthetic/ | Abstract definitions for computability | Definitions.v |
| CounterMachines/, StackMachines/ | Classical machine models and simulations | CM2.v, CM1_HALT_to_SMNdl_UB.v, etc. |
| SemiUnification/ | Semi-unification chain, RE-completeness | SimpleSysU.v, Reductions/HaltTM_1_chain_SemiU.v |

Library integration is engineered so that importing just a single result (e.g., HOMatching_undec.v) brings into scope the reduction structure, dependencies, and undecidability proof, ready for subsequent reductions or for use as an oracle in higher developments.
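The intended usage pattern can be sketched as follows; the module path, lemma names, and the downstream problem are illustrative stand-ins, not verbatim identifiers from the released library:

```coq
(* Importing the end-point file (illustrative path) brings the whole
   reduction chain and its undecidability theorem into scope. *)
From Undecidability Require Import Reductions.HOMatching_undec.

(* A hypothetical stand-in for a downstream target problem. *)
Parameter MyProblem : nat -> Prop.

(* Undecidability transports along many-one reductions: reduce the
   imported problem to yours and conclude yours is undecidable
   (lemma names illustrative). *)
Lemma MyProblem_undec : undecidable MyProblem.
Proof.
  apply (undecidability_from_reducibility HOMbeta_undec).
  (* remaining obligation: exhibit the reduction HOMbeta ⪯ MyProblem *)
Admitted.
```

The design choice here is that each `*_undec.v` file is a self-contained "export surface": a downstream development never needs to re-trace the chain behind it.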

5. Synthetic Computability, Decidability, and Enumerability

The development adheres to a “synthetic” approach: all computability, decidability, and enumerability arguments are encoded constructively in Coq, without appealing to external computational models or non-constructive reasoning. Foundational classes and definitions, such as decidable and enumerable, are provided as type classes or explicit predicates, and are transported along reductions.
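A minimal sketch of how such synthetic notions can be stated, assuming the standard formulations (the library's actual definitions may differ in detail):

```coq
(* A predicate is (synthetically) decidable if some boolean function
   decides it; no external model of computation is invoked. *)
Definition decidable {X} (P : X -> Prop) : Prop :=
  exists f : X -> bool, forall x, P x <-> f x = true.

(* A predicate is enumerable if some nat-indexed function outputs
   exactly its members (None encodes "no output at this step"). *)
Definition enumerable {X} (P : X -> Prop) : Prop :=
  exists f : nat -> option X, forall x, P x <-> exists n, f n = Some x.

(* Decidability transports backwards along a many-one reduction:
   a decider for Q composed with the reduction function decides P. *)
Lemma decidable_red {X Y} (f : X -> Y) (P : X -> Prop) (Q : Y -> Prop) :
  (forall x, P x <-> Q (f x)) -> decidable Q -> decidable P.
Proof.
  intros Hred [d Hd]. exists (fun x => d (f x)).
  intros x. split.
  - intros Px. apply Hd, Hred, Px.
  - intros Hdx. apply Hred, Hd, Hdx.
Qed.
```

Read contrapositively, the transport lemma is the engine of the whole library: if P is undecidable and P reduces to Q, then Q cannot be decidable either.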

Notably, synthetic techniques support finiteness and discreteness arguments crucial for formal properties of FSAT and model theory questions, and these are implemented using explicit lists, type-theoretic encodings, and finitary induction principles.
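Finiteness itself is phrased in the same synthetic style: a predicate is finite ("listable") when an explicit list exhausts it. A hedged sketch of one common formulation, with the name `listable` chosen here for illustration:

```coq
Require Import List Lia.
Import ListNotations.

(* A predicate is listable when some concrete list contains exactly
   its members; finiteness arguments for FSAT-style results then
   become computations over explicit lists. *)
Definition listable {X} (P : X -> Prop) : Prop :=
  exists l : list X, forall x, P x <-> In x l.

(* Example: the predicate "n < 3" on nat is listable. *)
Example lt3_listable : listable (fun n => n < 3).
Proof.
  exists [0; 1; 2]. intros x. simpl. lia.
Qed.
```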

6. Impact and Significance for Mechanized Metatheory

The library establishes best practices for axiom-free, type-theoretic certification of undecidability in proof assistants:

  • Enforces explicit type-safety, ensuring reductions and witnesses are well-formed in all cases.
  • Demonstrates transparency and auditability by documenting every reduction as both Coq code and human-readable pseudocode, with correctness split into independent, modular lemmas.
  • Enables re-use by other formal developments: reductions, tactics, and encoding schemas are generic, parameterized, and exported as modular sub-libraries.

The uniform patterning of reductions (shape-forcing, semantic combinator, final wrapper) and the fine-grained control over inductive constructions (especially for finite and discrete models) provide a template for further undecidability mechanizations, including logic extensions, term rewriting, algebraic specification, and verification meta-theory.

7. Connections to Broader Research and Future Prospects

The mechanized undecidability library directly interfaces with current research in constructive and synthetic computability, synthetic finite model theory, and type theory. Its outcomes have informed the formal foundations of machine-checked metatheorems in logic, automata, and algebraic languages. The continued expansion of the library—incorporating database-theoretic query containments, advanced model theory, and new logic fragments—suggests it will remain central to both foundational research and practical tool construction in formalized mathematics and automated reasoning.

The approach of encoding reductions as compositional, axiom-free, and computationally explicit artifacts invites broader adoption and supports rigorous certification of intractability results across mathematical disciplines (Dudenhefner, 2 Feb 2026, Dudenhefner, 2022, Larchey-Wendling et al., 2020, Kirst et al., 2020, Kirst et al., 2021).
