
Abductive Meta-Interpretive Learning

Updated 13 February 2026
  • Abductive Meta-Interpretive Learning is a neuro-symbolic framework that combines abduction, induction, and parameter optimization to derive symbolic logic programs from raw data.
  • It employs an iterative cycle of sub-symbolic perception, abduction, and meta-interpretive induction to enable predicate invention and efficient program synthesis.
  • Applied to tasks such as arithmetic induction, sorting, and synthetic biology, it demonstrates high accuracy and data efficiency compared to traditional models.

Abductive Meta-Interpretive Learning (Meta₍Abd₎) is a neuro-symbolic learning framework that unifies abduction, induction, and parameter optimization to jointly learn sub-symbolic perception models, induce symbolic first-order logic programs, and explain raw data in terms of latent symbolic facts. By integrating abduction with meta-interpretive learning, Meta₍Abd₎ addresses challenges fundamental to neuro-symbolic reasoning—including data-efficient program induction, predicate invention, and learning from raw, perceptual inputs without a pre-existing symbolic knowledge base—within a unified probabilistic or cost-based inference paradigm (Dai et al., 2020, Dai et al., 2021).

1. Foundational Principles and Formal Structure

Meta₍Abd₎ extends the framework of Inductive Logic Programming (ILP) by embedding abduction to infer plausible symbolic groundings from raw data, and by employing meta-interpretive induction to induce first-order logic programs, potentially involving predicate invention and recursion. The model supports joint optimization of symbolic structure and real-valued parameters associated with perception modules or numerical theory components.

The fundamental entities in Meta₍Abd₎ are:

  • Logical Languages:
    • $L^b$: Background language for non-abducible predicates.
    • $L^a$: Abducible language for atoms hypothesized to explain observations.
    • $L^h$: Target language of predicates to be defined.
  • Components:
    • Background Knowledge (BK): Finite set of Horn clauses, numerical modules (e.g., neural networks or ODE solvers), and meta-rule templates.
    • Abducible Set (A): Subset of $L^a$; ground atoms abduced to explain data.
    • Hypothesis Clauses (H): Induced logic program, a subset of $L^h \cup L^b$, constructed from meta-rule instantiations.
    • Parameters ($\theta$): Real-valued parameters of the neural modules and/or the symbolic theory.
  • Objective:

$$(H^*, A^*, \theta^*) = \arg \min_{H,A,\theta} \Bigl[ \mathcal{L}(E \mid H, A, \theta) + \mathcal{R}(H, A) \Bigr]$$

where L\mathcal{L} is a data-fit loss, typically mean squared error for regression,

$$\mathcal{L}(E \mid H, A, \theta) := \sum_{i} \left\| y_i - f_{H,A,\theta}(x_i) \right\|^2,$$

and $\mathcal{R}(H,A) = \alpha |H| + \beta |A|$ penalizes model complexity (Dai et al., 2021).
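The objective above can be sketched directly in code. This is a minimal illustration under assumptions: the function names and the callable `f` standing in for $f_{H,A,\theta}$ are hypothetical, not the paper's API.

```python
# Sketch of the Meta_Abd objective (hypothetical names; the paper does
# not prescribe this API). `f` is any callable realizing f_{H,A,theta};
# H is the set of hypothesis clauses and A the set of abduced atoms.

def objective(examples, f, H, A, alpha=1.0, beta=0.1):
    """L(E | H, A, theta) + R(H, A): squared-error data fit plus
    a size penalty on hypothesis clauses and abduced atoms."""
    data_fit = sum((y - f(x)) ** 2 for x, y in examples)
    complexity = alpha * len(H) + beta * len(A)
    return data_fit + complexity
```

For example, a model that fits the data exactly is scored purely by its complexity term, so among equally accurate theories the smallest one wins.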

2. Architectural and Algorithmic Workflow

The joint learning process in Meta₍Abd₎ follows an Expectation-Maximization-style loop that alternately performs abduction, induction, and parameter fitting:

  1. Sub-symbolic Perception: A neural network or numerical module $\phi_\theta$ maps each raw input $x$ to a soft assignment $P_\theta(z \mid x)$ over symbolic pseudo-labels $z$.
  2. Abduction:

For each datum, abduction hypothesizes a minimal set $z$ of ground atoms such that, together with $BK$ and the current $H$, the observed output $y$ is entailed: $BK \cup H \cup z \vDash y$. Abducibles commonly include latent label assignments, constraints, or unknown mechanistic constants.

  3. Meta-Interpretive Induction: The meta-interpreter grows $H$ by instantiating meta-rules from the meta-rule set $M$, possibly inventing new predicates or clause structures. Mode declarations provide syntactic bias and literal constraints.
  4. Parameter Optimization: $\theta$ is updated (e.g., via gradient descent) by maximizing the likelihood or minimizing the loss on the inferred pseudo-labels $z^*$, keeping $H$ and $A$ fixed.
  5. Iteration: Steps repeat until convergence of the loss or of the complexity-regularized objective. The overall score comprises both data fit and model complexity (Dai et al., 2020, Dai et al., 2021).
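The loop above can be sketched on a toy sum-of-digits task. This is an illustrative sketch only: the names `perceive`, `abduce`, and `fit`, and the lookup-table "perception model", are assumptions standing in for a neural network, and $H$ is fixed to the summation program for clarity.

```python
import itertools

# Toy sketch of the abduce-induce-fit loop (illustrative assumptions,
# not the paper's API). Inputs x are tuples of "images"; latent z are
# digit labels; the target y is their sum.

def perceive(theta, img):
    # Soft assignment P(z|img): a lookup table standing in for a net.
    return theta[img]

def abduce(theta, x, y, digits=range(10)):
    # Most probable labels z with BK ∪ H ∪ z ⊨ y, i.e. sum(z) == y.
    best, best_p = None, -1.0
    for z in itertools.product(digits, repeat=len(x)):
        if sum(z) != y:
            continue
        p = 1.0
        for img, d in zip(x, z):
            p *= perceive(theta, img).get(d, 0.0)
        if p > best_p:
            best, best_p = z, p
    return best

def fit(theta, x, z):
    # M-step sketch: sharpen the perception model toward pseudo-labels z.
    for img, d in zip(x, z):
        theta[img] = {d: 1.0}
    return theta
```

Given a perception model that slightly favors the correct digits, abduction selects the label pair consistent with the observed sum, and the fit step then reinforces those pseudo-labels.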

The following table summarizes the main steps in the learning loop:

| Step | Description | Output |
| --- | --- | --- |
| Sub-symbolic perception | Map raw $x$ to $P_\theta(z \mid x)$ | Probabilistic facts |
| Abduction | Find $z$ s.t. $BK \cup H \cup z \vDash y$ | Explanatory groundings |
| Induction | Induce $H$ from $BK \cup A$ using meta-rules | Logic program clauses |
| Parameter fit | Optimize $\theta$ for best data fit given $z^*$ | Updated parameters |

3. Meta-Interpretive Induction and Predicate Invention

Meta₍Abd₎ leverages meta-interpretive learning (MIL) to induce logic programs by unfolding second-order meta-rule templates such as

$$P(A, B) \leftarrow Q(A, C),\ P(C, B)$$

for recursive definitions. Instantiations are guided by example coverage and regularized to promote concise, reusable theories. Clause synthesis is constrained by mode declarations specifying argument types and allowed predicate shapes.

Predicate invention is enabled by meta-rules that allow the introduction of auxiliary predicates not present in $BK$. This permits the discovery of highly abstracted or recursively defined logic subprograms, enhancing both expressivity and extrapolation capability (Dai et al., 2020).
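Meta-rule instantiation with predicate invention can be sketched as follows. This is a minimal illustration under assumptions: the function name and the `_1` suffix convention for invented symbols are hypothetical.

```python
# Minimal sketch of instantiating the recursive chain meta-rule
# P(A,B) :- Q(A,C), P(C,B) over a predicate vocabulary, optionally
# adding one invented predicate symbol (illustrative assumptions).

def instantiate_chain(target, background, invent=True):
    # Candidate bindings for Q: background predicates plus, if enabled,
    # a fresh invented symbol not present in BK.
    vocab = list(background) + ([f"{target}_1"] if invent else [])
    clauses = []
    for q in vocab:
        clauses.append(f"{target}(A,B) :- {q}(A,C), {target}(C,B).")
    return clauses
```

Each candidate clause would then be scored by example coverage and the complexity penalty; clauses using an invented symbol trigger a recursive induction call to define it.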

4. Abduction from Raw and Noisy Data

Abduction in Meta₍Abd₎ serves to bridge the gap from raw or noisy observations to the symbolic level. The abduction step hypothesizes a minimal set of ground atoms (e.g., categorical labels, constraints, unknown intermediates) such that—conditional on the current $BK$ and $H$—the observations are entailed with high probability. In practical implementations, abduction is performed via greedy or branch-and-bound search with probabilistic pruning, and, when applicable, arithmetic constraints are solved by CLP(Z) or similar solvers.
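Branch-and-bound abduction with probabilistic pruning can be sketched for the digit-sum case. This is an assumption-laden sketch (the function name and pruning bounds are illustrative, not the paper's implementation): branches are cut when their probability can no longer beat the incumbent, or when the arithmetic constraint is already unsatisfiable.

```python
# Sketch of branch-and-bound abduction with probabilistic pruning.
# Finds the maximum-probability digit sequence z whose sum equals y.

def abduce_bb(probs, y):
    """probs: per-position dicts mapping digit -> probability."""
    best = {"p": 0.0, "z": None}

    def search(i, partial, p):
        if p <= best["p"]:                       # probability pruning:
            return                               # cannot beat incumbent
        if i == len(probs):
            if sum(partial) == y:
                best["p"], best["z"] = p, tuple(partial)
            return
        for d, pd in sorted(probs[i].items(), key=lambda kv: -kv[1]):
            s = sum(partial) + d
            remaining_max = 9 * (len(probs) - i - 1)
            if s > y or s + remaining_max < y:   # arithmetic pruning
                continue
            partial.append(d)
            search(i + 1, partial, p * pd)
            partial.pop()

    search(0, [], 1.0)
    return best["z"], best["p"]
```

Trying likelier labels first establishes a strong incumbent early, so low-probability branches are discarded without full expansion.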

In the biodesign domain, abduction hypothesizes mechanistic constants (e.g., reaction rates) that explain time-series protein concentrations. In vision tasks, abduction selects maximal-probability label groundings needed to explain sequence-level targets, such as arithmetic operations over perceived digits (Dai et al., 2021, Dai et al., 2020).

5. Knowledge Representation and Reusability

All learned programs and background modules are represented in first-order logic, with strict separation between background, induced, and abducible predicates. Numerical submodules, such as ODE solvers, are integrated via predicate interfaces, permitting joint learning of mechanistic and empirical models.

A crucial feature of Meta₍Abd₎ is knowledge reuse. Induced logic programs—such as recursive sum/product or sorting predicates—can be incorporated as background in subsequent tasks, enabling transfer learning and incremental theory construction. Invented predicates are treated as native in future induction, further compounding reusability (Dai et al., 2020).

6. Complexity, Optimization, and Empirical Properties

The dominant complexity in Meta₍Abd₎ arises from the combinatorial explosion of meta-rule instantiations and the search over abducible groundings. The worst-case search is exponential; however, empirical performance in both vision and synthetic biology applications demonstrates tractable runtimes attributed to efficient pruning strategies (e.g., greedy or A*-like branch-and-bound), data-efficient induction, and disciplined meta-rule templates.

In biodesign applications, typical convergence is achieved in 10–15 major iterations, with per-iteration induction involving $10^3$–$10^4$ clause expansion steps. For medium-scale datasets (e.g., 50 time-series), end-to-end learning completes in 1–2 hours on prototypical hardware (Dai et al., 2021).

7. Practical Applications and Experimental Results

Meta₍Abd₎ has demonstrated significant capability in diverse problem domains:

  • Arithmetic Induction from MNIST: Achieved 95–98% classification accuracy and ${\sim}0.5$ MAE on cumulative sum/product tasks, with strong extrapolation to sequences an order of magnitude longer than seen during training. End-to-end RNN or LSTM models failed to generalize (<10% accuracy), while DeepProbLog was computationally intractable in these settings (Dai et al., 2020).
  • Sorting Task: Induced an invented "sorted" predicate in a hierarchical fashion, enabling permutation prediction on image sequences at >91% exact accuracy for length-5 inputs and ${\approx}87\%$ for length-7, outperforming NeuralSort and Neural Logical Machines (Dai et al., 2020).
  • Synthetic Biology (Three-Gene Operon): Symbolically recovered mechanistic ODE structures, abduced reaction-rate constants within 5% of ground-truth, and minimized experimental cost by integrating active learning. Only 20 designed experiments sufficed for complete structural and parameter recovery versus a combinatorial base of 54 (Dai et al., 2021).

The following are representative induced program fragments:

```prolog
f([H], H).
f([X,Y|T], Z) :- add(X, Y, N), f([N|T], Z).   % cumulative sum

s([A,B])   :- nn_pred(A,B).
s([A,B|T]) :- nn_pred(A,B), s([B|T]).         % invented sorted predicate
```

(Dai et al., 2020)
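The induced cumulative-sum program `f/2` corresponds to a direct recursion, sketched here in Python with ordinary addition standing in for the `add/3` background predicate operating on perceived digits.

```python
# Python rendering of the induced Prolog program f/2 (a sketch; `+`
# stands in for the add/3 background predicate over perceived digits).

def f(xs):
    if len(xs) == 1:
        return xs[0]              # f([H], H).
    x, y, *t = xs
    return f([x + y] + t)         # f([X,Y|T],Z) :- add(X,Y,N), f([N|T],Z).
```

Because the recursion consumes one element per step, the same clause pair extrapolates to sequences far longer than those seen in training.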

