
Neural Predicates

Updated 19 January 2026
  • Neural predicates are structured relations or classifiers implemented via neural networks, integrating invariant structure discovery with logical composition.
  • They leverage intersective encoding, deep scoring, and neuro-symbolic embeddings to support applications in language, robotics, and database querying.
  • Neural predicates enable variable-role separation, compositional generalization, and declarative inference, advancing predicate learning and neuro-symbolic reasoning.

A neural predicate is a structured relation, property, or classifier in which the semantics of a classical predicate—typically Boolean-valued mappings over tuples of domain objects—are implemented, discovered, or grounded via neural computation. Neural predicates enable latent, invariant structure discovery, symbol/variable–value separation, and logical composition within learned representations, with applications in predicate learning, language, robotics, neuro-symbolic reasoning, and expressive database queries. Their mathematical and architectural realizations span intersective distributed encoding, deep neural scoring for semantic role labeling, learned clustering in argument type spaces, neuro-symbolic relation architectures, and declarative logic programming semantics.

1. Defining Neural Predicates: Form and Semantics

Neural predicates generalize symbolic predicates from logic by implementing relation membership, truth-conditionality, or property attribution using neural networks or distributed neural codes.

  • In predicate learning systems, a neural predicate $p$ is not hand-specified, but discovered as the intersection $p=\bigwedge_{k=1}^n x^{(k)}$ of distributed codes $x^{(k)}$, representing the invariants common to different exemplars (Martin et al., 2018).
  • In neuro-symbolic logic programming, a neural predicate $q(\vec{t},u)$ is an atom whose probabilistic truth assignment is computed by a learned network $m_q$, e.g., $P(q(\vec{t})=u_i)=\operatorname{softmax}_i(m_q(\phi(\vec{t})))$ (Hinnerichs et al., 2024).
  • In neuro-symbolic world abstraction for robotic planning, a neuro-symbolic predicate (NSP) $\psi$ of arity $m$ is a mapping $\psi:\mathcal{O}^m\to(\mathcal{X}\to\mathbb{B})$: for an object tuple $(o_1,\dots,o_m)$, $\psi(o_1,\dots,o_m)(x)=1$ iff the property holds in state $x$ (Liang et al., 2024).
  • In neural semantic role labeling models, predicates are typically encoded as token-local or span-level contextual vectors in neural sequence models, parameterized as unary or pairwise scoring functions (He et al., 2018).
  • In deep predicate invention and learning for planning, predicates are instantiated both as symbolic names and as neural classifiers $\theta_\psi(x,(o_1,\dots,o_u))$ producing Boolean (or probabilistic) truth values over continuous input states (Wang et al., 19 Dec 2025).

Neural predicates can thus be generative, discriminative, or declaratively specified mappings bridging input representations and logical arity, supporting both recognition (classification/grounding) and synthesis (inversion/generation) (Hinnerichs et al., 2024).
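The logic-programming formulation above can be sketched in a few lines. In this minimal, NumPy-only illustration, a single linear layer stands in for the learned network $m_q$ and a fixed feature vector stands in for $\phi(\vec{t})$; both are toy choices, not an implementation from the cited work:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def neural_predicate(features, W, b):
    """Compute P(q(t) = u_i) = softmax_i(m_q(phi(t))).

    Here `features` stands in for phi(t) and the linear map (W, b)
    stands in for the learned network m_q; both are toy choices.
    """
    return softmax(W @ features + b)

rng = np.random.default_rng(0)
phi_t = rng.normal(size=4)                    # phi(t): embedded argument tuple
W, b = rng.normal(size=(2, 4)), np.zeros(2)   # two output values u_1, u_2
dist = neural_predicate(phi_t, W, b)
print(dist)  # a probability distribution over the predicate's output values
```

Any monotone scoring network can replace the linear layer; the softmax head is what turns the atom into a probabilistic truth assignment.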

2. Mathematical Formulations and Learning Paradigms

The formulation of neural predicates depends on their functional role and learning framework.

  • Predicate extraction by intersection: Neural predicates are learned via intersective comparison, e.g., $p_i=\min_k x^{(k)}_i$, rendering $p$ as the common pattern across samples. The minimal "co-activation energy" objective is

$$E(p)=-\sum_k p\cdot x^{(k)}+\lambda \|p\|_1$$

minimized by $p^*=\arg\min_{p\in[0,1]^d}E(p)$ (Martin et al., 2018).
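A minimal NumPy sketch of this intersective extraction, with toy exemplar codes and an arbitrary $\lambda$ (both assumptions for illustration, not the cited system):

```python
import numpy as np

def extract_predicate(codes):
    """Intersective extraction: p_i = min_k x_i^(k).

    `codes` is an (n, d) array of distributed codes x^(k) in [0, 1];
    the elementwise minimum keeps only features active in every exemplar.
    """
    return codes.min(axis=0)

def coactivation_energy(p, codes, lam=0.1):
    """E(p) = -sum_k p . x^(k) + lam * ||p||_1 (lam is a toy choice)."""
    return -(codes @ p).sum() + lam * np.abs(p).sum()

codes = np.array([[0.9, 0.8, 0.1],
                  [0.7, 0.9, 0.0]])   # two exemplars, three features
p = extract_predicate(codes)          # shared (invariant) pattern: [0.7, 0.8, 0.0]
print(coactivation_energy(p, codes))  # lower energy than the empty predicate (0.0)
```

The elementwise minimum is the closed-form minimizer only under the box constraint $p\in[0,1]^d$ and small $\lambda$; larger $\lambda$ would sparsify the extracted pattern further.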

  • Neural-SRL scoring: Structured prediction assigns

$$\varphi(p,a,l) = \Phi_a(a) + \Phi_p(p) + \Phi_{\mathrm{rel}}^{(l)}(a,p)$$

with $\Phi_a$, $\Phi_p$, and $\Phi_{\mathrm{rel}}^{(l)}$ realized as MLPs over span embeddings $g(a)$, $g(p)$ (He et al., 2018).
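The factored score can be sketched with tiny MLPs standing in for $\Phi_a$, $\Phi_p$, and the per-label $\Phi_{\mathrm{rel}}^{(l)}$; all shapes and the label inventory here are illustrative assumptions:

```python
import numpy as np

def mlp(x, W1, W2):
    """Tiny two-layer scorer standing in for each Phi factor."""
    return float(W2 @ np.tanh(W1 @ x))

def srl_score(g_a, g_p, params, label):
    """phi(p, a, l) = Phi_a(a) + Phi_p(p) + Phi_rel^(l)(a, p).

    g_a, g_p are span embeddings; `params` holds toy MLP weights with
    one relation scorer per label l. Shapes and labels are illustrative.
    """
    Wa1, Wa2, Wp1, Wp2, rel = params
    Wr1, Wr2 = rel[label]
    pair = np.concatenate([g_a, g_p])
    return mlp(g_a, Wa1, Wa2) + mlp(g_p, Wp1, Wp2) + mlp(pair, Wr1, Wr2)

rng = np.random.default_rng(1)
d, h = 4, 8
g_a, g_p = rng.normal(size=d), rng.normal(size=d)
params = (rng.normal(size=(h, d)), rng.normal(size=h),
          rng.normal(size=(h, d)), rng.normal(size=h),
          {"ARG0": (rng.normal(size=(h, 2 * d)), rng.normal(size=h))})
score = srl_score(g_a, g_p, params, "ARG0")
print(score)  # a single real-valued score for this (predicate, argument, label)
```

In a full system these scores feed a structured decoder over candidate spans and labels; here only the additive factorization is shown.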

  • Deep predicate embedding and clustering: Typed-verb embeddings in knowledge graphs use TransE-style translation losses:

$$\mathcal{L} = \sum_{(n_s,v_t,n_o)\in S}\;\sum_{(n'_s,v_t,n'_o)\in S'}\left[\gamma+d(\mathbf{n}_s+\mathbf{v}_t,\mathbf{n}_o)-d(\mathbf{n}'_s+\mathbf{v}_t,\mathbf{n}'_o)\right]_+$$

(Sedoc et al., 2017).
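The margin loss above can be sketched directly, with Euclidean $d$ and hand-built toy triples; the corruption scheme and margin value are assumptions for illustration:

```python
import numpy as np

def transe_margin_loss(pos, neg, gamma=1.0):
    """Hinge loss [gamma + d(n_s + v_t, n_o) - d(n'_s + v_t, n'_o)]_+
    summed over positive triples paired with corrupted triples,
    with d the Euclidean distance.
    """
    total = 0.0
    for (ns, vt, no), (ns_c, _, no_c) in zip(pos, neg):
        d_pos = np.linalg.norm(ns + vt - no)
        d_neg = np.linalg.norm(ns_c + vt - no_c)
        total += max(0.0, gamma + d_pos - d_neg)
    return total

v = np.array([1.0, 0.0])
pos = [(np.array([0.0, 0.0]), v, np.array([1.0, 0.0]))]   # n_s + v_t == n_o
neg = [(np.array([5.0, 5.0]), v, np.array([0.0, 0.0]))]   # corrupted triple
print(transe_margin_loss(pos, neg))  # 0.0: the margin is already satisfied
```

Once trained, the per-typed-relation vectors $\mathbf{v}_t$ are what get clustered to induce verb classes.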

  • Neuro-symbolic planning predicates: Classifiers $\theta_\psi$ are trained over observed transition tuples using effect-based supervision and composite loss functions combining Jensen–Shannon divergence on unchanged groundings and binary cross-entropy on effected groundings (Wang et al., 19 Dec 2025).
  • Declarative neural predicates: Prototypes in latent space represent class-conditional neural relations; similarity losses (e.g., $\operatorname{sim}(\ell_k,p_{y_k})$) and optionally reconstruction losses (decoder $D$) are trained jointly (Hinnerichs et al., 2024).

Learning is typically unsupervised (as intersection pattern discovery or effect-driven predicate learning), weakly supervised (annotation or action-induced labels), or bilevel (top-down symbolic proposal with bottom-up neural validation). Neural predicates can also be invented through anti-unification in graph-based continuous space, supporting bias and scope constraints (Mota et al., 2019).

3. Architectures Realizing Neural Predicates

Designs vary from distributed activation ensembles to explicit classifiers and symbolic-neural hybrids.

  • Predicate learning architectures employ layered “banks” of units with lateral inhibition and assembly oscillation for dynamic binding. Hebbian updates govern mapping and comparison; oscillatory phase lags encode variable binding (Martin et al., 2018).
  • Semantic role labeling leverages span-based representations, high-dimensional token/context encodings (BiLSTM/ELMo), and MLP scoring for predicate-argument-dependency induction (He et al., 2018).
  • Knowledge-graph embeddings learn per-typed-relation vectors and perform clustering over these, integrating entity type information via external resources (NELL categories) (Sedoc et al., 2017).
  • Neural Multi-Space (NeMuS) graphs represent atoms as weighted T-node collections and enable clause construction and recursive hypothesis invention through continuous embedding and inductive momentum (Mota et al., 2019).
  • Declarative neuro-symbolic formulations implement predicates as bidirectional relations with encoding (classification) and decoding (generation) branches—training an encoder $E$, prototype bank $\{p_i\}$, and a decoder $D$ for latent-to-observation mapping (Hinnerichs et al., 2024).
  • NSP-enabled robotic planning composes neural vision backends with symbolic logic operators to produce first-order world abstractions; primitive predicates probe frozen VLMs, derived ones use symbolic code (Liang et al., 2024).
  • Bilevel predicate invention (UniPred) alternates LLM guided predicate proposal (and operator effect pattern generation) with neural classifier training from real transitions, passing feedback to the symbolic module (Wang et al., 19 Dec 2025).
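The encoder/prototype/decoder pattern from the declarative formulation can be sketched in miniature; the linear encoder and pseudoinverse decoder below are toy stand-ins for the learned $E$ and $D$, and the class names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=(3, 5))      # toy linear encoder: observation -> latent
D = np.linalg.pinv(E)            # toy decoder: latent -> observation
prototypes = {"zero": rng.normal(size=3), "one": rng.normal(size=3)}

def recognize(x):
    """Recognition query: nearest prototype to E(x)."""
    z = E @ x
    return min(prototypes, key=lambda c: np.linalg.norm(z - prototypes[c]))

def generate(label):
    """Generation query: decode the class prototype to observation space."""
    return D @ prototypes[label]

print(recognize(generate("one")))  # "one": the predicate answers both directions
```

The point of the construction is that a single predicate object serves classification (`recognize`) and generation (`generate`) without retraining, which is the declarativeness property discussed in Section 4.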

4. Properties, Expressiveness, and Logical Operations

Neural predicates support core logical and cognitive properties:

  • Variable-role separation: Neurally learned predicates are “stand-alone” and can be bound or re-bound dynamically via non-synchronous activation (phase offsets), enabling variable–value independence (Martin et al., 2018, Wang et al., 2016).
  • Compositionality: Predicates and arguments, as separate vectors, can be composed by vector addition or tensor products, supporting arbitrarily deep and recursive structures; composition mirrors predicate calculus at the representational level (Martin et al., 2018).
  • Binding and unbinding: Neural mechanisms implement binding as phase-locked activity and enable unbinding by temporal separation or analytic subtraction (Martin et al., 2018, Wang et al., 2016).
  • Interpretable abstraction: NSPs and related constructs enforce interpretable, compositional structure over perception, facilitating transparent high-level reasoning, goal grounding, and generalization (Liang et al., 2024, Wang et al., 19 Dec 2025).
  • Declarativeness and invertibility: Newer frameworks guarantee relational declarativeness, empowering a single neural predicate to handle recognition and generation queries—enabling flexible logic programming and multi-directional reasoning (Hinnerichs et al., 2024).
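Binding and unbinding via tensor products, as described above, reduce to outer products and contractions. A minimal sketch with a unit-norm role vector, plus orthonormal roles to show composition by addition (all vectors here are arbitrary toy data):

```python
import numpy as np

rng = np.random.default_rng(3)
role = rng.normal(size=6)
role /= np.linalg.norm(role)          # unit-norm role (variable) vector
filler = rng.normal(size=4)           # filler (value) vector

bound = np.outer(role, filler)        # binding: role (x) filler
recovered = role @ bound              # unbinding: role . (role (x) filler) = filler
print(np.allclose(recovered, filler))  # True

# Composition by addition: orthonormal roles keep bindings separable.
Q, _ = np.linalg.qr(rng.normal(size=(6, 2)))
r1, r2 = Q[:, 0], Q[:, 1]
f1, f2 = rng.normal(size=4), rng.normal(size=4)
structure = np.outer(r1, f1) + np.outer(r2, f2)
print(np.allclose(r1 @ structure, f1))  # True: filler 1 unbinds cleanly
```

With non-orthogonal roles the contraction recovers the filler only approximately, which is one motivation for the phase-based binding mechanisms cited above.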

5. Applications Across Language, Robotics, and Data Systems

Neural predicates underpin a wide spectrum of structural learning and reasoning tasks:

  • Neural semantic role labeling: Jointly predicts predicates, argument spans, and their relations in end-to-end models, supporting state-of-the-art labeling from raw input (He et al., 2018).
  • Verb sense induction and text understanding: Clusters verb-argument types in social-media text for improved sentiment, sarcasm, and locus-of-control prediction, outperforming hand-built verb classes (Sedoc et al., 2017).
  • Robotic perception and planning: Enable state abstraction by learning world predicates directly from visual data, demonstrations, or action semantics, with application in instruction following, goal inference, and closed-loop planning (Migimatsu et al., 2021, Sharma et al., 2022, Liang et al., 2024, Wang et al., 19 Dec 2025).
  • Approximate query processing: Enable querying of unstructured data (video, images, text) in databases via neural predicates as runtime-evaluated Boolean filters; enable statistically bounded estimation in sampling-based systems (Kang et al., 2021).
  • Neuro-symbolic logic programming: Encode logic rules with neural predicate atoms whose truth-values or probability distributions are computed or sampled by neural networks, supporting end-to-end differentiable learning and logical inference (Hinnerichs et al., 2024).
  • Emergent structure in neural comprehension: Unsupervised discovery of predicate–argument decompositions in LLMs, with hidden states encoding formulae $\Phi[c]$ and modular separation between semantic and referential subspaces (Wang et al., 2016).
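The approximate-query pattern can be sketched as follows; `approximate_count`, the thresholding stand-in predicate, and the normal-approximation interval are all illustrative assumptions, not the cited system's API:

```python
import numpy as np

def approximate_count(records, predicate, sample_size, rng, z=1.96):
    """Estimate the fraction of records satisfying a (possibly expensive)
    predicate from a uniform sample, with a normal-approximation
    confidence interval. Function name and interface are hypothetical.
    """
    idx = rng.choice(len(records), size=sample_size, replace=False)
    hits = np.array([predicate(records[i]) for i in idx], dtype=float)
    p_hat = hits.mean()
    half = z * np.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, (max(0.0, p_hat - half), min(1.0, p_hat + half))

rng = np.random.default_rng(4)
data = rng.normal(size=10_000)   # stand-in for a table of media records
pred = lambda x: x > 0           # stand-in for a neural Boolean filter
p_hat, (lo, hi) = approximate_count(data, pred, 500, rng)
print(p_hat, lo, hi)             # estimate with a ~95% interval
```

The payoff is that the expensive neural predicate runs on only `sample_size` records while the interval quantifies the resulting estimation error.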

6. Empirical Evaluation and Theoretical Implications

Empirical studies and theoretical analyses demonstrate several key advantages and limitations:

  • Compositional generalization: Predicate learning via intersection and oscillatory binding enables strong zero-shot transfer—e.g., rapid adaptation from Breakout to Pong using the same learned predicate structure (Martin et al., 2018).
  • Sample efficiency and OOD generalization: NSPs and bilevel-invented neural predicates in robotic planning deliver high solve rates with fewer demonstrations and strong generalization to unseen configurations (Liang et al., 2024, Wang et al., 19 Dec 2025).
  • Improved downstream prediction: Verb predicate clustering outperforms traditional clustering baselines by 2–6 F1 points in noisy text classification tasks (Sedoc et al., 2017).
  • Scalability in symbolic induction: Continuous NeMuS graphs achieve efficient predicate invention, growing linearly (not exponentially) in candidate space with predicate dimensionality (Mota et al., 2019).
  • Declarative inference power: Declarative neural predicates enable answering arbitrary generative or recognition queries without retraining, achieving near-parity in discriminative performance with non-declarative baselines (Hinnerichs et al., 2024).
  • Bridging the symbolic–neural dichotomy: Predicate learning and neuro-symbolic models unify structure learning and generalization, matching oracle or symbolic planners in compositional reasoning while leveraging deep representations (Martin et al., 2018, Wang et al., 19 Dec 2025).

Limitations noted include the need for effect-based grounding assumptions (as in STRIPS), reliance on frozen vision backends in some approaches, and the challenge of extending to non-deterministic controllers or environments (Wang et al., 19 Dec 2025).

7. Connections to Other Formalisms and Future Directions

Neural predicates represent a unifying formalism across machine learning, cognitive modeling, and symbolic AI, with diverse instantiations:

  • Predicate learning vs. symbolic logic: Predicate learning recapitulates formal logic’s binding and variable separation within unsupervised neural systems, addressing earlier objections to aliasing and binding in distributed computation (Martin et al., 2018).
  • Neural-SRL as structured conditional models: The neural predicate framework is deeply connected to structured prediction and semantic parsing, extending traditional factor graph or conditional random field (CRF) approaches with span-based neural features (He et al., 2018).
  • Relation to program synthesis: Predicate learning serves as an alternative to probabilistic program induction—eschewing heavy supervision or complete ontological priors while achieving comparable extrapolative capacity (Martin et al., 2018).
  • Neuro-symbolic integration: Recent work formalizes the relational, declarative behavior of neural predicates through bidirectional encoding/decoding, prototype-based latent spaces, and logic programming semantics (e.g., DeepProblog, DeclDeepProblog) (Hinnerichs et al., 2024).
  • Query execution and data systems: Neural predicates are foundational building blocks for hybrid database systems that operate over unstructured media and enable fast, approximate, or statistical query answering at large scale (Kang et al., 2021).

Ongoing research aims to extend neural predicate frameworks to stochastic and partially observable domains, automate skill and predicate joint discovery, integrate deep perception with online logical inference, and scale up neuro-symbolic learning in high-dimensional and interactive environments (Wang et al., 19 Dec 2025).
