
Analogical Concept Memory (ACM)

Updated 21 February 2026
  • Analogical Concept Memory (ACM) is a computational model that stores abstract concepts via analogical reasoning, supporting instant recognition and flexible generalization.
  • It utilizes two primary paradigms—symbolic-relational and neuro-symbolic/hyperdimensional—to enable efficient analogical mapping and structured concept representation.
  • ACM integrates with cognitive architectures to enhance concept formation, real-time retrieval, and prediction, demonstrated in empirical evaluations with reduced retrieval operations and high accuracy.

Analogical Concept Memory (ACM) is a class of computational models and system architectures designed to enable the storage, generalization, and retrieval of concepts through analogical reasoning. ACM systems support the formation and use of abstract concepts by aligning new experiences with previously learned relational structures or geometric prototypes, enabling instant recognition, flexible generalization, and analogical projection in novel situations. Methods are grounded in structured representations—predicate calculus, hyperdimensional vectors, or concept ontologies—with analogical inference serving both recognition and prediction purposes in cognitive agents (Pickett et al., 2013, Mohan et al., 2020, Mohan et al., 2022, Goldowsky et al., 2024).

1. Representational Foundations

Two primary representational paradigms underpin ACM: symbolic-relational and neuro-symbolic/hyperdimensional.

Symbolic-Relational ACM encodes experiences as sets of predicate-calculus facts, representing objects, relations, and events in relational graphs. Key to these models is the extraction of "role–filler" pairs (e.g., $r(a_1, \ldots, a_k)$ yielding features $r_1 = a_1, \ldots, r_k = a_k$), use of predicates like \texttt{sameAs} for statement embedding, and the formation of windows—small, connected subgraphs of the full relational structure. Each such window is converted into a multiset ("feature bag") of discrete, atomic features suitable for clustering and indexing (Pickett et al., 2013).
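The window-to-feature-bag conversion can be sketched as follows. This is an illustrative rendering, not the cited implementation; fact tuples and feature encodings are hypothetical.

```python
# Sketch: convert a relational "window" (a small set of predicate-calculus
# facts) into a multiset of discrete role-filler features. Each fact
# r(a_1, ..., a_k) contributes the predicate symbol plus one feature
# r_i = a_i per argument position.
from collections import Counter

def window_to_feature_bag(facts):
    """facts: iterable of (predicate, arg1, arg2, ...) tuples."""
    bag = Counter()
    for pred, *args in facts:
        bag[pred] += 1                     # the bare relation symbol
        for i, arg in enumerate(args, 1):
            bag[(pred, i, arg)] += 1       # role-filler pair r_i = a_i
    return bag

window = [("on", "block1", "table"), ("left_of", "block1", "block2")]
bag = window_to_feature_bag(window)
# bag contains atomic features such as ("on", 1, "block1") and ("on", 2, "table")
```

Because the bag is a flat multiset of atomic features, windows of different shapes become directly comparable for clustering and inverted-index lookup.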

Neuro-Symbolic/Hyperdimensional ACM leverages high-dimensional vector representations, often in complex-valued hyperspaces of dimension $d \sim 10^4$ (Goldowsky et al., 2024). In these models, each concept is encoded as a hypervector, capturing graded membership in a conceptual space defined by $k$ property axes. Concepts are bound to axes using operations such as fractional power encoding and circular convolution, enabling efficient similarity computation and analogical mapping.
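A minimal sketch of these binding operations, assuming unit-magnitude complex phasor vectors (a common choice in complex-valued VSA models; encoding details here are illustrative, not taken from the cited papers):

```python
# Hyperdimensional binding sketch. Treating each phasor vector as a spectrum,
# circular convolution reduces to elementwise complex multiplication, and the
# inverse of a unit phasor is its conjugate.
import numpy as np

d = 10_000
rng = np.random.default_rng(0)

def random_hv():
    # Random unit-magnitude complex phasor vector.
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def bind(a, b):
    return a * b

def unbind(c, a):
    return c * np.conj(a)

def fpe(axis, value):
    # Fractional power encoding: raising the axis phasor to a real power
    # places a graded value along that property axis.
    return axis ** value

hue_axis = random_hv()
red_ish = fpe(hue_axis, 0.9)            # graded position on the hue axis
concept = bind(hue_axis, red_ish)       # bind value to its axis
recovered = unbind(concept, hue_axis)   # unbinding recovers the value
sim = np.abs(np.vdot(recovered, red_ish)) / d   # cosine-like similarity, ~1.0
```

Binding and unbinding are both O(d), which is what makes similarity computation and analogical mapping cheap at this dimensionality.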

2. Construction and Organization of Concept Memory

Chunking and Ontology Formation: In feature-bag systems, after window extraction, ACM learns layered concept ontologies using a chunking algorithm guided by Minimum Description Length (MDL) (Pickett et al., 2013). This process identifies frequently co-occurring feature subsets, scoring candidates by net compression gain:

\mathrm{gain}(c) = (|c| - 1)\,\mathrm{freq}(c) - \left(\log|\Omega| + \log\binom{F_{\max}}{|c|}\right)

Resultant ontologies are directed acyclic graphs (DAGs) supporting multiple inheritance, with nodes defined by their contributing features.
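The gain score above can be rendered directly in code. A toy example, assuming base-2 logarithms (bits) and hypothetical corpus parameters:

```python
# MDL chunking score: a candidate chunk c (a feature subset) is worth keeping
# when replacing its occurrences by a single new symbol compresses the corpus.
from math import log2, comb

def gain(chunk_size, freq, n_concepts, f_max):
    """chunk_size = |c|, freq = freq(c), n_concepts = |Omega|,
    f_max = number of available features (F_max)."""
    return (chunk_size - 1) * freq - (log2(n_concepts) + log2(comb(f_max, chunk_size)))

# A 3-feature chunk recurring 50 times, with 1024 stored concepts and
# 200 available features:
g = gain(3, 50, 1024, 200)
# g is positive, so chunking this subset yields net compression
```

The first term counts the symbols saved at each occurrence; the parenthesized term charges for describing the chunk itself, so rare or tiny chunks score negatively and are pruned.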

Generalization Contexts: In relational ACMs—particularly those integrated into cognitive architectures like Soar—each learned concept maintains a "generalization context": a pair $\langle \text{Examples}, \text{Generalizations} \rangle$, where each generalization is a weighted set of abstracted predicate facts. Fact probabilities are updated incrementally as more examples align analogically, supporting concept refinement (Mohan et al., 2020, Mohan et al., 2022):

p(f)=count(f)Np(f) = \frac{\text{count}(f)}{N}
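The incremental update behind this probability can be sketched as a counter over abstracted facts (the structure names below are illustrative):

```python
# Generalization context sketch: each abstracted fact keeps a count that is
# bumped whenever a new example aligns analogically, so p(f) = count(f) / N.
from collections import Counter

class GeneralizationContext:
    def __init__(self):
        self.counts = Counter()   # abstracted fact -> count(f)
        self.n = 0                # number of aligned examples, N

    def assimilate(self, abstract_facts):
        self.n += 1
        self.counts.update(abstract_facts)

    def p(self, fact):
        return self.counts[fact] / self.n if self.n else 0.0

gc = GeneralizationContext()
gc.assimilate({("color", "X", "red"), ("shape", "X", "cube")})
gc.assimilate({("shape", "X", "cube")})
# p(shape fact) = 2/2 = 1.0; p(color fact) = 1/2 = 0.5
```

Facts whose probability stays high across examples become the stable core of the concept; low-probability facts can later be thresholded out during analogical inference.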

Hyperdimensional Memory Store: In geometric ACMs, the memory comprises base axes (hypervectors), codebooks mapping property values to vectors, and a database of stored concept hypervectors. Retrieval is mediated by nearest-neighbor search with respect to a cosine or Euclidean similarity metric (Goldowsky et al., 2024).
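Retrieval from such a store reduces to a nearest-neighbor search. A minimal sketch with real-valued vectors and cosine similarity (labels and dimensions are hypothetical):

```python
# Hyperdimensional memory store sketch: concept hypervectors stacked in a
# matrix, queried by cosine nearest-neighbor search.
import numpy as np

rng = np.random.default_rng(1)
d, n = 1024, 5
memory = rng.standard_normal((n, d))            # stored concept hypervectors
labels = ["red", "green", "blue", "cube", "ball"]

def retrieve(query):
    sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query))
    best = int(np.argmax(sims))
    return labels[best], float(sims[best])

noisy = memory[2] + 0.3 * rng.standard_normal(d)  # a corrupted "blue" vector
label, sim = retrieve(noisy)
# label == "blue": high dimensionality makes retrieval robust to noise
```

Because random high-dimensional vectors are nearly orthogonal, even a substantially corrupted query stays far closer to its source concept than to any other.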

3. Analogical Retrieval and Inference Mechanisms

Indexing and Parsing: Feature-bag ACMs store inverted indexes, enabling sublinear retrieval by intersecting feature-to-concept lookup lists. Parsing a novel structure involves window extraction, conversion to feature bags, and hierarchical matching through window and schema ontologies (Pickett et al., 2013).
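The inverted-index intersection can be sketched as follows (class and feature names are illustrative, not from the cited system):

```python
# Inverted-index retrieval sketch: each feature maps to the set of concepts
# containing it, and a query is answered by intersecting those posting sets
# instead of scanning every stored concept.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # feature -> concept ids

    def add(self, concept_id, features):
        for f in features:
            self.postings[f].add(concept_id)

    def candidates(self, query_features):
        # Intersect postings rarest-first so the running set shrinks quickly.
        posts = sorted((self.postings[f] for f in query_features), key=len)
        result = set(posts[0])
        for p in posts[1:]:
            result &= p
            if not result:
                break                      # early exit: no concept matches
        return result

idx = InvertedIndex()
idx.add("story1", {"betrayal", "journey"})
idx.add("story2", {"betrayal", "revenge"})
hits = idx.candidates({"betrayal", "revenge"})
# hits == {"story2"}
```

The cost is governed by the shortest posting list rather than the total number of stored concepts, which is the source of the sublinear scaling.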

Analogical Matching in Symbolic ACM: Analogical retrieval is realized via Structure-Mapping Engine (SME) algorithms, which align the relational graph of a query with stored generalizations, computing a similarity score normalized by graph size and mapped propositions. If similarity exceeds a match threshold (e.g., $\tau_m \approx 0.75$), analogical inference proceeds by proposing facts from the generalization, instantiated via the correspondence mapping, subject to fact probability thresholding (Mohan et al., 2022). Projection operations, crucial for action concepts, infer expected future states given temporal traces aligned with learned action schemas.

Category-Based and Property-Based Analogies in Hyperspace ACM: Neuro-symbolic ACMs implement two principal analogy types:

  • Category-based: the classic parallelogram operation, $A:B :: C:X \implies \hat{\mathbf{x}} = (\mathbf{c} \circledast \mathbf{a}^{-1}) \circledast \mathbf{b}$
  • Property-based: direct manipulation of property axes via binding/unbinding and application of extrapolated deltas.

Decoded analogical results are mapped back to property vectors via resonator networks, enabling robust recognition of prototype structure or graded category membership (Goldowsky et al., 2024).
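The parallelogram operation can be demonstrated in a toy phasor hyperspace, echoing the color domains used in the cited evaluations (vectors here are random placeholders, not learned color encodings):

```python
# Parallelogram analogy sketch: x_hat = (c * a^-1) * b, where binding is
# elementwise complex multiplication and the inverse of a unit phasor is
# its conjugate.
import numpy as np

d = 10_000
rng = np.random.default_rng(2)
phasor = lambda: np.exp(1j * rng.uniform(-np.pi, np.pi, d))

red, dark_red, blue = phasor(), phasor(), phasor()
delta = dark_red * np.conj(red)        # the "darken" relation as a delta vector
dark_blue = delta * blue               # ground-truth target of the analogy

a, b, c = red, dark_red, blue          # A:B :: C:X
x_hat = (c * np.conj(a)) * b           # parallelogram rule

sim = np.abs(np.vdot(x_hat, dark_blue)) / d
# sim ~ 1.0: the inferred vector lands on the analogical target
```

In a full system the raw $\hat{\mathbf{x}}$ would then be cleaned up by a resonator network that factors it back into its property-axis components.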

4. Integration with Cognitive Architectures

ACM subsystems have been embedded as declarative long-term memory modules in cognitive agent architectures, most notably Soar (Mohan et al., 2020, Mohan et al., 2022).

  • Interfaces: ACM interfaces to core Soar modules for perception, working memory, and action selection. New concepts are registered on-the-fly (via \texttt{create}), and grounded examples are incrementally assimilated (via \texttt{store}). Analogical memory participates both in linguistic comprehension (recognizing references) and in action planning (projecting next states).
  • Learning Dynamics: The architecture follows an active-learning regime: examples are stored only when existing knowledge structures fail to resolve queries, and assimilation is gated by analogical similarity thresholds. ACM supports incremental and rapid concept generalization from minimal training instances.
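The storage gate described above can be sketched as a simple policy over a similarity function (the threshold value follows the $\tau_m$ figure quoted earlier; the similarity measure and data shapes here are placeholders):

```python
# Active-learning store policy sketch: a new grounded example is assimilated
# only when no existing generalization matches above the similarity threshold.

TAU_M = 0.75  # match threshold

def maybe_store(memory, example, similarity):
    """memory: list of stored examples; similarity: callable on pairs."""
    best = max((similarity(g, example) for g in memory), default=0.0)
    if best >= TAU_M:
        return "resolved"        # existing knowledge suffices; nothing stored
    memory.append(example)       # gate failed: store the grounded example
    return "stored"

mem = []
jaccard = lambda a, b: len(a & b) / len(a | b)
r1 = maybe_store(mem, {"red", "cube"}, jaccard)   # "stored" (memory empty)
r2 = maybe_store(mem, {"red", "cube"}, jaccard)   # "resolved" (exact match)
```

This gating is what keeps memory growth proportional to genuinely novel experience rather than to total input volume.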

5. Empirical Evaluation and Performance

Efficiency and Scaling: Feature-bag ACM systems demonstrate sublinear retrieval scaling in the number of stored concepts due to DAG-based ontologies and feature-indexing. In narrative retrieval, Spontol achieves a 6.5× reduction in retrieval operations compared to linear baseline MAC/FAC, with only a 4.5% penalty in accuracy (Pickett et al., 2013).

Sample Results: In simulated robotic domains using ACM-integrated Soar, concepts spanning visual, spatial, and action domains are acquired with few examples.

  • Visual concepts: store calls per concept drop from ~2 to 0 after 15 lessons, with both generalization and specificity reaching 100% on evaluation sets.
  • Action concepts: correct action continuation via analogical projection is achieved after just 1–2 demonstrations (Mohan et al., 2020, Mohan et al., 2022).

Geometric ACMs: In toy color domains, category and property-based analogies yield analogical accuracy rates where the nearest prototype is consistently recovered within a similarity threshold (sim > 0.9), typically after only two resonator iterations (Goldowsky et al., 2024).

6. Comparative Methodologies and Theoretical Context

ACM contrasts with traditional predicate-based structure-mapping models by enabling:

  • Spontaneous segmentation and retrieval from large, unsegmented relational domains without manually specified subgraphs (Pickett et al., 2013).
  • Integration of interactive, embodied concept acquisition within closed cognitive systems (Mohan et al., 2022).
  • Continuous and property-based reasoning in high-dimensional semantic spaces that support both symbolic abstraction and metric similarity (Goldowsky et al., 2024).

The chunking and ontology-based approaches provide hierarchical, compressive representations that facilitate human-plausible scalability in analogical inference. Hyperdimensional ACMs extend these capabilities by supporting flexible geometric manipulation in conceptual spaces theory.

7. Limitations and Prospects

Limitations of current ACM models include the lack of built-in forgetting mechanisms, unhandled disjunctive or compositional concepts, and incomplete architectural integration for real-time performance in large-scale settings (Mohan et al., 2020, Mohan et al., 2022). Emerging directions involve combining geometric and structural analogical models, enhancing compression and pruning strategies, and deeper integration into neurocognitive architectures.


Citations:

(Pickett et al., 2013): "Spontaneous Analogy by Piggybacking on a Perceptual System"
(Mohan et al., 2020): "Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition"
(Mohan et al., 2022): "Analogical Concept Memory for Architectures Implementing the Common Model of Cognition"
(Goldowsky et al., 2024): "Analogical Reasoning Within a Conceptual Hyperspace"
