
Textual Forma Mentis Networks

Updated 19 January 2026
  • TFMNs are cognitive-network models that integrate semantic, syntactic, and emotional associations into labelled graphs, mapping the conceptual landscapes of individuals or populations.
  • They utilize advanced NLP techniques like syntactic parsing, lemmatization, and dependency tree extraction, enriched with emotion lexica for precise affective annotation.
  • TFMNs support diverse applications including creativity prediction, psychopathology assessment, and STEM anxiety analysis by providing reproducible metrics and interpretable network features.

Textual Forma Mentis Networks (TFMNs) are cognitive-network models that combine semantic, syntactic, and emotional associations in language to reconstruct the conceptual landscape—the "forma mentis"—manifested by individuals or populations. Each TFMN is a mathematically defined, labelled graph whose nodes represent lexical concepts and whose edges encode association patterns, either from text corpora or free-association data. Node attributes, including valence and discrete emotion scores, augment traditional semantic networks, yielding a transparent representation capable of supporting psychological, educational, and computational analyses.

1. Formal Definitions and Mathematical Foundations

A Textual Forma Mentis Network is typically constructed as a labelled graph G = (V, E, ℰ), where:

  • V is the set of nodes, each corresponding to a word lemma or lexical concept.
  • E ⊆ V × V is the set of undirected edges, representing either syntactic dependencies (from parsed text) or semantic associations (from WordNet, free associations, or co-occurrence statistics).
  • ℰ : V → ℝᵏ is an emotional annotation function, assigning each node a real-valued vector (e.g., z-scores for Plutchik's eight emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, trust).

Syntactic parsing (using spaCy or comparable NLP frameworks) produces dependency trees for sentences. Edges link tokens u, v if the path between them in the dependency tree has length ≤ R (typically R = 3 for short stories, R = 4 for longer texts) (Haim et al., 2024, Carrillo et al., 9 May 2025, Passaro et al., 12 Jan 2026). Lemmatization ensures inflected forms are collapsed to canonical nodes. Semantic links (e.g., shared synsets or strong embedding cosine similarity) and stopword exclusion complete the network's structural layer.
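
The dependency-radius rule above can be sketched in a few lines of pure Python. This is a toy illustration, not the authors' pipeline (which uses spaCy): the parse tree is hard-coded as an adjacency dict, and a BFS measures path lengths so that every token pair within radius R becomes an edge.

```python
from collections import deque
from itertools import combinations

def tfmn_edges(dep_tree, radius=3):
    """Link token pairs whose dependency-tree path length is <= radius.

    dep_tree: adjacency dict mapping each token to its tree neighbours
    (head and dependents), treated as an undirected tree.
    """
    def path_len(src, dst):
        # Breadth-first search over the (small) dependency tree.
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nxt in dep_tree[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return float("inf")

    return {frozenset((u, v))
            for u, v in combinations(dep_tree, 2)
            if path_len(u, v) <= radius}

# Hypothetical parse of "students fear difficult mathematics":
# fear -> students (nsubj), fear -> mathematics (obj),
# mathematics -> difficult (amod)
tree = {
    "fear": ["students", "mathematics"],
    "students": ["fear"],
    "mathematics": ["fear", "difficult"],
    "difficult": ["mathematics"],
}
edges = tfmn_edges(tree, radius=2)
```

With radius 2, "students" and "difficult" stay unlinked (their tree path has length 3), which is exactly how the radius parameter prunes long-range, weakly related token pairs.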

Emotional annotation draws on lexica such as EmoLex or the Warriner valence norms. Z-scores for each emotion e are computed as:

z_e = (f_e(d) − μ_e) / σ_e

where f_e(d) is the count of words for emotion e in document d, while μ_e and σ_e are the mean and standard deviation under a random baseline (null model) from the lexicon (Haim et al., 2024, Carrillo et al., 9 May 2025, Passaro et al., 12 Jan 2026).
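
A minimal sketch of this standardization, assuming the null-model counts have already been drawn (the sampling scheme itself is paper-specific and not reproduced here):

```python
import statistics

def emotion_z_scores(doc_counts, null_samples):
    """Standardize per-emotion word counts against a null model.

    doc_counts: {emotion: count of matching lexicon words in the document}
    null_samples: {emotion: counts observed in randomized baseline documents}
    """
    z = {}
    for emo, f_e in doc_counts.items():
        mu = statistics.mean(null_samples[emo])       # mu_e
        sigma = statistics.stdev(null_samples[emo])   # sigma_e
        z[emo] = (f_e - mu) / sigma
    return z

# Hypothetical numbers: 9 fear-words observed vs. a null mean of 5.
z = emotion_z_scores({"fear": 9}, {"fear": [4, 5, 6, 5]})
```

A large positive z_e flags an emotion that is over-expressed relative to chance, which is what the node-level annotations in ℰ record.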

2. Construction Workflows and Parameter Choices

TFMN construction is a multiphase process:

  1. Text Preprocessing: Sentence splitting and tokenization, followed by filtering for content words (nouns, verbs, adjectives, adverbs, optionally pronouns) and lemmatization (Passaro et al., 12 Jan 2026).
  2. Syntactic Layer Formation: Dependency trees are created per sentence, with edges added between nodes within a defined radius R in the tree. Aggregation across sentences yields the global network.
  3. Semantic Layer Formation: Optional enrichment with WordNet synonyms, hypernyms, or distributional similarity. Weighted or binary edges may be used depending on inclusion method (Carrillo et al., 9 May 2025, Stella, 2020).
  4. Emotional Labeling: Each node receives valence and emotion scores as per the lexicon. Negations are handled structurally—e.g., "not happy" becomes "unhappy" via antonym lookup, affecting emotion assignment (Haim et al., 2024, Passaro et al., 12 Jan 2026).
  5. Feature Extraction: Network metrics are computed for downstream modelling: degree k_i, clustering coefficient C_i, average shortest-path length (ASPL), PageRank x_i, modularity Q, betweenness centrality BC(i), k-core decomposition, assortativity, local/global efficiency, and centrality-weighted emotion scores.

Parameter choices, such as R for the dependency-path radius, window sizes for co-occurrence networks, and pronoun inclusion or exclusion, significantly affect network topology and must be standardized for reproducibility (Passaro et al., 12 Jan 2026).
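
To make step 5 concrete, here is a small pure-Python sketch of three of the listed metrics (average degree, mean local clustering, ASPL) on a toy undirected graph; production workflows would typically use a library such as NetworkX instead:

```python
from collections import deque
from itertools import combinations

def network_features(adj):
    """Compute a few TFMN structural metrics on a connected undirected graph.

    adj: {node: set of neighbour nodes}
    """
    n = len(adj)
    avg_degree = sum(len(nb) for nb in adj.values()) / n

    def clustering(v):
        # Fraction of the node's neighbour pairs that are themselves linked.
        nb = adj[v]
        k = len(nb)
        if k < 2:
            return 0.0
        links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
        return 2 * links / (k * (k - 1))

    mean_clustering = sum(clustering(v) for v in adj) / n

    def bfs_dists(src):
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist

    # ASPL: mean shortest-path length over all ordered node pairs.
    total = sum(d for v in adj for u, d in bfs_dists(v).items() if u != v)
    aspl = total / (n * (n - 1))
    return {"avg_degree": avg_degree,
            "clustering": mean_clustering,
            "aspl": aspl}

# Toy graph: a triangle a-b-c with a pendant node d attached to c.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
feats = network_features(g)
```

These per-network scalars are exactly the kind of values that get concatenated into the feature vectors described in the next section.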

3. Feature Representation and Machine Learning Applications

TFMN features are consolidated into numerical vectors for use in predictive models (classification, regression):

  • Structural metrics: ASPL, diameter, clustering coefficient, average degree, PageRank, modularity, core size, efficiency.
  • Emotional metrics: z-scores for each of Plutchik's emotions, mean valence/arousal, emotion richness.
  • Composite scores: Centrality-weighted emotion (e.g., Z_e^w), neighborhood emotional averages.

Feature vectors serve as input to models such as XGBoost (in multi-class or regression settings), Random Forests, or neural networks. SHAP values provide interpretability, quantifying each feature's contribution via the Shapley value formula:

φ_i = Σ_{S ⊆ F∖{i}} [|S|! (M − |S| − 1)! / M!] · [f_{S∪{i}}(x_{S∪{i}}) − f_S(x_S)]

where F is the feature set, M = |F| is its cardinality, and f_S is the model restricted to feature subset S (Haim et al., 2024, Carrillo et al., 9 May 2025).
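
The formula can be computed exactly for tiny feature sets. The sketch below does this for a hypothetical additive value function over three TFMN features; it is exponential in |F| and purely illustrative (the SHAP library approximates these values for real models):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values phi_i for a set-valued model value_fn(S)."""
    M = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                # Weight |S|! (M - |S| - 1)! / M! from the formula above.
                weight = (factorial(len(S)) * factorial(M - len(S) - 1)
                          / factorial(M))
                # Marginal contribution of feature i to coalition S.
                total += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phi[i] = total
    return phi

# Hypothetical additive model: each feature contributes a fixed amount,
# so the Shapley values should recover those amounts exactly.
contrib = {"aspl": 0.5, "degree": 0.3, "valence": 0.2}
phi = shapley_values(list(contrib), lambda S: sum(contrib[f] for f in S))
```

For an additive value function the Shapley values equal the per-feature contributions, which is a useful sanity check before trusting the attribution on a non-additive model.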

4. Empirical Findings and Comparative Analyses

TFMNs have demonstrated efficacy across domains:

  • Creativity Prediction: In (Haim et al., 2024, Passaro et al., 12 Jan 2026), TFMN-derived structural features (especially PageRank, degree, ASPL, clustering coefficient) were found to be the strongest predictors of human-rated creativity in stories. Emotion features added value but did not surpass structure for human judgment. GPT-3.5 deviated, weighting emotional richness more heavily when rating its own output, highlighting substantial misalignment with human evaluative heuristics.
  • Psychopathology Assessment: TFMNs built from adolescent interview transcripts predicted latent variables such as social maladjustment, internalizing behavior, and neurodevelopmental risk via regression models. Notably, modularity and core-periphery organization increased with social maladjustment, while betweenness centrality and disgust expressions indexed internalizing symptoms. Neurodevelopmental risk was inversely related to network local efficiency (Carrillo et al., 9 May 2025).
  • Math and STEM Anxiety: In student vs. researcher TFMNs, core STEM concepts (e.g., "mathematics") formed high-degree, negative-valence hubs connected to anxiety-associated concepts in students, whereas researchers displayed dispersed, positively valenced clusters ("creativity," "art"). Emotional contagion analyses quantified the network distance—longer for students—between negative and positive concepts, guiding intervention design (Stella, 2021, Stella, 2020, Stella et al., 2020).
  • Social Media and Public Perception: Large-scale TFMNs built from tweets revealed topic centrality (via closeness and degree metrics) and sentiment structure, e.g., positive framing of "gender gap" and awareness of stereotype threat around "woman" and "man". No anger or stereotypical associations were detected around "scientist" nodes online (Stella, 2020).

TFMNs systematically outperform classic co-occurrence networks for short texts and interpretive cognitive tasks; they avoid surface adjacency pitfalls, correctly handle compositional syntax, and are robust to parameter variability (Passaro et al., 12 Jan 2026).

5. Methodological Extensions and Reproducibility

TFMN methodology supports several extensions:

  • Multilayer Networks: Separate semantic, syntactic, and affective layers capture domain-specific influences; multiplex participation coefficients quantify cross-layer integration (Stella, 2020).
  • Dynamic TFMNs: Construction at multiple temporal points enables study of mindset evolution or effects of pedagogical interventions.
  • Continuous Embeddings: In addition to discrete valence, continuous arousal/dominance can be encoded as node attributes, expanding affective resolution (Carrillo et al., 9 May 2025).
  • Reproducibility: Standardization of parser tools (e.g., spaCy), emotion lexicon assignments, and scaling procedures is essential. Complete workflows and code are available on OSF repositories (Passaro et al., 12 Jan 2026).

TFMNs' transparent computational pipeline facilitates direct mapping from raw text to cognitive network, supporting generalizability across domains, including education research, clinical psychology, computational social science, and language-based AI assessments (Haim et al., 2024, Carrillo et al., 9 May 2025, Stella et al., 2020, Stella, 2021, Stella, 2020).

6. Interpretation, Limitations, and Implications

TFMNs provide granular insight into both the structural and emotional organization of conceptual frameworks expressed in language. Key interpretive features include:

  • High-degree, negatively valenced hubs as indicators of anxiety or maladaptive framing.
  • Modular community structures highlighting thematic or affective subdomains.
  • Centrality-weighted emotion indices informing on the global or local affective tone of narratives.
  • SHAP-based model explanations which isolate the predictive contribution of network topology versus emotional features.

Limitations include reliance on population-averaged lexica for sentiment labeling, the challenge of modifier handling (negations, intensifiers), and potential network sparsity in very short texts. Moreover, emotion annotation may not fully capture target-group specificity. In AI applications (e.g., creativity assessment with GPT-3.5), TFMNs expose crucial discrepancies between human and machine evaluative heuristics—suggesting caution in the automated rating and generation of creative content (Haim et al., 2024).

A plausible implication is that TFMNs, through their interpretable cognitive architecture, enable both the diagnosis and principled intervention in mindset structures, thereby supporting educational, clinical, and computational objectives. They are uniquely suited for quantifying shifts in concept-emotion couplings pre- and post-intervention, guiding targeted pedagogical, therapeutic, or policy responses (Stella, 2021, Stella, 2020).
