CoDiNG Opinion Prediction Model
- The CoDiNG model integrates continuous latent opinion vectors with discrete verbalizations using a threshold rule to capture cognitive memory reinforcement.
- It leverages the CogSNet framework to model memory decay and reinforcement, achieving 20-30% F1 score improvements over classic Naming Game models.
- Empirical evaluations show fairness disparities in prediction errors across demographics, highlighting the need for context-aware refinements.
The CoDiNG (Continuous–Discrete Naming Game) model is a hybrid, cognitively inspired framework for simulating and predicting opinion expression in social networks. Developed as an extension of the classic Naming Game, CoDiNG captures both the continuous and discrete facets of opinion states in agents, integrating memory-based reinforcement dynamics from cognitive science. Its primary application is to predict, from temporal social-interaction data, how individuals discretely verbalize their opinions at survey time points, with particular emphasis on outperforming legacy models while revealing subgroup fairness dynamics in prediction accuracy (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
1. Theoretical Motivation and Architecture
CoDiNG is grounded in cognitive-sociological and cognitive-psychological theory. Recognizing that public, verbalized opinions (discrete labels) are simplified manifestations of richer, latent internal attitudes (continuous vectors), the model posits a two-layer representation for each agent. The first, a continuous latent vector $x_i = (x_{i,A}, x_{i,B})$, encodes the agent’s private support for two opposed positions (e.g., agree/disagree). The second, a discrete label $o_i \in \{A, B, AB\}$, records the agent’s outwardly expressed opinion at observational survey times (Nurek et al., 2024).
The temporal and topological structure of social interactions is reconstructed mathematically via the CogSNet model, where edge weights between agents encode memory traces with exponential decay and reinforcement, mirroring empirically observed cognitive mechanisms such as primacy/recency effects (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
2. Model Formalization and Dynamics
2.1 Continuous-Discrete Coupling
Discrete opinion expression is determined by a threshold rule:

$$o_i = \begin{cases} A, & x_{i,A} - x_{i,B} > \varepsilon \\ B, & x_{i,B} - x_{i,A} > \varepsilon \\ AB, & \text{otherwise,} \end{cases}$$

where $\varepsilon$ is a tunable ambiguity threshold, calibrated for stability and for alignment with empirical rates of “not sure” responses (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
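The threshold rule can be illustrated with a minimal sketch (the names `verbalize`, `x_a`, `x_b`, and `eps` are illustrative; the exact comparison used in the paper may differ in detail):

```python
def verbalize(x_a: float, x_b: float, eps: float) -> str:
    """Map a continuous latent opinion pair to a discrete label.

    Returns "A" or "B" when one coordinate dominates by more than
    the ambiguity threshold eps, and the ambiguous "AB" otherwise.
    """
    if x_a - x_b > eps:
        return "A"
    if x_b - x_a > eps:
        return "B"
    return "AB"
```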
2.2 Latent State Update—CogSNet Mechanism
Interaction events (messages or contacts) trigger latent state updates. When a speaker $i$ interacts with listener $j$ at time $t$ and expresses position $k \in \{A, B\}$, the recipient's corresponding latent component is reinforced:

$$x_{j,k}(t) = \mu + x_{j,k}(t')\, e^{-\lambda (t - t')} (1 - \mu),$$

where $\mu$ is the reinforcement peak, $\lambda$ the exponential decay rate, $\theta$ the removal threshold, and $t'$ the last update timestamp for the chosen coordinate (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
Agents exhibiting the ambiguous “AB” state select which coordinate to reinforce at random. Edges with decayed weights below $\theta$ are pruned from the network, preventing obsolete or irrelevant connections from influencing current opinion dynamics (Stępień et al., 7 Jan 2026).
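A minimal Python sketch of this CogSNet-style reinforcement and pruning, with placeholder values for $\mu$, $\lambda$, and $\theta$ (illustrative defaults, not the paper’s calibrated constants):

```python
import math

def reinforce(x: float, t_last: float, t: float,
              mu: float = 0.4, lam: float = 0.01) -> float:
    """Reinforce one latent coordinate at event time t.

    The stored value first decays exponentially since the last
    update, then receives the reinforcement peak mu.
    """
    decayed = x * math.exp(-lam * (t - t_last))
    return mu + decayed * (1.0 - mu)

def decay(x: float, t_last: float, t: float,
          lam: float = 0.01, theta: float = 0.1) -> float:
    """Pure decay between events; values that fall below the
    removal threshold theta are zeroed (pruned)."""
    decayed = x * math.exp(-lam * (t - t_last))
    return decayed if decayed >= theta else 0.0
```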
2.3 Parameterization and Loss
Model parameters are either fixed by prior cognitive-science studies or tuned by classification loss over observed data, e.g., cross-entropy over ground-truth discrete opinions:

$$\mathcal{L} = -\sum_{i} \sum_{c \in \{A, B, AB\}} y_{i,c} \log p_{i,c},$$

where $y_{i,c}$ is the one-hot ground-truth label and $p_{i,c}$ the predicted class probability (Nurek et al., 2024).
No gradient-based training is used during simulation; instead, CoDiNG is evaluated ex post by comparison of predicted discrete opinions to actual survey answers.
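For reference, the cross-entropy scoring used for ex-post parameter comparison can be sketched as follows (class labels encoded as integer indices, an illustrative convention rather than the paper’s implementation):

```python
import math

def cross_entropy(y_true, p_pred):
    """Mean cross-entropy between ground-truth opinion classes
    (integer indices) and predicted class-probability vectors."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        # Clamp to avoid log(0) on degenerate predictions.
        total -= math.log(max(p[y], 1e-12))
    return total / len(y_true)
```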
3. Simulation Protocol and Implementation
CoDiNG is implemented as a rule-based, event-driven agent-based simulator:
- Initialization: Agents’ latent vectors are set from initial survey data.
- Network construction: Communication logs are converted to time-stamped, weighted edges using CogSNet.
- Event sequence: For each chronologically ordered interaction, the speaker’s discrete state is computed; the listener’s corresponding latent component is updated as described above.
- Survey prediction: At each survey wave, the model reports each agent's predicted discrete opinion by evaluating the threshold rule on the current latent state (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
This explicit, cognitively motivated mechanism distinguishes CoDiNG from GNN or deep-learning-based architectures, as there are no trainable linear weights or biases inside the model.
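Assuming a simple interaction-log schema of (time, speaker, listener) triples, the event-driven protocol above can be sketched as follows (field names and default parameter values are illustrative, not from the paper):

```python
import math
import random

def simulate(agents, events, eps=0.2, mu=0.4, lam=0.01):
    """Replay interactions chronologically and update latent states.

    agents: {id: {"x": {"A": float, "B": float},
                  "t": {"A": float, "B": float}}}
    events: chronologically sorted (time, speaker, listener) triples.
    """
    for t, s, l in events:
        xa, xb = agents[s]["x"]["A"], agents[s]["x"]["B"]
        # Speaker verbalizes via the threshold rule.
        if xa - xb > eps:
            pos = "A"
        elif xb - xa > eps:
            pos = "B"
        else:
            pos = random.choice(["A", "B"])  # ambiguous "AB" state
        # Listener's matching coordinate decays, then is reinforced.
        st = agents[l]
        decayed = st["x"][pos] * math.exp(-lam * (t - st["t"][pos]))
        st["x"][pos] = mu + decayed * (1.0 - mu)
        st["t"][pos] = t
    return agents
```

Survey predictions are then read off by applying the threshold rule to each agent’s latent state at the survey timestamp.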
4. Empirical Evaluation and Fairness Analysis
CoDiNG has been benchmarked using the NetSense longitudinal dataset, which comprises smartphone communication logs and periodic survey responses among university students (Nurek et al., 2024, Stępień et al., 7 Jan 2026). Performance is measured by macro-averaged $F_1$ score across three opinion classes on six sociopolitical survey questions. Key findings:
- CoDiNG outperforms the classic Naming Game on four out of six questions, achieving gains of 20–30% in $F_1$ (Nurek et al., 2024).
- Best observed $F_1$ scores are obtained when $\varepsilon$ is close to the empirical fraction of respondents reporting “not sure” (Nurek et al., 2024).
- Systematic discrepancies in misprediction rates are observed for specific populations. For the “Job Guarantee” question, ethnicity-based minorities had a 72.9% misprediction rate, substantially above the overall average; for “Equal Rights,” low-parental-income minorities suffered a 66.7% error rate (Stępień et al., 7 Jan 2026).
- Intersectional status further aggravates errors: on “Euthanasia,” error rates rose monotonically with the number of overlapping minority identities an agent holds, from 49.2% to 75.9% (Stępień et al., 7 Jan 2026).
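The macro-averaged $F_1$ metric underlying these results can be computed directly over the three discrete classes; a minimal sketch:

```python
def macro_f1(y_true, y_pred, classes=("A", "B", "AB")):
    """Macro-averaged F1 over the discrete opinion classes:
    per-class F1 scores averaged with equal weight."""
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```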
Demographic and network-topological features are not inputs to CoDiNG itself. Instead, they are utilized downstream in interpretable classifiers to predict instances in which CoDiNG is likely to err, ultimately facilitating context-aware fairness evaluations.
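At their simplest, such post-hoc fairness audits reduce to comparing per-group misprediction rates; a minimal sketch (the group labels and record schema are illustrative):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group_label, correct: bool) pairs.
    Returns each group's misprediction rate for fairness auditing."""
    counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for group, correct in records:
        counts[group][0] += 0 if correct else 1
        counts[group][1] += 1
    return {g: errors / total for g, (errors, total) in counts.items()}
```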
5. Relation to Other Opinion Prediction Frameworks
CoDiNG occupies a distinct paradigm in opinion forecasting. Traditional Subjective Logic (SL)–based models use belief, disbelief, and uncertainty masses with fusion operators for consensus and discounting, but they lack nuanced agent-specific memory dynamics and do not connect continuous reinforcement to discrete verbalization (Zhao et al., 2019). Deep learning approaches, such as GCN-GRU opinion models, offer scalable and robust handling of temporal and topological heterogeneity but diverge fundamentally by employing gradient-based optimization with embedding layers and explicit loss minimization (Zhao et al., 2019).
By contrast, CoDiNG is purely rule-based, grounded in cognitive trace reinforcement and thresholding. No message-passing neural architecture is involved, and temporal network evolution is dictated by cognitive decay and reinforcement, not by learned weights (Nurek et al., 2024, Stępień et al., 7 Jan 2026).
6. Limitations and Future Directions
CoDiNG’s reliance on fixed cognitive parameters (e.g., $\mu$, $\lambda$, and $\theta$ as adopted from CogSNet) rather than data-driven optimization is recognized as a limitation. Additionally, it is currently restricted to binary opposition (two opinions). Prospective research avenues include:
- Extending to multi-dimensional or multi-polar latent spaces (allowing three or more alternatives).
- Learning the model parameters via end-to-end differentiation, e.g., through the cross-entropy loss of predicted verbalizations (Nurek et al., 2024).
- Incorporating heterogeneous decay rates, emotion, or topic specificity in memory traces, potentially further aligning simulations with observed cognitive-behavioral dynamics (Nurek et al., 2024).
A multi-faceted evaluation is essential, particularly as subgroup prediction disparities motivate the integration of fairness-aware assessment protocols, combining individual demographics and network centrality information in post-hoc error prediction (Stępień et al., 7 Jan 2026).
Table 1. Comparison of Opinion Dynamics Approaches
| Approach | Latent/Discrete Structure | Learning Paradigm |
|---|---|---|
| CoDiNG | 2D continuous + thresholded labels; memory reinforcement | Rule-based; preset parameters; no gradient descent |
| Classic Naming Game | Discrete states only ({A, B, AB}), no memory | Rule-based; stepwise adoption/mixing |
| SL/GCN-GRU | Belief/disbelief/uncertainty masses; neural embeddings | Gradient-based ML; end-to-end loss optimization |
The CoDiNG model thus formalizes and quantifies the interplay between latent memory-driven opinion traces and their discretized public manifestations, offering both empirically validated prediction performance and a framework for analyzing fairness in contemporary opinion modeling (Nurek et al., 2024, Stępień et al., 7 Jan 2026).