Interdisciplinary Journal Impact Assessment
- Interdisciplinary journal impact assessment is a framework that quantitatively evaluates how journals integrate diverse research fields using network science, entropy measures, and statistical normalization.
- It employs metrics such as diversity indices, coherence, and intermediation to reveal how journals bridge distinct disciplinary boundaries.
- Practical applications leverage citation networks, entropy calculations, and the I3 indicator to drive equitable research evaluation and foster innovative, cross-disciplinary insights.
Interdisciplinary journal impact assessment refers to the quantitative evaluation of how scholarly journals enable or embody the integration of knowledge across established disciplinary boundaries. This practice is vital for understanding, incentivizing, and managing research innovation, especially where complex scientific and societal challenges require synthesis beyond mono-disciplinary solutions. Modern approaches fuse network science, advanced bibliometrics, and normalization methods to yield robust interdisciplinarity metrics at the journal level, complementing traditional impact indicators.
1. Theoretical Frameworks and Motivations
Interdisciplinary assessment is predicated on the recognition that both knowledge integration and intermediation drive scientific progress. Foundational conceptualizations distinguish:
- Diversity metrics: Capturing variety (number of fields), balance (distributional evenness), and disparity (cognitive distance between fields) in journal outputs and citations, typically formalized as Rao–Stirling diversity $\Delta = \sum_{i \neq j} p_i p_j d_{ij}$, where $p_i$ is the fractional presence in category $i$ and $d_{ij}$ is the distance between categories $i$ and $j$.
- Coherence: The degree to which disparate domains are actually bridged in a journal’s citation flows, with lower observed-to-expected distance ratios indicating stronger-than-expected linkage of distant categories.
- Intermediation: The embeddedness of a journal in network positions linking disciplinary cores, operationalized via clustering coefficients or average similarity in journal–journal networks. These theoretical distinctions are formalized in quantitative protocols for robust, field-independent journal comparison (Rafols et al., 2011).
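The Rao–Stirling diversity defined above can be sketched directly from a journal's category proportions and a category distance matrix; the proportions and distances below are hypothetical toy values, not data from any cited study.

```python
import numpy as np

def rao_stirling(p, d):
    """Rao-Stirling diversity: sum over pairs i != j of p_i * p_j * d_ij."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(d, dtype=float)
    # Outer product gives p_i * p_j; zero the diagonal to exclude i == j.
    pp = np.outer(p, p)
    np.fill_diagonal(pp, 0.0)
    return float((pp * d).sum())

# Hypothetical example: three categories with balanced shares and unit
# distance between every pair of distinct categories.
p = [1/3, 1/3, 1/3]
d = 1.0 - np.eye(3)
print(rao_stirling(p, d))  # 0.666... (maximal for unit distances)

# A mono-disciplinary journal (all output in one category) scores zero.
print(rao_stirling([1.0, 0.0, 0.0], d))  # 0.0
```

Variety, balance, and disparity each raise the score: more categories, more even shares, and larger pairwise distances all increase $\Delta$.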
2. Methodological Approaches to Measurement
Central methodologies span citation network analysis, entropy/diversity-based indices, and percentile-normalized impact aggregation.
2.1 Network-based Topic Correlation
In medicine, journal interdisciplinarity is modeled as a weighted undirected graph $G = (V, E)$ with nodes as MeSH topics and edge weights $w_{ij}$, the cosine-normalized co-occurrence of concept pairs. Key metrics include:
- Average node strength $\bar{s}$: Mean sum of edge weights per topic, proxying knowledge integration.
- Modularity $Q$: Assesses compartmentalization; higher $Q$ signals sharper boundaries.
- Betweenness centrality $C_B(v)$: Identifies bridging topics, critical for cross-domain connectivity. These network metrics are subject to group-wise comparison (e.g., top-10% SJR “prestigious” vs. other journals) using permutation or bootstrap statistical tests (Du et al., 30 Mar 2025, Du et al., 29 Sep 2025).
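A minimal sketch of the node-strength and density computations on a small hypothetical topic co-occurrence matrix (symmetric, cosine-normalized weights assumed; in practice a library such as NetworkX would also supply modularity and betweenness centrality):

```python
import numpy as np

# Hypothetical symmetric co-occurrence weight matrix for 4 MeSH topics
# (zero diagonal; off-diagonal entries are cosine-normalized weights).
W = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.5, 0.2],
    [0.1, 0.5, 0.0, 0.4],
    [0.0, 0.2, 0.4, 0.0],
])

# Node strength: sum of incident edge weights per topic.
strength = W.sum(axis=1)
avg_strength = strength.mean()  # proxy for knowledge integration

# Unweighted network density: fraction of possible topic pairs linked.
n = W.shape[0]
density = np.count_nonzero(np.triu(W, k=1)) / (n * (n - 1) / 2)

print(strength)      # per-topic strengths: [0.9 1.5 1.0 0.6]
print(avg_strength)  # 1.0
print(density)       # 5 of 6 possible edges present
```

Under the prestige findings discussed below, top-decile journals would tend to show lower `avg_strength` and `density` than the comparison group.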
2.2 Entropy and Diversity Indices
Silva et al. adopt a citing-side entropy index $H_j = -\sum_i p_{ij} \ln p_{ij}$, with $p_{ij}$ the normalized frequency of subject-category $i$ citations received by journal $j$. This metric captures the effective breadth of a journal’s audience and shows a strong, positive linear (Pearson) correlation with both the Journal Impact Factor and total in-strength, offering an aggregate gauge of interdisciplinarity (Silva et al., 2012).
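The entropy index above reduces to Shannon entropy over a journal's category-level citation distribution; the citation counts below are hypothetical.

```python
import math

def citation_entropy(counts):
    """Shannon entropy of a journal's subject-category citation distribution."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# Hypothetical citations received across four subject categories.
broad = citation_entropy([40, 30, 20, 10])

# A journal cited by only one category has zero entropy.
narrow = citation_entropy([100, 0, 0, 0])
print(broad, narrow)  # narrow is 0.0; broad is positive
```

The index is maximized ($\ln k$ for $k$ categories) when citations are spread evenly, matching the "effective breadth of audience" reading.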
2.3 Integrated Impact Indicator (I3)
The I3 framework integrates size and citation quality by assigning weights to percentile ranks of article citation counts within field-year-document-type reference sets: $I3 = \sum_c w_c \, n_c$, where $w_c$ are weights for top percentile bands (e.g., 100 for the top 1%, 10 for the 1–10% band, etc.) and $n_c$ the counts of articles per band. The normalized indicator $I3/N$ adjusts for differing journal sizes. Percentile assignment within field-corrected document pools confers field normalization, and fractional citation weighting further adjusts for reference-list length biases (Leydesdorff et al., 2011, Dong et al., 5 Jan 2026).
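The band weights and the toy journal below are illustrative assumptions (top 1% weighted 100, 1–10% weighted 10, the remainder weighted 1), sketching the $I3 = \sum_c w_c n_c$ aggregation article by article:

```python
def band_weight(percentile):
    """Hypothetical percentile-band weights: top 1% -> 100, 1-10% -> 10, rest -> 1.
    `percentile` is an article's citation percentile within its field-year
    reference set (100 = most cited)."""
    if percentile > 99:
        return 100
    if percentile > 90:
        return 10
    return 1

def i3(percentiles):
    """I3 = sum over percentile classes c of w_c * n_c, computed article-wise."""
    return sum(band_weight(p) for p in percentiles)

# Hypothetical journal: one top-1% paper, two 1-10% papers, seven others.
ps = [99.5, 95.0, 92.0, 50.0, 40.0, 30.0, 20.0, 10.0, 60.0, 70.0]
total = i3(ps)
print(total)            # 100 + 10 + 10 + 7*1 = 127
print(total / len(ps))  # size-normalized I3/N = 12.7
```

Because percentiles are computed inside field-specific reference sets before weighting, the sum is field-normalized by construction.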
3. Disciplinary Biases and Statistical Normalization
Traditional mean-based bibliometric indicators (JIF, CiteScore) are highly sensitive to citation density differences, introducing systemic bias against low-reference or slow-citing disciplines (e.g., mathematics, humanities). Field-normalized bibliometrics and advanced models offer partial correction:
- Fractional citation weighting: Adjusts for field-dependent citation potential by weighting each incoming or outgoing citation inversely by the citing paper’s number of references.
- Simulation-based normalization: Mixture models estimate expected impact under discipline-specific parameters (review cycle, reference density, citation-age curves); the observed IF is scaled relative to this baseline to yield a normalized impact factor (NIF), supporting equitable cross-disciplinary comparison (Zhou et al., 2017).
- Paper-level normalization: Individual article citation counts are normalized by their specific meso-topic reference sets (e.g., CNCI-CT), outperforming journal-level normalization (JCI) in the presence of interdisciplinary journal content (Liao et al., 30 Mar 2025).
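Fractional citation weighting, the first correction above, can be sketched in a few lines: each citing paper contributes $1/r$ rather than $1$, where $r$ is its reference-list length. The reference-list lengths below are hypothetical.

```python
def fractional_citations(citing_ref_counts):
    """Fractionally weighted citation count: each citing paper contributes
    1 / (its number of references), damping field-dependent citation density."""
    return sum(1.0 / r for r in citing_ref_counts if r > 0)

# Hypothetical: five citing papers with long reference lists (high-density
# field) vs. three citing papers with short lists (low-density field).
dense = fractional_citations([50, 40, 60, 45, 55])
sparse = fractional_citations([12, 10, 15])
print(dense, sparse)  # the three sparse-field citations outweigh the five dense-field ones
```

This is the sense in which fractional counting corrects for citation potential: fewer citations from a reference-sparse field can carry more weight than many from a reference-dense one.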
4. Empirical Evidence and Field-Specific Phenomena
Recent large-scale studies in biomedicine and physics demonstrate that highly prestigious journals (top SJR decile) exhibit reduced interdisciplinarity scores (lower network density, node strength) compared to less prestigious venues, with higher modularity reflecting more siloed content. Notably, cancer-related topics act as hubs of interdisciplinarity, disproportionately driving topic-bridging in medical research (Du et al., 30 Mar 2025, Du et al., 29 Sep 2025).
Systematic field-normalized bibliometric assessment (e.g., UK REF posthoc analysis) shows that citation metrics slightly benefit interdisciplinary submissions in some units (notably, Business & Management and Politics), but most effects are modest, context-dependent, and vulnerable to regression-to-the-mean bias due to quantile-scaling (Thelwall et al., 2022). Further, there exists an empirically verified “optimal” range of interdisciplinarity for citing practice—both highly disciplinary and highly cross-field papers tend to see reduced impact, with maxima occurring at intermediate diversity (0908.1776).
5. Practical Implementation Protocols
A generalized protocol for interdisciplinary journal impact assessment entails:
- Selection of reference sets: Choose document universe (field/year/type) and taxonomy (MeSH, ACM CCS, JCR categories).
- Network/data construction: Extract topic, keyword, or subject-category co-occurrence matrices, or article-level citation relations.
- Normalization and metric computation: Apply fractional citation normalization, assign percentile classes or compute entropy/diversity.
- Metric calculation: Derive global metrics (average strength, modularity, entropy, I3), subfield/community aggregates, and visualize networks.
- Statistical significance testing: Use permutation tests, bootstrap resampling, z-tests, and nonparametric classification to compare journal groups or track temporal deviations.
- Diagnostic mapping: Visualize difference networks to detect shifting clusters of interdisciplinary linkage, and map areas of deviation for editorial or funding policy interventions (Du et al., 30 Mar 2025, Rons, 2013, Silva et al., 2012, Dong et al., 5 Jan 2026).
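The significance-testing step above can be sketched as a two-sided permutation test on a group-mean difference; the node-strength values for the two journal groups are hypothetical.

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling under the null hypothesis
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / n_a - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical average node strengths: top-decile SJR vs. other journals.
prestigious = [0.80, 0.90, 0.70, 0.85, 0.75]
others = [1.20, 1.10, 1.30, 1.15, 1.25]
p = permutation_pvalue(prestigious, others)
print(p)  # small p-value: the group difference is unlikely under random labels
```

In practice the same routine applies to modularity, entropy, or I3/N comparisons; bootstrap confidence intervals complement the permutation p-value.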
6. Policy Implications and Guiding Principles
Cumulative findings establish that assessment frameworks relying on narrow disciplinary classifications, rigid scalar impact measures, or short citation windows systematically undercount or obscure the cross-domain influence of interdisciplinary journals. Analyst recommendations include:
- Use of plural, conditional indicator portfolios: Combining network-derived, entropy-based, percentile-integrated, and field-normalized citation indicators yields a more robust, classification-independent assessment (Rafols et al., 2011, Dong et al., 5 Jan 2026).
- Editorial and infrastructural adaptation: Integrate submissions, peer review, and reporting practices that credit multidomain research (e.g., cross-field editorial boards, reporting guideline extensions for complex interventions) (Du et al., 29 Sep 2025).
- Algorithmic improvements: Promote AI-driven topic classification to ameliorate the coverage gap in paper-level normalization, especially in interdisciplinary or heterogeneous journals (Liao et al., 30 Mar 2025).
- Diagnostic quadrant analysis: Employ I3 and I3/N scatterplots to distinguish journals with scale-driven vs. quality-driven impact, aiding portfolio management and funding decisions (Dong et al., 5 Jan 2026).
These guidelines collectively foster more equitable, diagnosable, and actionable journal impact assessment, supporting the recognition, funding, and curation of genuinely integrative science.