
4th-Order Adjacency Tensor

Updated 16 February 2026
  • 4th-Order Adjacency Tensor is a symmetric representation that captures complex 4-adic interactions in hypergraphs and hb-graphs.
  • It utilizes null vertices and normalization techniques to standardize non-uniform edge sizes and incorporate vertex multiplicities.
  • The tensor underpins advanced spectral analysis and scalable algorithms using methods like tensor-times-same-vector for efficient computation.

A 4th-order adjacency tensor is a symmetric tensorial representation that encodes the multiway structure of 4-adic interactions in hypergraphs, non-uniform hypergraphs, and generalized objects such as hyperbag-graphs (hb-graphs, i.e., hypergraphs with multiset edges). The 4th-order adjacency tensor construction is central for advancing spectral theory and higher-order network analysis, supporting both theoretical and computational approaches for systems with complex multiway relationships.

1. Notions of Adjacency in Higher-Order Structures

Adjacency in classical graphs is a binary relationship, naturally represented by a 2nd-order matrix. For hypergraphs—where edges can join any subset of the vertex set—adjacency must generalize to higher-order relations:

  • k-adjacency: A tuple of $k$ (not necessarily distinct) vertices is $k$-adjacent if it is contained in a hyperedge of size at least $k$.
  • e-adjacency: For each edge (hyperedge) $e$, all vertices within $e$ are e-adjacent; this captures the simultaneous co-membership structure central to hypergraph data.

For non-uniform hypergraphs, where edge sizes vary, pairwise matrices fail to capture the intrinsic higher-order connectivity; an order-$r$ tensor, with $r$ the maximal hyperedge cardinality, is required for a lossless representation (Ouvrard et al., 2017, Ouvrard et al., 2018).
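
As a minimal illustration of these two notions, the following Python sketch (the function names and data layout are assumptions for illustration, not from the cited papers) tests k-adjacency and e-adjacency on a hypergraph stored as a list of vertex sets:

```python
def k_adjacent(vertices, edges, k=None):
    """True if the tuple `vertices` (repeats allowed) is contained in
    some hyperedge of size at least k (default: the tuple length)."""
    support = set(vertices)
    k = k if k is not None else len(vertices)
    return any(support <= e and len(e) >= k for e in edges)

def e_adjacent(u, v, edges):
    """True if u and v co-occur in at least one hyperedge."""
    return any(u in e and v in e for e in edges)

edges = [frozenset({1, 2, 3}), frozenset({2, 3, 4, 5})]
print(k_adjacent((2, 3, 4), edges))  # True: contained in the size-4 edge
print(e_adjacent(1, 5, edges))       # False: 1 and 5 never share an edge
```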

2. Construction of the 4th-Order Adjacency Tensor

Explicit constructions vary according to the type of structure—simple hypergraph, non-uniform hypergraph, or hb-graph. The construction outlined below follows the most general approach, based on (Ouvrard et al., 2018), with cross-references to (Aksoy et al., 2023) and (Ouvrard et al., 2017).

a) General Definition for hb-graphs (Natural Multisets)

Let $H = (V, E)$ with $V = \{v_1, \dots, v_n\}$ and each $e \in E$ a multiset, i.e., $m_e : V \rightarrow \mathbb{N}$ records vertex multiplicities. Set $k = r_m(H) = \max_{e \in E} \#_m e$, the maximum multiset cardinality of any edge.

  • For each $e$ of size $s = \#_m e \leq k$, introduce a "null" vertex $N_s$ of multiplicity $k-s$.
  • The extended multiset $\hat{e}$ consists of the elements of $e$, plus $k-s$ copies of $N_s$.
  • The tensor $A = (A_{i_1 \dots i_k})$ has dimension $n + (k-1)$ and is symmetric in all indices:

A_{i_1\cdots i_k} = \sum_{e\in E} c_e\,\Delta^{(e)}_{i_1\cdots i_k}

where $c_e = k/s$, and $\Delta^{(e)}$ is nonzero only when the indices match the extended multiset $\hat{e}$:

\Delta^{(e)}_{i_1\dots i_k} = \begin{cases} \dfrac{\prod_{v \in e^*} m_e(v)!\,(k-s)!}{(k-1)!}, & \text{if the multiset of } (i_1,\dots,i_k) \text{ matches } \hat{e} \\ 0, & \text{otherwise.} \end{cases}
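
The entry value $c_e\,\Delta^{(e)}$ can be sketched in a few lines of Python (an illustrative helper, not the authors' code; edges are assumed to be stored as dicts mapping vertices to multiplicities):

```python
from math import factorial

def entry_value(edge, k):
    """Value assigned to every index tuple matching the extended
    multiset ê of `edge` (a dict {vertex: multiplicity})."""
    s = sum(edge.values())            # multiset size #_m e
    c_e = k / s                       # normalization coefficient
    num = 1
    for m in edge.values():
        num *= factorial(m)           # prod over e* of m_e(v)!
    num *= factorial(k - s)           # (k-s)! for the null vertex
    return c_e * num / factorial(k - 1)

# Edge {a^2, b^1} (s = 3) with k = 4:
print(entry_value({'a': 2, 'b': 1}, 4))   # 4/9
```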

b) Specialization to $k=4$ (Order 4)

Set $k=4$; introduce null vertices $N_1$, $N_2$, $N_3$. For $e$ of size $s$:

  • $c_e = 4/s$
  • $m_e(N_s) = 4-s$
  • Nonzero values:

A_{i_1 i_2 i_3 i_4} = \sum_{e \in E} \frac{4}{s}\,\frac{\prod_{v \in e^*} m_e(v)!\;(4-s)!}{6}

if the index multiset is given by the multiplicities $m_e(v_j)$ for $j = 1, \dots, n$ plus $(4-s)$ entries equal to $n+s$ (the index of $N_s$) (Ouvrard et al., 2018).

An entry is nonzero if and only if, for some $e$ of size $s$, exactly $m_e(v_j)$ of the indices equal $j$ for $1 \leq j \leq n$, and $4-s$ of the indices equal $n+s$.

c) Uniform Hypergraphs

If the hypergraph is 4-uniform (every edge has size exactly 4), no null vertices are needed, and the tensor reduces to $A \in \mathbb{R}^{n \times n \times n \times n}$:

A_{i_1 i_2 i_3 i_4} = \begin{cases} 1/6, & \text{if } \{i_1,i_2,i_3,i_4\} \in E \\ 0, & \text{otherwise} \end{cases}

with full permutation symmetry in all indices (Pearson et al., 2012).
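
A direct, dense construction for this 4-uniform case might look as follows (a minimal sketch under the stated $1/6$ normalization; the function name and 0-based vertex indexing are assumptions):

```python
import itertools
import numpy as np

def adjacency_tensor_4uniform(n, edges):
    """Dense order-4 tensor: 1/6 on every permutation of each edge's
    indices, zero elsewhere (full permutation symmetry by construction)."""
    A = np.zeros((n, n, n, n))
    for e in edges:                          # e: 4-tuple of 0-based vertices
        for p in itertools.permutations(e):
            A[p] = 1.0 / 6.0                 # 1/(k-1)! with k = 4
    return A

A = adjacency_tensor_4uniform(4, [(0, 1, 2, 3)])
print(np.count_nonzero(A))   # 24 nonzero entries, one per permutation
```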

3. Key Properties

  • Permutation symmetry: $A_{i_1 i_2 i_3 i_4}$ is invariant under any permutation of its indices.
  • Degree normalization: The m-degree of vertex $v_i$, denoted $\deg_m(v_i)$ (i.e., the sum of its multiplicities), is recovered by summing $A_{i\,j_2\,j_3\,j_4}$ over the last three indices:

\sum_{j_2,j_3,j_4} A_{i\,j_2\,j_3\,j_4} = \deg_m(v_i)

  • Handling duplicates: Multiplicities $m_e(v)$ in multiset edges contribute factorial weights, such as $m_e(v)!$ in the numerator.
  • Edge encoding: The specific "null" vertex associated with edge size $s$ ensures that edges of different cardinalities are uniquely identified in the tensor structure (Ouvrard et al., 2017, Ouvrard et al., 2018).
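
The degree-normalization property can be checked numerically on a small 4-uniform example (a sketch; the edge list is illustrative):

```python
import itertools
import numpy as np

n = 5
edges = [(0, 1, 2, 3), (1, 2, 3, 4)]   # two overlapping 4-edges
A = np.zeros((n,) * 4)
for e in edges:
    for p in itertools.permutations(e):
        A[p] = 1.0 / 6.0

# Summing over the last three indices recovers each vertex's degree:
degrees = A.sum(axis=(1, 2, 3))
print(degrees)   # vertices 0 and 4 lie in one edge, vertices 1-3 in two
```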

4. Concrete Examples

Given $V = \{a,b,c\}$ ($n=3$) and edges $e_1 = \{a^2, b^1\}$ ($s=3$), $e_2 = \{b^1, c^2\}$ ($s=3$), $e_3 = \{a^1, b^2, c^1\}$ ($s=4$), the tensor $A$ (of dimension 6) includes, for instance:

  • For $e_1$: $c_{e_1} = 4/3$, $m_{e_1}(N_3) = 1$. Each of the 12 tuples matching two $a$'s, one $b$, and one $N_3$ is assigned the $A$-value $4/9$.
  • For $e_3$ ($s=4$): pattern $\{a, b, b, c\}$, with each matching tuple receiving the $A$-value $1/3$.

For $V = \{1,2,3,4\}$ and $E = \{\{1,2,3,4\}\}$: each permutation of $(1,2,3,4)$ as an index tuple yields $A_{i_1 i_2 i_3 i_4} = 1/6$; all other entries vanish.
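
The $e_1$ computation can be verified by enumerating the matching tuples (a short sketch; the string labels are illustrative):

```python
import itertools
from math import factorial

k, s = 4, 3
edge_hat = ['a', 'a', 'b', 'N3']   # extended multiset ê for e_1
# Distinct index tuples matching ê: 4!/2! = 12 permutations.
tuples = set(itertools.permutations(edge_hat))
# c_e * (prod m_e(v)!) * (k-s)! / (k-1)! = (4/3) * 2!*1! * 1! / 3!
value = (k / s) * (factorial(2) * factorial(1) * factorial(k - s)) / factorial(k - 1)
print(len(tuples), value)   # 12 tuples, each assigned 4/9
```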

d) General Hypergraphs via Polynomial Homogenization

The construction via homogeneous polynomials or the hypergraph uniformization process leads to a tensor of dimension $n+3$ (for order 4), assigning $1/6$ to each valid pattern obtained by augmenting an edge of size $j$ with $4-j$ auxiliary indices; all other entries are zero (Ouvrard et al., 2017, Ouvrard et al., 2018).
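
One plausible reading of this construction (a sketch only: the placement of the auxiliary vertex for size-$j$ edges at 0-based index $n+j-1$ is an assumption, as is the sparse-dict layout):

```python
import itertools

def uniformized_tensor(n, edges):
    """Sparse order-4 tensor of dimension n + 3: each edge of size j <= 4
    is padded with 4-j copies of an auxiliary index, and every permutation
    of the padded tuple receives value 1/6."""
    A = {}
    for e in edges:
        j = len(e)
        padded = list(e) + [n + j - 1] * (4 - j)   # assumed aux index for size j
        for p in set(itertools.permutations(padded)):
            A[p] = 1.0 / 6.0
    return A

A = uniformized_tensor(3, [(0, 1), (0, 1, 2)])
print(len(A))   # 12 patterns for the size-2 edge + 24 for the size-3 edge
```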

5. Spectral Theory

The spectral theory of the 4th-order adjacency tensor underpins higher-order generalizations of eigenvalues and centralities:

  • H-eigenvalues:

(A x^3)_i = \lambda x_i^3, \quad x \in \mathbb{R}^n \setminus \{0\}

The set of real H-eigenvalues is finite; if the hypergraph is connected, the spectral radius $\rho_H(A)$ is attained at a unique positive eigenvector (up to scaling) (Pearson et al., 2012).

  • Z-eigenvalues:

A x^3 = \mu x, \quad \|x\|_2 = 1

The largest Z-eigenvalue $\rho_Z(A) > 0$ has an associated nonnegative unit-norm eigenvector; strict positivity is guaranteed under additional "nice" connectivity (Pearson et al., 2012).
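
One plausible way to approximate the dominant Z-eigenpair is a shifted higher-order power iteration in the spirit of the symmetric shifted higher-order power method (SS-HOPM); the sketch below is an illustration under assumed parameters (shift, iteration count), not the method of the cited papers:

```python
import itertools
import numpy as np

def z_eigenpair(A, shift=1.0, iters=500, seed=0):
    """Iterate x <- (A x^3 + shift*x) / ||A x^3 + shift*x|| from a random
    positive start; fixed points are Z-eigenvectors of the order-4 tensor A."""
    n = A.shape[0]
    x = np.abs(np.random.default_rng(seed).standard_normal(n))
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijkl,j,k,l->i', A, x, x, x) + shift * x
        x = y / np.linalg.norm(y)
    mu = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)  # A x^4 on the unit sphere
    return mu, x

# Single 4-uniform edge {0,1,2,3}: the uniform vector x_i = 1/2 satisfies
# (A x^3)_i = prod of the other three coordinates = 1/8 = (1/4) x_i.
A = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    A[p] = 1.0 / 6.0
mu, x = z_eigenpair(A)
print(round(mu, 4))   # mu = 1/4
```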

  • E-eigenvalues:

A x^3 = \nu x, \quad x_1^2 + \dots + x_n^2 = 1

For 4-partite 4-graphs, the E-spectrum is symmetric about zero.

A Gershgorin-type bound applies for $A$ symmetric and degree-normalized:

|\lambda| \leq \max(\Delta, \Delta^*) + 4

where $\Delta$ is the maximum vertex m-degree, and $\Delta^*$ is the maximum m-degree over null indices (Ouvrard et al., 2018). In the 4-uniform case, this reduces to $|\lambda| \leq \Delta$ (Ouvrard et al., 2017).

6. Computational and Algorithmic Aspects

Direct formation of the $n^4$-entry adjacency tensor is typically avoided in practice for large $n$. Recent developments, e.g., tensor-times-same-vector (TTSV) methods, achieve efficient computation ($O(\sum_{e \in E} |e|)$ time), central for centrality and clustering algorithms:

  • For a vector $x \in \mathbb{R}^n$, $y = \mathcal{A} \times_2 x \times_3 x \times_4 x$ is computed without explicit tensor storage (Aksoy et al., 2023).
  • The tensor formally has $n^4$ entries but is nonzero only on patterns prescribed by the edge structure and the normalization scheme.

TTSV algorithms enable the use of 4th-order adjacency tensors in scalable hypergraph data analysis, extracting higher-order structure inaccessible to matrix-based approaches (Aksoy et al., 2023).
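
For the 4-uniform case, a TTSV-style contraction can be sketched directly from the edge list (an illustrative reimplementation, not the authors' algorithm; the function name and edge format are assumptions):

```python
import numpy as np

def ttsv3_4uniform(n, edges, x):
    """Compute y = A x^3 for the 1/6-normalized 4-uniform adjacency tensor
    without materializing it: y_i sums, over edges containing i, the
    product of the other three coordinates of x."""
    y = np.zeros(n)
    for e in edges:                  # e: 4-tuple of distinct 0-based vertices
        for v in e:
            y[v] += np.prod([x[u] for u in e if u != v])
    return y

n = 5
edges = [(0, 1, 2, 3), (1, 2, 3, 4)]
x = np.arange(1.0, 6.0)
y = ttsv3_4uniform(n, edges, x)
print(y)   # -> [24. 72. 48. 36. 24.]
```

Work per edge is constant for fixed order 4, so a full contraction costs time linear in the number of edges, matching the scaling claim above.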

7. Connections and Generalizations

The 4th-order adjacency tensor is encapsulated by multiple frameworks:

  • Hypergraph uniformization and polynomial homogenization (Ouvrard et al., 2017, Ouvrard et al., 2018): Lifts non-uniform edges by introducing auxiliary vertices, producing a homogeneous symmetric tensor encoding all hyperedge layers in a single structure.
  • hb-graphs and multisets (Ouvrard et al., 2018): Accommodates edges with vertex multiplicities, requiring careful normalization and null-vertex bookkeeping.
  • Stirling number–based weighting (Aksoy et al., 2023): Equidistributes edge-weight over all index patterns representing full coverage of edge vertices, crucial for algorithms exploiting multilinear spectral theory.

All approaches converge on the core requirement: a fully symmetric, degree-normalized, combinatorially faithful encoding of 4-adic adjacency, serving as a canonical representation for higher-order spectral, algebraic, and algorithmic analyses in hypergraph-based models.
