
Simplicial SMOTE: Geometric Oversampling

Updated 10 February 2026
  • Simplicial SMOTE is an advanced oversampling method that leverages high-dimensional simplices to synthesize minority class samples from local convex hulls.
  • It generalizes classical SMOTE by sampling over p-simplices, thus offering improved distributional coverage and enabling closer approximation to the decision boundary.
  • Empirical evaluations demonstrate significant improvements in F₁-score and Matthews correlation coefficient across various benchmark and synthetic datasets.

Simplicial SMOTE is an advanced geometric oversampling algorithm for addressing class imbalance in supervised learning. Extending the original SMOTE paradigm, it employs high-dimensional simplicial complexes to generate synthetic minority class samples that more densely and flexibly cover the feature space. By sampling from convex hulls or simplices—rather than the edges—of a k-nearest-neighbor (kNN) graph, Simplicial SMOTE achieves improved local distributional coverage and enables algorithmic generalizations of several established SMOTE variants (Kachan et al., 5 Mar 2025).

1. Background and Motivation

SMOTE (Synthetic Minority Oversampling Technique) introduced a geometric mechanism for class balancing by interpolating new minority points between existing ones along the edges defined by their kNN graph. Though successful, SMOTE’s reliance on 1-simplices (edges) restricts the synthetic distribution to unions of line segments, leading to insufficient filling of high-dimensional or nonconvex minority class regions. Tools from topological data analysis, specifically Vietoris–Rips clique complexes, supply the theoretical framework for more expressive local models, motivating the construction of Simplicial SMOTE (Kachan et al., 5 Mar 2025).

2. Construction of Neighborhood Simplicial Complex

Given a minority-class sample set $X^+ = \{x_1, \ldots, x_{n^+}\} \subset \mathbb{R}^d$, a symmetric kNN graph $G_k = (X^+, E)$ is constructed using the binary relation:

$$(x, y) \in E \iff d(x, y) \leq \max\{d(x, x_{(k)}), d(y, y_{(k)})\}$$

with $x_{(k)}$ denoting the $k$-th nearest neighbor of $x$. The higher-order neighborhood geometry is captured via the clique (Vietoris–Rips) complex $K(G_k)$, whose $p$-simplices are all subsets $\sigma \subset X^+$ of cardinality $p+1$ in which every pair is an edge of $G_k$. The $p$-skeleton $K_p(G_k)$ includes all simplices of dimension at most $p$.
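The edge relation and clique enumeration above can be illustrated with a brute-force NumPy sketch. This is suitable only for small $n^+$; the function names are illustrative, and a practical implementation would use a maximal-clique algorithm such as Bron–Kerbosch rather than exhaustive search:

```python
import numpy as np
from itertools import combinations

def symmetric_knn_edges(X, k):
    """Edges (i, j), i < j, with d(x_i, x_j) <= max of the two k-NN radii."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Sorted row includes the self-distance 0 at index 0, so index k
    # gives the distance to the k-th nearest neighbor.
    r = np.sort(D, axis=1)[:, k]
    return {(i, j) for i, j in combinations(range(len(X)), 2)
            if D[i, j] <= max(r[i], r[j])}

def p_cliques(edges, n, p):
    """All cliques of size <= p+1 (simplices of dimension <= p), brute force."""
    simplices = []
    for size in range(2, p + 2):
        for sigma in combinations(range(n), size):
            if all((a, b) in edges for a, b in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices
```

For four points at the corners of a unit square with $k = 2$, only the four side edges satisfy the relation (the diagonals exceed both k-NN radii), so the clique complex contains no triangles.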

3. Simplicial SMOTE Algorithm

Simplicial SMOTE synthesizes minority class points by uniformly sampling from the convex hulls (simplices) in $K_p(G_k)$. The algorithm proceeds as follows:

  • Input: minority points $X^+$, neighborhood size $k$, maximum simplex dimension $p$, target oversampling $m = n^- - n^+$.
  • Build $G_k$ and enumerate all maximal simplices $\Sigma_p^{\text{MAX}}$ in $K_p(G_k)$.
  • For each synthetic point:
    • Uniformly sample a maximal simplex $\sigma^{(p')}$ from $\Sigma_p^{\text{MAX}}$.
    • Draw barycentric weights $\lambda \sim \text{Dirichlet}(1, \ldots, 1)$ (uniform on the $p'$-simplex).
    • Compute $\hat{x} = \sum_{i=0}^{p'} \lambda_i x_{v_i}$ over the sampled vertices $x_{v_0}, \ldots, x_{v_{p'}}$.
    • Append $\hat{x}$ to the augmented set $X^+$.

This process generalizes the original SMOTE’s pairwise interpolation to convex combinations over arbitrary local neighborhoods, ensuring that synthetic instances can reside anywhere in the convex hull of up to $p+1$ close minority points (Kachan et al., 5 Mar 2025).
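The sampling loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper’s implementation: the maximal simplices are assumed to be precomputed (e.g., by clique enumeration on $G_k$), and `simplicial_smote_sample` is a hypothetical helper name:

```python
import numpy as np

def simplicial_smote_sample(X_min, maximal_simplices, m, rng=None):
    """Draw m synthetic minority points by uniform barycentric sampling.

    X_min: (n, d) array of minority points.
    maximal_simplices: list of vertex-index tuples (assumed precomputed).
    """
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(m):
        # Uniformly pick a maximal simplex, then a uniform point inside it:
        # Dirichlet(1, ..., 1) weights are uniform on the simplex.
        sigma = maximal_simplices[rng.integers(len(maximal_simplices))]
        lam = rng.dirichlet(np.ones(len(sigma)))
        synthetic.append(lam @ X_min[list(sigma)])
    return np.vstack(synthetic)
```

By construction every synthetic point is a convex combination of the chosen simplex’s vertices, so it lies inside that simplex’s convex hull.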

4. Comparison with Classical SMOTE

The classical SMOTE algorithm samples only from 1-simplices, each synthetic point being $\hat{x} = \alpha x_i + (1-\alpha) x_j$ with $\alpha \sim \mathrm{Uniform}[0,1]$. In contrast, Simplicial SMOTE generalizes sampling to unions of $p$-simplices (convex hulls of $p+1$ points), drastically expanding the local model to higher-dimensional domains.

Empirically and theoretically, this confers two principal advantages:

  • Distributional coverage: High-dimensional simplices densely fill the local convex region, minimizing gaps present in purely edge-based models.
  • Boundary proximity: For a set of $p+1$ equidistant minority points, the distance from the origin to their convex hull (e.g., $d_2 = 1/\sqrt{3}$ for a triangle) is strictly less than the distance to any edge ($d_1 = 1/\sqrt{2}$), permitting a closer approximation to the minority–majority decision boundary (Kachan et al., 5 Mar 2025).
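The boundary-proximity claim can be checked numerically for three mutually equidistant unit points, taking the standard basis vectors of $\mathbb{R}^3$ as the minority points and measuring distances from the origin:

```python
import numpy as np

e = np.eye(3)  # three mutually equidistant unit points in R^3

# Nearest point of the edge e0-e1 to the origin is its midpoint (by symmetry),
# and the nearest point of the filled triangle is its centroid.
d1 = np.linalg.norm((e[0] + e[1]) / 2)  # distance to a 1-simplex: 1/sqrt(2)
d2 = np.linalg.norm(e.mean(axis=0))     # distance to the 2-simplex: 1/sqrt(3)
```

Here `d2 < d1`, confirming that the filled 2-simplex reaches strictly closer to the origin than any of its edges.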

5. Simplicial Extensions of SMOTE Variants

Simplicial SMOTE’s geometric data model enables direct generalization of graph-based SMOTE extensions:

  • Simplicial Borderline SMOTE: Restricts simplex selection to the “borderline set” $B = \{ x_i \mid 0 < k^+ < k/2 \}$ and its minority neighbors, sampling only from $K_p(B)$.
  • Simplicial Safe-level SMOTE: For a simplex $\sigma = \{ x_0, \ldots, x_p \}$, the Dirichlet parameters are set to $\alpha_i = 1/\Delta^+(x_i)$, where $\Delta^+(x_i) = k^+/k$, biasing the sampling toward safe minority regions.
  • Simplicial ADASYN: Assigns each simplex $\sigma$ an adaptivity weight $w(\sigma) = \frac{1}{p+1} \sum_{i=0}^{p} \Delta^-(x_i)$, allocating more synthetic samples to neighborhoods with higher majority presence.

These extensions maintain the simplicial sampling core, with modifications to either the sampling domain, barycentric weight distribution, or frequency per simplex (Kachan et al., 5 Mar 2025).
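As one concrete illustration of the per-simplex frequency modification, the ADASYN-style weight can be computed as below. The helper name is hypothetical; `delta_minus` maps each vertex index to its majority-neighbor fraction $\Delta^-(x_i) = k^-/k$, assumed precomputed from the kNN graph:

```python
import numpy as np

def adasyn_simplex_weights(simplices, delta_minus):
    """Adaptivity weight per simplex: mean majority fraction of its vertices,
    normalized to sum to 1 (samples allocated per simplex in proportion)."""
    w = np.array([np.mean([delta_minus[v] for v in sigma])
                  for sigma in simplices])
    return w / w.sum()
```

A simplex whose vertices sit among many majority points thus receives proportionally more synthetic samples than one in a pure minority region.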

6. Theoretical Properties and Empirical Performance

The convex union of $p$-simplices more accurately models the local cluster hulls of minority data. As $p$ approaches the intrinsic dimensionality of a cluster, the average projection distance from majority points to the minority convex domain decreases, allowing the local decision boundary to move closer to the majority class.

Empirical results across 21 UCI/LIBSVM benchmark datasets (dimensions 7–294, imbalance ratios 9–130) and 4 synthetic topological datasets (moons, Swiss rolls, concentric spheres, Gaussian in sphere) demonstrate:

  • Simplicial SMOTE yields mean F₁-score improvements over SMOTE of approximately 4.5% for k-NN classifiers (up to +29.3% on “car_eval_4”) and 5.0% for gradient boosting (up to +25.7% on “oil”).
  • Consistent improvements are observed in the Matthews correlation coefficient.
  • Simplicial forms of Borderline SMOTE, Safe-level SMOTE, and ADASYN outperform their classical counterparts.
  • On synthetic data with complex topology, non-local sampling methods (e.g., global or Gaussian oversampling) fail, while Simplicial SMOTE achieves the best F₁-score (Kachan et al., 5 Mar 2025).

7. Context and Implications

Simplicial SMOTE generalizes the SMOTE framework by leveraging higher-order geometric and topological constructs to obtain a more representative and flexible sampling of minority class regions. A plausible implication is that this approach could be extended beyond binary class imbalance to structured, multi-class, or manifold-based learning problems. The empirical superiority of simplicial variants across diverse architectures and data modalities suggests broad utility for imbalanced learning scenarios where local minority structure is crucial (Kachan et al., 5 Mar 2025).
