Locality Preserving Loss in Representation Learning
- Locality Preserving Loss (LPL) is a regularization approach that preserves local geometric and affinity structures by leveraging graph Laplacian formulations.
- It extends to linear mappings, kernel methods, and deep learning architectures, enhancing latent space representations and unsupervised clustering performance.
- Empirical studies show that LPL improves manifold alignment and topological consistency, making it valuable for both dimensionality reduction and embedding alignment.
Locality Preserving Loss (LPL) refers to a class of regularization and objective functions that explicitly promote the preservation of local geometric or affinity structure in mapping, embedding, and representation learning tasks. LPL has its earliest mathematical foundation in graph Laplacian-based methods such as Laplacian Eigenmaps and Locality Preserving Projection (LPP), but has since been adapted to a variety of deep learning and manifold learning contexts, including deep autoencoders, variational autoencoders, cross-manifold alignment, and representation learning for high-dimensional data.
1. Foundational Formulation: Graph-Based Locality Preserving Loss
The original instantiation of LPL arises from Laplacian-based dimensionality reduction and spectral learning frameworks. Given a dataset $\{x_i\}_{i=1}^{n} \subset \mathbb{R}^d$, a sparse nearest-neighbor graph (constructed via $\epsilon$-ball or $k$-NN) with adjacency (affinity) matrix $W$ is built, where a prototypical choice is $W_{ij} = \exp\!\big(-\|x_i - x_j\|^2 / (2\sigma^2)\big)$ if $x_j$ is a neighbor of $x_i$, $0$ otherwise. The corresponding degree matrix $D = \mathrm{diag}\big(\textstyle\sum_j W_{ij}\big)$ and unnormalized Laplacian $L = D - W$ are then defined, with $L$ positive semi-definite and rows summing to zero.
The Locality Preserving Loss is

$$\mathcal{L}_{\mathrm{LP}}(Y) = \frac{1}{2}\sum_{i,j} W_{ij}\,\|y_i - y_j\|^2 = \mathrm{tr}\big(Y^\top L Y\big),$$

where $y_i \in \mathbb{R}^p$ are the low-dimensional codes and $Y \in \mathbb{R}^{n \times p}$ is the matrix stacking them. Minimization is subject to a normalization constraint ($Y^\top D Y = I$ or $Y^\top Y = I$) to avoid the trivial solution $Y = 0$. The solution is equivalently the Laplacian eigenmap—embedding the data according to the nontrivial smallest eigenvectors of $L$ (Ghojogh et al., 2021).
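The Laplacian identity above is easy to verify numerically. The following sketch (plain NumPy, illustrative names) computes the loss both as the pairwise weighted sum and as the quadratic form $\mathrm{tr}(Y^\top L Y)$ and checks that they agree:

```python
# Sketch: the Locality Preserving Loss in its two equivalent forms.
import numpy as np

def locality_preserving_loss(Y, W):
    """LPL = 1/2 * sum_ij W_ij ||y_i - y_j||^2 = tr(Y^T L Y), L = D - W."""
    D = np.diag(W.sum(axis=1))
    L = D - W                          # unnormalized graph Laplacian
    return np.trace(Y.T @ L @ Y)

rng = np.random.default_rng(0)
W = rng.random((5, 5))
W = (W + W.T) / 2                      # affinities must be symmetric
np.fill_diagonal(W, 0.0)
Y = rng.standard_normal((5, 2))        # 2-D codes for 5 points

pairwise = 0.5 * sum(W[i, j] * np.sum((Y[i] - Y[j]) ** 2)
                     for i in range(5) for j in range(5))
assert np.isclose(locality_preserving_loss(Y, W), pairwise)
```

The quadratic-form version is what makes gradient-based minimization and the eigenproblem formulation convenient in practice.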
2. Linear and Kernel Extensions: Locality Preserving Projection
LPL extends directly to linear mappings, yielding Locality Preserving Projection (LPP):
- Linear case: $y_i = U^\top x_i$ with $U \in \mathbb{R}^{d \times p}$, and $Y = X^\top U$ for the data matrix $X \in \mathbb{R}^{d \times n}$. The loss becomes $\mathrm{tr}\big(U^\top X L X^\top U\big)$, subject to $U^\top X D X^\top U = I$. The solution is given by the generalized eigenproblem $X L X^\top u = \lambda\, X D X^\top u$.
- Kernel case: With a feature map $\phi$ and Gram matrix $K$ ($K_{ij} = k(x_i, x_j)$), one solves $K L K \theta = \lambda\, K D K \theta$ for the coefficient vectors $\theta$ (stacked as $\Theta$), with the embedding $y_i = \Theta^\top K_{:,i}$ (Ghojogh et al., 2021).
Out-of-sample extensions differ: linear LPP allows $y = U^\top x$ for new points $x$; kernel LPP computes $y = \Theta^\top k_x$ with $(k_x)_i = k(x_i, x)$ for points outside the training set.
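A minimal sketch of linear LPP under the formulation above, using a heat-kernel $k$-NN affinity and SciPy's generalized symmetric eigensolver (`lpp` and its parameters are illustrative names, not a library API; here rows of `X` are samples, so the eigenproblem reads $X^\top L X\,u = \lambda X^\top D X\,u$):

```python
# Sketch of linear Locality Preserving Projection (LPP).
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, sigma=1.0):
    """X: (n, d) data, rows are samples. Returns projection U of shape (d, n_components)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # skip self at index 0
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                # symmetrize the graph
    D = np.diag(W.sum(1))
    L = D - W
    A, B = X.T @ L @ X, X.T @ D @ X
    B += 1e-9 * np.eye(B.shape[0])                        # numerical jitter
    vals, vecs = eigh(A, B)                               # ascending eigenvalues
    return vecs[:, :n_components]                         # smallest eigenvectors

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 6))
U = lpp(X)
Y = X @ U          # out-of-sample embedding is just a matrix product
```

Note how the out-of-sample step for the linear case is a single matrix product, which is the practical advantage of LPP over the transductive Laplacian eigenmap.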
3. Locality Preserving Loss in Deep Learning and Autoencoders
Recent methods generalize LPL to deep representation learning, integrating it with autoencoder frameworks:
- In (Chen et al., 2019), LPL is defined as

$$\mathcal{L}_{\mathrm{LP}} = \sum_{i,j} S_{ij}\,\|z_i - z_j\|^2,$$

where $z_i$ are latent encodings and $S_{ij}$ are affinities constructed from pretrained (autoencoder) latents $h_i$. The prior affinity matrix $S$ is built per-column by minimizing $\big\|h_i - \sum_j S_{ij} h_j\big\|^2$ subject to $\sum_j S_{ij} = 1$, $S_{ij} \ge 0$, yielding a sparse $k$-NN structure. LPL is incorporated into the end-to-end fine-tuning loss as

$$\mathcal{L} = \mathcal{L}_{\mathrm{rec}} + \gamma\,\mathcal{L}_{\mathrm{LP}},$$

with $\gamma$ weighting LPL. Empirical ablations demonstrate substantially improved unsupervised clustering performance (ACC gains of 10–15%) with LPL inclusion.
- In (Chen et al., 2022), LPL is formulated via a continuous $k$-NN graph (CkNN), considering both data- and latent-space graphs. The loss penalizes discrepancies between neighbor distances in the two spaces:

$$\mathcal{L}_{\mathrm{LP}} = \sum_{i,j} \big(A^{X} \vee A^{Z}\big)_{ij}\,\big(\|z_i - z_j\| - \eta\,\|x_i - x_j\|\big)^2,$$

where $A^{X}$ and $A^{Z}$ are adjacency matrices on data and latent spaces, and $\eta$ is a learned scaling parameter. The algorithm treats LPL as the primary objective, with reconstruction as a constraint, and extends to hierarchical VAEs.
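The CkNN construction referenced above connects two points when their distance is below a $\delta$-scaled geometric mean of their local $k$-NN scales. A minimal sketch of that rule, assuming the standard criterion $\|x_i - x_j\| < \delta \sqrt{d_k(x_i)\, d_k(x_j)}$ (`cknn_adjacency` is an illustrative name, not the paper's API):

```python
# Sketch: continuous k-NN (CkNN) adjacency construction.
import numpy as np

def cknn_adjacency(X, k=5, delta=1.0):
    """Connect i, j iff ||x_i - x_j|| < delta * sqrt(d_k(x_i) * d_k(x_j))."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    dk = np.sort(d, axis=1)[:, k]          # distance to k-th nearest neighbor
    A = (d < delta * np.sqrt(np.outer(dk, dk))).astype(float)
    np.fill_diagonal(A, 0.0)               # no self-loops
    return A

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 3))
A = cknn_adjacency(X)
assert (A == A.T).all()                    # the rule is symmetric by design
```

Because the threshold adapts to local density (via $d_k$), dense and sparse regions of the data both receive geometrically meaningful neighborhoods, which is what underlies the spectral-convergence property discussed in Section 5.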
4. Locality Preserving Loss in Embedding Alignment
(Ganesan et al., 2020) introduces an LPL for supervised or semi-supervised alignment of vector space manifolds (e.g., cross-lingual embeddings):
- For source embeddings $S$ and target $T$, with paired anchors $(s_i, t_i)$, a mapping $f$ is trained to minimize alignment MSE and

$$\mathcal{L}_{\mathrm{LP}} = \sum_{i} \Big\| f(s_i) - \sum_{j \in N(i)} w_{ij}\, f(s_j) \Big\|^2,$$

where $w_{ij}$ are locally linear reconstruction weights (from Locally Linear Embedding) for $s_i$ from its neighbors $N(i)$. The total objective combines MSE (for alignment), LPL (for locality preservation), the LLE objective (for learning $w$), and an orthogonality regularizer (for stability in linear mappings). LPL is empirically shown to improve alignment, particularly under limited supervision, by increasing effective training sample utilization and providing graph Laplacian-like smoothness regularization.
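The two ingredients of this objective can be sketched in isolation: the LLE weights come from a small constrained least-squares (Gram) system per point, and the locality term is then the reconstruction residual of the mapped points. A minimal illustration with hypothetical helper names (`lle_weights`, `alignment_lpl` are not the paper's API):

```python
# Sketch: LLE reconstruction weights and the per-anchor locality term.
import numpy as np

def lle_weights(s_i, neighbors):
    """min_w ||s_i - sum_j w_j n_j||^2  s.t.  sum_j w_j = 1 (local Gram system)."""
    G = (neighbors - s_i) @ (neighbors - s_i).T      # local Gram matrix
    G += 1e-6 * np.trace(G) * np.eye(len(G))         # regularize singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                               # enforce sum-to-one

def alignment_lpl(fs_i, fs_neighbors, w):
    """|| f(s_i) - sum_j w_j f(s_j) ||^2 for one anchor point."""
    return np.sum((fs_i - w @ fs_neighbors) ** 2)

rng = np.random.default_rng(4)
nbrs = rng.standard_normal((4, 3))
s_i = nbrs.mean(axis=0)              # lies in the affine hull of its neighbors
w = lle_weights(s_i, nbrs)
# Under the identity map, an exactly reconstructible point gives zero loss:
assert alignment_lpl(s_i, nbrs, w) < 1e-6
```

Intuitively, any anchor whose neighborhood geometry survives the mapping contributes near-zero loss, so unannotated neighbors effectively constrain the map through their annotated anchors.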
5. Graph Construction: Affinity and Topology Preservation
Across all applications, the construction of the affinity/adjacency structure—whether via classic $k$-NN, heat kernel, or CkNN—is fundamental:
| Paper | Graph Construction | Affinity Matrix |
|---|---|---|
| (Ghojogh et al., 2021) | $\epsilon$-ball or $k$-NN | $W_{ij} = \exp\!\big(-\|x_i - x_j\|^2/(2\sigma^2)\big)$ or binary $W_{ij} \in \{0, 1\}$ |
| (Chen et al., 2019) | $k$-NN on pretrained latents | $S_{ij}$ via local quadratic program, $\sum_j S_{ij} = 1$ |
| (Chen et al., 2022) | CkNN (density-adaptive $k$-NN) | $A_{ij} = 1$ iff $\|x_i - x_j\| < \delta \sqrt{d_k(x_i)\, d_k(x_j)}$ |
The CkNN affords spectral convergence to the Laplace–Beltrami operator, ensuring that the induced graph accurately reflects the intrinsic topology of the underlying data manifold, including homological features such as connected components and cycles (Chen et al., 2022).
6. Theoretical Motivation and Guarantees
The theoretical grounding of LPL is rooted in spectral graph theory and manifold learning:
- The LPL objective is equivalent to minimizing a quadratic form in the graph Laplacian, penalizing embeddings that place graph neighbors far apart.
- In deep learning extensions, LPL acts as a regularizer that aligns the learned manifold structure with a precomputed local geometry, or ensures that encoder/decoder mappings do not collapse or distort local metric neighborhoods.
- In the CkNN setting, the adjacency graph is guaranteed (in large-sample limits) to yield a Laplacian converging to the manifold's Laplace–Beltrami operator, underpinning homological/topological consistency (Chen et al., 2022).
- In alignment contexts, LPL effectively expands the annotated training set by manifold-based interpolation, reducing overfitting and encouraging locally smooth mappings (Ganesan et al., 2020).
7. Practical Implementation and Empirical Impact
Algorithmic strategies differ per application:
- Laplacian eigenmaps and LPP involve solving (generalized) eigenproblems of $O(n^3)$ or $O(d^3)$ cost respectively, but these can be handled efficiently for sparse Laplacian matrices via iterative eigensolvers (Ghojogh et al., 2021).
- Deep autoencoder training with LPL integrates local graph construction (potentially in minibatch), gradient-based optimization, and, in CkNN, adaptive neighborhood thresholds (Chen et al., 2022).
- Affinity matrices may be fixed (built from pretrained representations) or dynamic (rebuilt per iteration/batch).
- Hyperparameters such as $k$ (neighborhood size), $\sigma$ (kernel width), $\gamma$ (relative LPL weight), and $\delta$ (CkNN scale) are routinely cross-validated; improper choices can induce graph disconnectivity or wash out locality (Ghojogh et al., 2021).
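The graph-disconnectivity failure mode is cheap to detect before training: a fragmented neighbor graph gives the Laplacian multiple zero eigenvalues and degenerates the LPL objective. A quick diagnostic sketch (illustrative helper name, assuming a symmetrized $k$-NN graph):

```python
# Sketch: count connected components of the k-NN graph to catch a too-small k.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def n_components_knn(X, k):
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    A = np.zeros((n, n))
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at index 0
    for i in range(n):
        A[i, nbrs[i]] = 1.0
    A = np.maximum(A, A.T)                     # symmetrized k-NN graph
    return connected_components(csr_matrix(A), directed=False)[0]

# Two well-separated clusters: tiny k fragments the graph, larger k joins it.
rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((20, 2)),
               rng.standard_normal((20, 2)) + 50])
assert n_components_knn(X, 1) >= 2
assert n_components_knn(X, 25) == 1
```

Running such a check during cross-validation rules out $k$ values for which the LPL term cannot see relations between components at all.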
Empirically, LPL consistently improves the preservation of local geometric structure in the latent space, as assessed by trustworthiness, continuity, MRRE, clustering accuracy, or alignment benchmarks, with pronounced gains in data-scarce or high-complexity regimes (Chen et al., 2019, Ganesan et al., 2020, Chen et al., 2022).
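Several of the evaluation metrics mentioned above are available off the shelf; for instance, scikit-learn implements trustworthiness for scoring how well an embedding preserves local neighborhoods (a sketch, with illustrative data):

```python
# Sketch: scoring locality preservation with scikit-learn's trustworthiness
# metric (1.0 = local neighborhoods perfectly preserved).
import numpy as np
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 10))
Y_proj = X[:, :2]                          # crude projection: keeps some geometry
Y_rand = rng.standard_normal((100, 2))     # unrelated random embedding

t_proj = trustworthiness(X, Y_proj, n_neighbors=5)
t_rand = trustworthiness(X, Y_rand, n_neighbors=5)
assert t_proj > t_rand                     # projection beats random embedding
```

Continuity and MRRE are rank-based complements to trustworthiness and are typically computed from the same pairwise-distance ranks.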
References
- "Laplacian-Based Dimensionality Reduction Including Spectral Clustering, Laplacian Eigenmap, Locality Preserving Projection, Graph Embedding, and Diffusion Map: Tutorial and Survey" (Ghojogh et al., 2021).
- "Generative approach to unsupervised deep local learning" (Chen et al., 2019).
- "Locality Preserving Loss: Neighbors that Live together, Align together" (Ganesan et al., 2020).
- "Local Distance Preserving Auto-encoders using Continuous k-Nearest Neighbours Graphs" (Chen et al., 2022).