
FairLRF: Achieving Fairness through Sparse Low Rank Factorization

Published 20 Nov 2025 in cs.LG (arXiv:2511.16549v1)

Abstract: As deep learning (DL) techniques become integral to various applications, ensuring model fairness while maintaining high performance has become increasingly critical, particularly in sensitive fields such as medical diagnosis. Although a variety of bias-mitigation methods have been proposed, many rely on computationally expensive debiasing strategies or suffer substantial drops in model accuracy, which limits their practicality in real-world, resource-constrained settings. To address this issue, we propose a fairness-oriented low rank factorization (LRF) framework that leverages singular value decomposition (SVD) to improve DL model fairness. Unlike traditional SVD, which is mainly used for model compression by decomposing and reducing weight matrices, our work shows that SVD can also serve as an effective tool for fairness enhancement. Specifically, we observed that elements in the unitary matrices obtained from SVD contribute unequally to model bias across groups defined by sensitive attributes. Motivated by this observation, we propose a method, named FairLRF, that selectively removes bias-inducing elements from unitary matrices to reduce group disparities, thus enhancing model fairness. Extensive experiments show that our method outperforms conventional LRF methods as well as state-of-the-art fairness-enhancing techniques. Additionally, an ablation study examines how major hyper-parameters may influence the performance of processed models. To the best of our knowledge, this is the first work utilizing SVD not primarily for compression but for fairness enhancement.

Summary

  • The paper introduces FairLRF, a novel framework leveraging sparse low rank factorization and Hessian-based scoring to mitigate fairness bias in neural models.
  • It employs truncated SVD and group-conditioned metrics to selectively sparsify model weights without retraining, achieving improved equalized odds and opportunity.
  • Experimental results on CelebA and Fitzpatrick-17k show FairLRF improves precision and compression while significantly reducing bias compared to baselines.


Overview and Motivation

FairLRF introduces a novel paradigm for enhancing fairness in deep learning (DL) models via sparse low rank factorization (LRF), leveraging singular value decomposition (SVD) beyond its conventional use for model compression. While prior work focused on pruning and quantization to promote fairness, FairLRF uniquely exploits the underlying structure of neural weight matrices to mitigate bias, specifically addressing disparities arising from sensitive attributes. The framework targets resource-constrained deployments, such as in medical diagnosis, where both efficiency and fairness are critical.

Methodology

Problem Definition and Metrics

FairLRF operates on classification tasks where data points are annotated with both target and sensitive attributes. Fairness enhancement is quantitatively assessed using equalized opportunity (recall rate differences across groups) and equalized odds (aggregate of differences in recall and false positive rates), as formalized by Hardt et al. These metrics facilitate objective comparisons of model outcomes across privileged and unprivileged groups.
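The two metrics above can be computed directly from per-group true-positive and false-positive rates. A minimal sketch with NumPy, for binary labels and a binary sensitive attribute (the function names are illustrative, not from the paper):

```python
import numpy as np

def group_rates(y_true, y_pred, group_mask):
    """True-positive and false-positive rates restricted to one group."""
    y_t, y_p = y_true[group_mask], y_pred[group_mask]
    tpr = np.mean(y_p[y_t == 1])   # recall on the group's positives
    fpr = np.mean(y_p[y_t == 0])   # false-positive rate on its negatives
    return tpr, fpr

def equalized_opportunity(y_true, y_pred, sensitive):
    """Absolute recall gap between the two sensitive groups."""
    tpr0, _ = group_rates(y_true, y_pred, sensitive == 0)
    tpr1, _ = group_rates(y_true, y_pred, sensitive == 1)
    return abs(tpr0 - tpr1)

def equalized_odds(y_true, y_pred, sensitive):
    """Aggregate of recall and false-positive-rate gaps (Hardt et al. style)."""
    tpr0, fpr0 = group_rates(y_true, y_pred, sensitive == 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, sensitive == 1)
    return abs(tpr0 - tpr1) + abs(fpr0 - fpr1)
```

Both metrics are 0 for a perfectly group-balanced classifier; lower is fairer.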

Sparse SVD for Fairness

Traditional SVD decomposes a layer's weight matrix W into three factors, W = U S V^T, allowing rank reduction (truncated SVD) and model compression with negligible impact on predictive performance, as validated on VGG-11 and the Fitzpatrick-17k dataset.

Figure 1: Performance of truncated SVD with different ranks k illustrates negligible precision and fairness degradation until aggressive rank reduction.
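The compression step is standard truncated SVD. A minimal NumPy sketch (not the paper's implementation): replacing a dense m-by-n layer with its rank-k factors cuts the parameter count from m*n to k*(m + n).

```python
import numpy as np

def truncated_svd(W, k):
    """Best rank-k approximation of W (Eckart-Young) via SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # In a deployed layer one would store (U_k * S_k) and V_k^T separately;
    # here we recompose them to inspect the approximation.
    return (U[:, :k] * S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
W_k = truncated_svd(W, k=64)
```

The Frobenius reconstruction error shrinks monotonically as k grows toward full rank, which is why moderate truncation costs little precision.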

Building upon prior work showing redundancy in DL weight matrices, FairLRF introduces a critical observation: the contribution of individual elements in SVD's unitary matrices to prediction bias is group-dependent. The method computes Hessian-based scores to assess the impact of weight removal on fairness for each demographic group, using a tailored Taylor-series approximation of the loss change.

Figure 2: Distributions of Hessian values for privileged and unprivileged groups reveal actionable differences in bias contributions.

Fairness-Oriented Sparse SVD

By measuring weight-wise importance via group-conditioned Hessians, FairLRF constructs fairness-aware scores:

s_i = \frac{1}{2}\theta_i^2 \left(h_{ii}^0 - \beta h_{ii}^1\right)

where h_{ii}^c is the diagonal Hessian entry computed on group c, and β trades off the two groups. The framework selectively sparsifies rows/columns associated with high bias contributions, removing those with minimal impact on the privileged group while maximizing the reduction for the unprivileged group.
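Given per-group diagonal Hessian estimates, the score and the pruning decision are a few lines. A hedged NumPy sketch of the scoring rule above (the helper names and the keep/drop convention are assumptions, not the paper's code; here, low-score elements are the removal candidates):

```python
import numpy as np

def fairness_scores(theta, h_priv, h_unpriv, beta=1.0):
    """s_i = 0.5 * theta_i^2 * (h_ii^0 - beta * h_ii^1).

    theta    : flattened entries of a unitary factor (U or V) from SVD
    h_priv   : diagonal Hessian estimates on the privileged group (c = 0)
    h_unpriv : diagonal Hessian estimates on the unprivileged group (c = 1)
    beta     : trade-off between the two groups' loss sensitivity
    """
    return 0.5 * theta**2 * (h_priv - beta * h_unpriv)

def prune_mask(scores, sparsity):
    """Zero out the lowest-scoring `sparsity` fraction of elements."""
    n_drop = int(len(scores) * sparsity)
    mask = np.ones_like(scores, dtype=bool)
    mask[np.argsort(scores)[:n_drop]] = False
    return mask
```

A large h_ii^1 drives s_i down, so elements whose removal most reduces the unprivileged group's loss sensitivity are dropped first, while elements important to the privileged group (large h_ii^0) are retained.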

Workflow and Implementation

The complete workflow encompasses:

  1. Truncated SVD on a pre-trained network.
  2. Group-specific Hessian-based scoring using inference on sampled data.
  3. Calculation of fairness-aware weight scores.
  4. Guided sparse SVD for targeted compression and fairness gains.

Figure 3: The FairLRF pipeline integrates SVD, group-based scoring, and guided sparsification for fairness-driven model optimization.

Unlike methods such as FairQuantize, FairLRF obviates the need for retraining or fine-tuning, owing to the structural guarantees of the SVD factorization.

Experimental Evaluation

Datasets and Settings

Experiments utilize CelebA (focusing on gender) and Fitzpatrick-17k (focusing on skin tone), covering broad and clinical image classification scenarios. VGG-11, with its dense fully connected layers, serves as the model backbone. FairLRF is compared against truncated SVD, sparse SVD (absolute-weight/activation strategies), FairPrune, and FairQuantize using precision, recall, F1-score, equalized opportunity, equalized odds, and compression rate.

Results on CelebA

FairLRF achieves the lowest equalized odds of all methods (0.029 versus 0.032 for vanilla, 0.034 for truncated SVD, and 0.034 for FairQuantize), with negligible precision loss. Compression rate improvements are on par with other SVD-based methods, demonstrating effectiveness without sacrificing model efficiency.

Results on Fitzpatrick-17k

On the clinically relevant Fitzpatrick-17k benchmark, FairLRF further distinguishes itself, attaining an equalized opportunity of 0.264 and equalized odds of 0.132, outperforming both conventional and fairness-oriented baselines. Notably, the method reaches a 6.2% higher average precision than FairQuantize and superior compression efficacy.

Hyperparameter and Layer Analysis

Ablation studies detail the effects of the sparsity rate (sr), the reduction rate (rr), and the score trade-off parameter β, confirming the robustness of FairLRF to hyperparameter choices (with sr being the most critical) and highlighting the importance of layer selection for practical deployments.

Figure 4: Precision/fairness metrics across hyperparameters reveal FairLRF's stable performance and sensitivity profile.

Implications and Future Directions

Theoretical implications include the extension of SVD's utility from pure compression to active fairness control, suggesting novel directions for matrix factorization in ethical ML. Practically, FairLRF's independence from retraining and compatibility with standard neural architectures positions it for deployment in edge devices and real-world decision systems.

Potential future enhancements include automated hyperparameter optimization, adaptation to multi-class sensitive attributes, and integration into multi-layer or entire-model fairness pipelines. There is also scope for extending the approach to non-linear model architectures and further theoretical analysis of its bias-minimizing effect in adversarial settings.

Conclusion

FairLRF presents an effective approach for fairness enhancement in deep neural networks, utilizing group-aware sparse SVD guided by Hessian-based scoring. Results on both general and medical image classification tasks validate its superiority over existing compression and fairness techniques, balancing accuracy, compression, and equity without additional retraining. The method offers a practical, theoretically grounded tool for deploying fair DL models in sensitive and resource-limited contexts, with significant opportunities for both further empirical and theoretical refinement.
