
Toward Fairness via Maximum Mean Discrepancy Regularization on Logits Space

Published 20 Feb 2024 in cs.CV (arXiv:2402.13061v1)

Abstract: Fairness has become increasingly important in machine learning for high-risk applications such as healthcare and facial recognition. Observing deficiencies in previous methods that constrain the logits space, we propose Logits-MMD, a novel framework that enforces the fairness condition by imposing a Maximum Mean Discrepancy constraint on output logits. Quantitative analysis and experimental results show that our framework has better properties than previous methods and achieves state-of-the-art results on two facial recognition datasets and one animal dataset. Finally, experiments demonstrate that our debiasing approach achieves the fairness condition effectively.
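To make the idea concrete, below is a minimal NumPy sketch of what an MMD penalty between groupwise logits could look like. This is an illustration of the general MMD regularization technique, not the paper's exact formulation: the RBF kernel, its bandwidth, the group/batch sizes, and all variable names here are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between the
    # empirical distributions of X and Y (Gretton et al.'s kernel two-sample
    # statistic). Zero when the two samples are identical.
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

# Hypothetical binary-classification logits for two demographic groups.
rng = np.random.default_rng(0)
logits_a = rng.normal(0.0, 1.0, size=(64, 2))  # group A
logits_b = rng.normal(0.5, 1.0, size=(64, 2))  # group B, shifted distribution

# A debiasing objective would add this penalty (times a weight) to the
# task loss, pushing the groups' logit distributions together.
penalty = mmd2(logits_a, logits_b)
```

In a training loop, `penalty` would be computed per mini-batch with a differentiable framework (e.g. PyTorch) so its gradient flows back into the classifier; the weight on the penalty trades task accuracy against fairness.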

