Commutativity and Disentanglement from the Manifold Perspective
Abstract: In this paper, we interpret disentanglement as the discovery of local charts of the data manifold and trace how this definition naturally leads to an equivalent condition for disentanglement: commutativity between factors of variation. We study the impact of this manifold framework on two classes of problems: learning matrix exponential operators and compressing data-generating models. In each problem, the manifold perspective yields interesting results about the feasibility of, and fruitful approaches to, their solutions. We also link our manifold framework to two other common disentanglement paradigms: group-theoretic and probabilistic approaches to disentanglement. In each case, we show how these frameworks can be merged with our manifold perspective. Importantly, we recover commutativity as a central property in both alternative frameworks, further highlighting its importance in disentanglement.
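The commutativity condition highlighted in the abstract can be illustrated with a small numerical sketch (not the paper's own code): when two factors of variation are generated by matrix exponentials of commuting generators, applying the transformations in either order gives the same result, whereas non-commuting generators make the order matter. The specific matrices below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Two commuting generators (diagonal matrices always commute):
# the corresponding one-parameter transformations can be applied
# in either order, mirroring independently varying factors.
A = np.diag([1.0, 2.0])
B = np.diag([3.0, -1.0])
order_ab = expm(A) @ expm(B)
order_ba = expm(B) @ expm(A)
print(np.allclose(order_ab, order_ba))  # True, since [A, B] = 0

# Non-commuting generators: the order of application changes the result.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C) @ expm(D), expm(D) @ expm(C)))  # False
```

For commuting generators, exp(A)exp(B) = exp(A + B) = exp(B)exp(A); when [A, B] ≠ 0 this identity fails, which is the sense in which the order of factor transformations becomes entangled.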