Monge-Ampere Regularization for Learning Arbitrary Shapes from Point Clouds
Abstract: As commonly used implicit geometry representations, the signed distance function (SDF) is limited to modeling watertight shapes, while the unsigned distance function (UDF) can represent various surface types. However, the UDF's inherent theoretical shortcoming, namely its non-differentiability at the zero level set, results in sub-optimal reconstruction quality. In this paper, we propose the scaled-squared distance function (S²DF), a novel implicit surface representation for modeling arbitrary surface types. S²DF does not distinguish between inside and outside regions while effectively addressing the non-differentiability of the UDF at the zero level set. We demonstrate that S²DF satisfies a second-order partial differential equation of Monge–Ampere type, allowing us to develop a learning pipeline that leverages a novel Monge–Ampere regularization to learn S²DF directly from raw unoriented point clouds, without supervision from ground-truth S²DF values. Extensive experiments across multiple datasets show that our method significantly outperforms state-of-the-art supervised approaches that require ground-truth surface information for training. The source code is available at https://github.com/chuanxiang-yang/S2DF.
Explain it Like I'm 14
Overview
This paper is about teaching a computer to rebuild 3D surfaces from scattered points in space, called a point cloud. The authors introduce a new way to represent shapes, called the scaled-squared distance function (S²DF), and a matching training rule, called Monge–Ampere regularization. Together, these let a neural network learn clean, detailed 3D surfaces of almost any kind—even open, thin, or broken shapes—directly from raw points, without needing extra labels like distances or normals.
What problem are they trying to solve?
Many modern methods represent a 3D surface by a function that tells you “how far am I from the surface?” Two popular versions are:
- Signed Distance Function (SDF): The distance is negative inside the object and positive outside. This is great for closed, watertight shapes, but it fails for open shapes (like clothing or chairs with holes).
- Unsigned Distance Function (UDF): The distance is always positive. This works for both open and closed shapes, but it has a sharp “kink” exactly on the surface (it’s not smooth there), which makes it hard for neural networks to learn accurately where the surface is.
The authors want a representation that:
- Works for any shape (open or closed),
- Is smooth exactly at the surface (no “kink”),
- Changes enough near the surface so the network can learn the surface precisely.
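The "kink" is easy to see numerically. As a minimal check (not from the paper), take a one-dimensional surface consisting of the single point x = 0, so the UDF is |x|; the one-sided slopes at 0 disagree, which is exactly the non-differentiability issue:

```python
def udf(x):
    # Unsigned distance in 1-D to a "surface" at the origin.
    return abs(x)

h = 1e-6
left = (udf(0.0) - udf(-h)) / h    # slope approaching from the left:  -1.0
right = (udf(h) - udf(0.0)) / h    # slope approaching from the right: +1.0
print(left, right)  # the mismatch at 0 is the UDF's kink
```

A network fitting this function must approximate a sharp corner at exactly the location it cares most about, which is why UDF-based methods struggle near the surface.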
What are their key ideas?
To tackle this, the paper proposes two main ideas:
- S²DF: Instead of using the distance, use K × (distance)²
- Squaring the distance makes the function smooth (no kink) right on the surface.
- It doesn’t care about “inside” or “outside,” so it works for open and closed shapes.
- Multiplying by a big constant K (like 1000) makes small differences near the surface easier for a neural network to notice and learn.
- Monge–Ampere regularization: A math rule that S²DF should follow
- Think of it as a “physics law” telling the function how it must bend.
- In everyday terms: wherever the function is smooth, S²DF must curve at a fixed, predictable rate in the direction pointing toward the surface. This becomes a neat rule the network can be trained to obey.
- Because of this rule, the method doesn’t need ground-truth distances. The only “labels” it needs are that the function should be zero at the input points (since those points are on the surface).
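The two ideas can be combined in a small sketch (plain Python, not the paper's implementation; the helper name `s2df` and the choice K = 1000 are illustrative). For a point cloud, the ground-truth S²DF is just K times the squared distance to the nearest point:

```python
K = 1000.0

def s2df(query, surface_points):
    """K times the squared distance from `query` to the nearest surface
    point. Squaring removes the UDF's kink at the surface, and no
    inside/outside sign is needed, so open shapes work too."""
    d2 = min(sum((q - p) ** 2 for q, p in zip(query, pt))
             for pt in surface_points)
    return K * d2

# Toy "surface": the four corners of an open square patch.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(s2df((0, 0, 0), pts))     # 0.0 -- zero exactly on the surface
print(s2df((0, 0, 0.01), pts))  # ~0.1 -- K amplifies a tiny 0.01 offset
```

Note how the large K turns a barely-visible squared distance (0.0001) into a value the network can't ignore (0.1).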
How does the method work? (Simple explanation)
Here’s the approach, step by step:
- Input: A point cloud—just a big set of 3D points on the object’s surface. No normals, no distances, no meshes.
- Goal: Learn a function f(x) = K × (distance to surface)², which is zero exactly on the surface and grows as you move away from it.
To make that happen, they train a small neural network per shape with a few simple rules:
- Monge–Ampere rule: They add a loss term that nudges the network so that the “curvature” of f (measured by its second derivatives) satisfies the special Monge–Ampere relation S²DF should follow. You can think of this as telling the network, “bend like a correct S²DF.”
- Dirichlet condition: At each input point (which lies on the surface), force f(x) = 0. This pins the surface to the point cloud.
- Neumann condition: At those same surface points, force the slope (gradient) of f to be zero. For S²DF this is true exactly on the surface, so this helps the network learn a sharper, cleaner surface.
- A safety term: Gently push the function's value away from zero at points off the surface, so spurious extra surface sheets ("non-manifold" artifacts) don't appear.
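In one dimension these training targets can be checked directly. Below is a toy check (not the authors' code): the surface is the single point x = 0, so the exact S²DF is K·x², and finite differences confirm the Dirichlet, Neumann, and curvature conditions:

```python
K = 1000.0

def s2df_1d(x):
    # Exact 1-D S2DF for a "surface" at the origin: K * distance^2.
    return K * x * x

def deriv1(g, x, h=1e-5):
    # Central-difference first derivative.
    return (g(x + h) - g(x - h)) / (2 * h)

def deriv2(g, x, h=1e-4):
    # Central-difference second derivative.
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

print(s2df_1d(0.0))          # Dirichlet: value 0 on the surface
print(deriv1(s2df_1d, 0.0))  # Neumann: slope 0 on the surface
# The 1-D shadow of the Monge-Ampere rule: curvature along the distance
# direction is the same constant (2K = 2000) everywhere -- including
# right at the surface, where the plain UDF |x| would have a kink.
print(deriv2(s2df_1d, 0.0), deriv2(s2df_1d, 0.3))
```

The actual loss terms penalize a network's deviation from these conditions at sampled points; the check above just shows that the exact S²DF satisfies all of them simultaneously.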
During training, they also sample nearby points around the cloud (not just on it) so the network learns how the function behaves off the surface. After training, they extract the surface by finding where f(x) = 0, similar to how contour lines make a map from a height function.
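Surface extraction can be sketched in one dimension (an illustrative stand-in, not the paper's contouring step): sample the learned field on a grid and keep the locations where it dips to (numerically) zero, just as marching cubes would trace the f(x) = 0 level in 3D.

```python
K = 1000.0

def field(x):
    # Toy "learned" field: the exact S2DF for a surface made of the two
    # points x = -0.5 and x = 0.5 (an open "shape" in 1-D).
    return K * min((x + 0.5) ** 2, (x - 0.5) ** 2)

# Sample a grid over [-1, 1] and keep cells where the field is
# (numerically) zero -- a 1-D stand-in for a contouring step.
grid = [i / 100 - 1.0 for i in range(201)]   # steps of 0.01
surface = [x for x in grid if field(x) < 1e-6]
print(surface)   # recovers the two surface points: [-0.5, 0.5]
```

Because S²DF is non-negative and touches zero only on the surface, the extraction looks for where the field bottoms out rather than for a sign change.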
Analogy: Imagine the surface is the “sea level” line at height 0. The function f(x) is like a smooth “height map” that’s perfectly flat at sea level, rises as you move away, and follows a strict rule about how it curves. The network learns this height map from the points that lie right at sea level.
What did they find?
Across many tests, the method produced very detailed, accurate surfaces and often beat other state-of-the-art methods, including some that required ground-truth distances:
- It works for open and thin shapes, like clothing (MGN dataset) and complex CAD parts (ShapeNet), where SDF-based methods often fail by “closing” holes or missing fine structures.
- It handles real scanned scenes (3D Scene dataset) and even large, noisy LiDAR scenes (Waymo), recovering sharper details that other methods smooth out.
- It also works well on watertight objects and can capture very fine features, like threads and thin rods, without needing normals.
In short: It reconstructs both open and closed shapes with high detail and does not need ground-truth distance or normal data.
Why is this important?
- Less supervision: You don’t need to precompute distances or normals, which can be hard, slow, or impossible for huge or noisy datasets.
- Handles “arbitrary shapes”: Works whether the surface is open, closed, thin, or complex—useful for clothing, furniture, CAD models, scanned scenes, and more.
- Better detail: The surfaces are sharper and more faithful to the original points, which helps with 3D graphics, AR/VR, robotics mapping, and autonomous driving.
Final thoughts and possible impact
This work shows that a carefully designed shape function (S²DF) plus a principled math rule (Monge–Ampere) can guide a neural network to learn surfaces directly from raw point clouds. That could:
- Make 3D reconstruction easier and more reliable across many industries,
- Reduce the need for expensive labeled data,
- Improve the quality of 3D content in games, movies, and VR,
- Help robots and self-driving cars build better maps of the world.
The authors also note open questions and practical considerations, like choosing the scale K, training a separate network per shape, and theoretical questions about uniqueness. But overall, the approach is a strong step toward flexible, high-quality surface learning from raw data.