
Graph Learning with Distributional Edge Layouts

Published 26 Feb 2024 in cs.LG and cs.AI | arXiv:2402.16402v1

Abstract: Graph Neural Networks (GNNs) learn from graph-structured data by passing local messages between neighboring nodes along edges arranged in certain topological layouts. In modern GNNs, these layouts are typically computed deterministically (e.g., attention-based GNNs) or sampled locally (e.g., GraphSAGE) under heuristic assumptions. In this paper, we pose, for the first time, that these layouts can instead be sampled globally via Langevin dynamics following a Boltzmann distribution equipped with an explicit physical energy, yielding layouts that are more physically plausible. We argue that such a collection of sampled/optimized layouts captures the broad energy distribution and adds expressivity beyond the WL test, thereby easing downstream tasks. Accordingly, we propose Distributional Edge Layouts (DELs) as a complement to a variety of GNNs. DEL is a pre-processing strategy independent of the subsequent GNN variant, and is therefore highly flexible. Experimental results demonstrate that DELs consistently and substantially improve a series of GNN baselines, achieving state-of-the-art performance on multiple datasets.
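The abstract's core idea, sampling layouts from a Boltzmann distribution exp(-E/T) over node positions via Langevin dynamics, can be sketched as follows. This is a minimal illustration under assumed choices (a simple harmonic spring energy, unadjusted Langevin updates, and illustrative hyperparameters); the paper's actual energy function and sampling details may differ, and all function names here are hypothetical.

```python
import numpy as np

def spring_energy_grad(pos, edges, k=1.0, rest_len=1.0):
    """Gradient of a simple spring energy over edges:
    E = 0.5 * k * sum_{(i,j)} (||x_i - x_j|| - rest_len)^2."""
    grad = np.zeros_like(pos)
    for i, j in edges:
        d = pos[i] - pos[j]
        r = np.linalg.norm(d) + 1e-9  # avoid division by zero
        g = k * (r - rest_len) * d / r
        grad[i] += g
        grad[j] -= g
    return grad

def sample_layouts(edges, n_nodes, n_samples=4, steps=500,
                   step_size=1e-2, temperature=0.1, seed=0):
    """Draw layout samples from exp(-E / T) with unadjusted Langevin dynamics:
    x <- x - eta * grad E(x) + sqrt(2 * eta * T) * noise."""
    rng = np.random.default_rng(seed)
    layouts = []
    for _ in range(n_samples):
        pos = rng.normal(size=(n_nodes, 2))  # random initial 2D layout
        for _ in range(steps):
            noise = rng.normal(size=pos.shape)
            pos = (pos - step_size * spring_energy_grad(pos, edges)
                   + np.sqrt(2.0 * step_size * temperature) * noise)
        layouts.append(pos)
    return layouts

# A 4-cycle graph; the resulting collection of layouts could serve as
# extra (distributional) edge features for a downstream GNN.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
layouts = sample_layouts(edges, n_nodes=4)
```

At temperature 0 this reduces to gradient descent on the layout energy (i.e., a classic force-directed layout); the noise term is what turns a single deterministic layout into a distribution of layouts.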

