
On the Effectiveness of Hybrid Pooling in Mixup-Based Graph Learning for Language Processing

Published 6 Oct 2022 in cs.LG and cs.AI | arXiv:2210.03123v3

Abstract: Graph neural network (GNN)-based graph learning is popular in natural language and programming language processing, particularly for text and source code classification. GNNs are typically built from alternating layers: layers that learn transformations of graph node features, and graph pooling layers that apply pooling operators (e.g., max-pooling) to reduce the number of nodes while preserving the semantic information of the graph. Recently, Manifold-Mixup, a data augmentation technique that produces synthetic graph data by linearly mixing a pair of graphs and their labels, has been widely adopted to enhance GNNs on graph learning tasks. However, the performance of Manifold-Mixup can be strongly affected by the choice of graph pooling operator, and few studies have been dedicated to uncovering this influence. To bridge this gap, we take an early step toward exploring how graph pooling operators affect the performance of Mixup-based graph learning. To that end, we conduct a comprehensive empirical study applying Manifold-Mixup to a formal characterization of graph pooling covering 11 graph pooling operators (9 hybrid, 2 non-hybrid). Experimental results on both natural language datasets (Gossipcop, Politifact) and programming language datasets (JAVA250, Python800) demonstrate that hybrid pooling operators are more effective for Manifold-Mixup than standard max-pooling and the state-of-the-art graph multiset transformer (GMT) pooling, producing more accurate and robust GNN models.
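The two ingredients the abstract combines can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: `hybrid_pool` here concatenates max- and mean-pooled node features as one plausible hybrid variant (the paper studies 9 hybrid operators whose exact definitions are not given in the abstract), and `manifold_mixup` performs the standard linear mixing of two graph-level embeddings and their one-hot labels with a Beta-distributed ratio.

```python
import numpy as np

def hybrid_pool(node_feats):
    """One illustrative hybrid pooling operator: concatenate the
    max-pooled and mean-pooled node features into a single
    graph-level embedding of size 2 * feat_dim."""
    # node_feats: array of shape (num_nodes, feat_dim)
    return np.concatenate([node_feats.max(axis=0), node_feats.mean(axis=0)])

def manifold_mixup(h_a, y_a, h_b, y_b, alpha=1.0, rng=None):
    """Linearly mix two graph embeddings and their one-hot labels.
    The mixing ratio lam is drawn from Beta(alpha, alpha), as in
    standard Mixup-style augmentation."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h_mix = lam * h_a + (1.0 - lam) * h_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return h_mix, y_mix

# Toy usage: two graphs with different node counts but shared feat_dim=4.
rng = np.random.default_rng(0)
g_a = rng.normal(size=(5, 4))   # graph A: 5 nodes
g_b = rng.normal(size=(3, 4))   # graph B: 3 nodes
y_a = np.array([1.0, 0.0])      # one-hot class labels
y_b = np.array([0.0, 1.0])

h_mix, y_mix = manifold_mixup(hybrid_pool(g_a), y_a, hybrid_pool(g_b), y_b, rng=rng)
```

Because pooling maps graphs of different sizes to fixed-length embeddings, the mixing step never has to align node sets, which is why the pooling operator sits directly in Manifold-Mixup's critical path.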
