Improving Training and Evaluation of Message-Passing based GNNs for top-k recommendation

Published 3 Jul 2024 in cs.IR and cs.LG | (2407.07912v1)

Abstract: Graph Neural Networks (GNNs), especially message-passing-based models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models due to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g., NDCG@k and Recall@k, they are still largely trained with proxy losses, e.g., the BPR loss. In this work we explore the use of ranking loss functions to directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to facilitate end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored for ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation with an inductive user-centric protocol, providing a more accurate reflection of real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures while also training faster, demonstrating the potential of ranking loss functions for improving GNN training in collaborative filtering.
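The abstract's key enabling idea is a smooth approximation of the rank: the hard rank of an item counts how many other items score above it, and replacing that indicator with a sigmoid of score differences makes rank-based metrics differentiable. The sketch below shows this common relaxation (in the style of Smooth-AP-type approaches); the paper's exact formulation and temperature schedule are not given here, so treat the function and its `temperature` parameter as illustrative assumptions.

```python
import numpy as np

def smooth_rank(scores, pos_idx, temperature=0.01):
    """Differentiable approximation of the rank of item `pos_idx`.

    Hard rank: rank(i) = 1 + sum_j 1[s_j > s_i].
    Replacing the indicator 1[.] with a sigmoid of the score difference
    yields a smooth rank, so metrics like NDCG@k and Recall@k built on
    it can be optimized end-to-end. (Illustrative sketch only.)
    """
    diffs = scores - scores[pos_idx]            # s_j - s_i for every item j
    diffs = np.delete(diffs, pos_idx)           # an item does not outrank itself
    sig = 1.0 / (1.0 + np.exp(-diffs / temperature))  # soft 1[s_j > s_i]
    return 1.0 + sig.sum()

scores = np.array([0.9, 0.2, 0.7, 0.1])         # one item (0.9) beats the positive (0.7)
print(round(smooth_rank(scores, pos_idx=2), 2))  # ≈ 2.0
```

With a small temperature the sigmoid approaches the hard indicator, so the smooth rank converges to the true rank while remaining differentiable for gradient-based training.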

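The abstract also proposes a Personalized PageRank-based negative sampling strategy. The paper's exact sampling rule is not reproduced here; the following is a hypothetical sketch of the general idea, where negatives are drawn with probability proportional to their PPR score from the user, so that structurally close (and thus harder) non-interacted items are sampled more often. Both function names and the restart/iteration parameters are assumptions for illustration.

```python
import numpy as np

def personalized_pagerank(adj, source, alpha=0.15, iters=50):
    """Approximate the PPR vector of `source` by power iteration.

    adj: row-normalized transition matrix (n x n); alpha is the
    restart probability back to the source node.
    """
    n = adj.shape[0]
    restart = np.zeros(n)
    restart[source] = 1.0
    p = restart.copy()
    for _ in range(iters):
        p = alpha * restart + (1 - alpha) * adj.T @ p
    return p

def ppr_negatives(adj, user, interacted, k=2, rng=None):
    """Sample k negatives for `user`, biased toward high-PPR nodes.

    Hypothetical sketch: non-interacted nodes with high PPR are
    structurally close to the user, hence informative 'hard' negatives.
    """
    rng = rng or np.random.default_rng(0)
    p = personalized_pagerank(adj, user)
    candidates = np.setdiff1d(np.arange(adj.shape[0]),
                              list(interacted) + [user])
    probs = p[candidates] + 1e-9                # avoid all-zero weights
    probs /= probs.sum()
    return rng.choice(candidates, size=k, replace=False, p=probs)

# Toy 4-node graph: user 0 has interacted with node 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
adj = A / A.sum(axis=1, keepdims=True)          # row-normalize to a transition matrix
negs = ppr_negatives(adj, user=0, interacted={1}, k=2)
print(sorted(negs))  # negatives drawn from the non-interacted nodes {2, 3}
```

A uniform sampler would treat all non-interacted items alike; weighting by PPR concentrates training signal on items the message-passing GNN is most likely to confuse with true positives.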