Research Article
Open access
Published on 10 October 2024

Enhancing graph neural network performance through comprehensive transfer learning strategies

Yuening Li 1,*, Qinwen Ge 1
  • 1 Vanderbilt University

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2977-3903/12/2024120

Abstract

Machine learning typically requires large amounts of labeled data, and the test data may follow a different distribution than the training data. Transfer learning has proven to be an effective way to address this problem in many fields. However, achieving successful transfer on graph datasets remains challenging, as the pre-training datasets must be sufficiently large and carefully selected. This research examines the inherent challenges of data scarcity and the need for robust models in order to increase the versatility and efficiency of graph neural networks (GNNs) across various application domains. By comparing the performance of pre-trained GNNs against GNNs trained from scratch, the study demonstrates the generalization ability of the pre-training strategy and the significance of transfer learning for graph data.
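
To make the comparison described above concrete, the sketch below pre-trains a small GCN on one citation network, fine-tunes its shared hidden layer on another, and trains an identical model from scratch as the baseline. This is a minimal illustration using PyTorch Geometric, not the authors' actual code: the dataset pairing (CiteSeer as source, Cora as target), the three-layer architecture, and the hyperparameters are all assumptions made for the example.

# Minimal sketch (not the authors' setup): compare a GNN fine-tuned from
# pre-trained weights against the same architecture trained from scratch.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.enc = GCNConv(in_dim, hidden)   # dataset-specific input layer
        self.mid = GCNConv(hidden, hidden)   # shared, transferable layer
        self.dec = GCNConv(hidden, out_dim)  # dataset-specific output layer

    def forward(self, x, edge_index):
        x = F.relu(self.enc(x, edge_index))
        x = F.relu(self.mid(x, edge_index))
        return self.dec(x, edge_index)

def train(model, data, epochs=200, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=5e-4)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()

def test(model, data):
    model.eval()
    pred = model(data.x, data.edge_index).argmax(dim=-1)
    mask = data.test_mask
    return (pred[mask] == data.y[mask]).float().mean().item()

source = Planetoid(root="data", name="CiteSeer")[0]  # assumed source graph
target = Planetoid(root="data", name="Cora")[0]      # assumed target graph

# 1) Pre-train on the source citation network (CiteSeer has 6 classes).
pre = GCN(source.num_node_features, 64, 6)
train(pre, source)

# 2) Fine-tune on the target: fresh input/output layers (the feature and
#    label spaces differ across graphs), transferred hidden-layer weights.
ft = GCN(target.num_node_features, 64, 7)  # Cora has 7 classes
ft.mid.load_state_dict(pre.mid.state_dict())
train(ft, target)

# 3) Baseline: identical architecture trained from scratch on the target.
scratch = GCN(target.num_node_features, 64, 7)
train(scratch, target)

print(f"fine-tuned: {test(ft, target):.3f}  scratch: {test(scratch, target):.3f}")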

Keywords

Graph Neural Networks, Transfer Learning, Generalization, GNN Pre-Training, Negative Transfer

Cite this article

Li, Y.; Ge, Q. (2024). Enhancing graph neural network performance through comprehensive transfer learning strategies. Advances in Engineering Innovation, 12, 1-6.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Journal: Advances in Engineering Innovation

Volume number: Vol. 12
ISSN: 2977-3903 (Print) / 2977-3911 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).