Research Article
Open access
Published on 23 October 2023

Approaches on improving privacy and communication efficiency of FedFTG

Dongjun Geng 1,*, Dingxiang Wang 2
  • 1 Beijing Jiaotong University
  • 2 Shenyang Institute of Engineering

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/19/20231001

Abstract

The FedFTG plug-in effectively addresses the knowledge forgetting caused by direct server-side model aggregation in federated learning, but it risks compromising client privacy and incurs additional transmission costs. This paper therefore introduces methods for enhancing the privacy and communication efficiency of FedFTG: the Mixing Neural Network Layers (MixNN) method, which defends against various kinds of inference attack; the Practical Secure Aggregation protocol, which uses cryptography to protect the transmitted data; the Federated Dropout model, which focuses on reducing downstream (server-to-client) communication; and the Deep Gradient Compression (DGC) method, which substantially compresses the gradient. Experimental results show that MixNN preserves privacy without affecting model accuracy; Practical Secure Aggregation reduces communication cost for large data vectors while protecting privacy; Federated Dropout cuts communication consumption by up to 28×; and DGC compresses the gradient by up to 600× while maintaining the same accuracy. Applying these methods to FedFTG would therefore greatly improve its privacy and communication efficiency, making distributed training more secure and convenient for users and federated training on mobile devices easier to realize.
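Of the four techniques, the gradient compression behind DGC is concrete enough to sketch. The NumPy snippet below is a minimal, illustrative top-k sparsifier, not the authors' implementation: the function name topk_sparsify and the 1/600 ratio are assumptions for illustration, and DGC's momentum correction, local gradient clipping, and warm-up stages are omitted.

import numpy as np

def topk_sparsify(grad, ratio=1.0 / 600):
    # Keep only the k largest-magnitude gradient entries (DGC-style
    # sparsification). Returns the kept indices/values plus the residual
    # that a sender would accumulate locally and add to the next round.
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # k largest by |g|
    values = flat[idx]
    residual = flat.copy()
    residual[idx] = 0.0                            # unsent mass stays local
    return idx, values, residual.reshape(grad.shape)

# Toy usage: a 60,000-parameter gradient compressed roughly 600x.
rng = np.random.default_rng(0)
g = rng.normal(size=60_000)
idx, vals, res = topk_sparsify(g)
print(f"sent {idx.size} of {g.size} values ({g.size / idx.size:.0f}x fewer)")

In a full DGC pipeline, the (index, value) pairs are what each client uploads, which is where a reduction on the order of the reported 600× in transmitted gradient volume comes from.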

Keywords

FedFTG, privacy, communication efficiency, federated learning

Cite this article

Geng, D.; Wang, D. (2023). Approaches on improving privacy and communication efficiency of FedFTG. Applied and Computational Engineering, 19, 18-27.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 5th International Conference on Computing and Data Science

Conference website: https://2023.confcds.org/
ISBN: 978-1-83558-029-5 (Print) / 978-1-83558-030-1 (Online)
Conference date: 14 July 2023
Editors: Roman Bauer, Marwan Omar, Alan Wang
Series: Applied and Computational Engineering
Volume number: Vol.19
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).