Research Article
Open access
Published on 7 February 2024
Ma, Y. (2024). Parallel programming: Driving the computational surge in AI. Applied and Computational Engineering, 37, 197-201.

Parallel programming: Driving the computational surge in AI

Yumeng Ma 1,*
  • 1 University of Georgia

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/37/20230506

Abstract

With the rapid ascent and ongoing advancement of Artificial Intelligence (AI), the demand for powerful computational capability is escalating, and meeting the computational requirements of emerging AI technologies has become a pivotal topic in contemporary research. This study addresses the theme "Impact of Parallel Programming on AI," with the objective of examining the potential and value that parallel programming technology can bring to the AI sector. To probe the integration of parallel programming in AI, we survey and analyze the prevalent parallel programming techniques and their practical applications within the AI sphere. This approach offers a nuanced understanding of how parallel programming combines with AI and of the distinct advantages that result. The findings reveal a substantial improvement in the efficiency of AI models that leverage parallel programming techniques, especially in scenarios involving large-scale data training and intricate model architectures. Most notably, parallel programming dramatically reduces AI training time, and faster training in turn enables rapid iteration and fine-tuning. In conclusion, parallel programming not only amplifies AI performance but also lays a robust technical foundation for AI's sustained and rapid evolution, opening new prospects and avenues for future exploration and application.
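The data-parallel training pattern the abstract alludes to can be illustrated with a minimal sketch (not taken from the article): the dataset is split into shards, each worker process computes a partial gradient on its shard, and the partial results are combined into the full-dataset gradient, mirroring an MPI-style allreduce. The model, loss, and all function names here are illustrative assumptions, using Python's standard multiprocessing pool in place of Open MPI.

```python
# Illustrative sketch of data-parallel gradient computation: shard the
# data, compute partial gradients in parallel, then average the results.
from multiprocessing import Pool

def partial_gradient(args):
    """Gradient of mean squared error for the toy model y = w*x on one shard."""
    w, shard = args
    # d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def parallel_gradient(w, data, n_workers=4):
    """Split data into shards, evaluate partial gradients in parallel,
    and combine them into the gradient over the whole dataset."""
    shards = [data[i::n_workers] for i in range(n_workers)]
    shards = [s for s in shards if s]  # drop empty shards
    with Pool(len(shards)) as pool:
        grads = pool.map(partial_gradient, [(w, s) for s in shards])
    # Weight each shard's gradient by its size so the combined result
    # equals the single-process gradient over the full dataset.
    total = sum(g * len(s) for g, s in zip(grads, shards))
    return total / len(data)

if __name__ == "__main__":
    data = [(x, 3.0 * x) for x in range(1, 101)]  # true weight is 3.0
    w = 0.0
    for _ in range(50):  # plain gradient descent on the combined gradient
        w -= 1e-4 * parallel_gradient(w, data)
    print(round(w, 2))  # converges toward 3.0
```

Because each shard's gradient is independent of the others, the expensive per-example work scales out across workers; only the small combine step is serial, which is the core of the speedup the paper attributes to parallel training.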

Keywords

Parallel Programming, AI, Open MPI, Big Data Set

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Machine Learning and Automation

Conference website: https://2023.confmla.org/
ISBN: 978-1-83558-299-2 (Print) / 978-1-83558-300-5 (Online)
Conference date: 18 October 2023
Editor: Mustafa İSTANBULLU
Series: Applied and Computational Engineering
Volume number: Vol. 37
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).