Research Article
Open access
Published on 26 November 2024

Large Language Model Applied in Multi-agent System—A Survey

Kaiwen Dong 1, *
  • 1 Lancaster University Business School

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/109/20241330

Abstract

The application of large language models (LLMs) in single-agent systems operating in complex environments has proven successful, prompting growing interest in their use within multi-agent systems (MAS). Despite the impressive capabilities of LLMs, it remains unclear how they can best be integrated into MAS to empower individual agents, and understanding how to leverage their strengths to enhance agent performance is therefore crucial. This survey provides a comprehensive overview of the application of LLMs in MAS, focusing on their impact on agent cooperation, reasoning, and adaptability. Finally, we discuss future directions and open questions in this evolving field.

Keywords

Multi-agent system, large language model, reinforcement learning.

Cite this article

Dong, K. (2024). Large Language Model Applied in Multi-agent System—A Survey. Applied and Computational Engineering, 109, 9-16.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Machine Learning and Automation

Conference website: https://2024.confmla.org/
ISBN: 978-1-83558-737-9 (Print) / 978-1-83558-738-6 (Online)
Conference date: 21 November 2024
Editor: Mustafa ISTANBULLU
Series: Applied and Computational Engineering
Volume number: Vol. 109
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).