Evolutionary Algorithm-Based Soccer Robot Collaboration and Tactical Optimisation

Research Article
Open access


Lulei Liu 1*
  • 1 Department of Computer Science and Electronic Engineering, University of Liverpool, Liverpool, United Kingdom    
  • *corresponding author Lulei.Liu23@student.xjtlu.edu.cn
Published on 22 October 2025 | https://doi.org/10.54254/2755-2721/2025.LD28256
ACE Vol.196
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-80590-451-9
ISBN (Online): 978-1-80590-452-6

Abstract

Soccer robot collaboration and tactical optimisation are key challenges in robotics research. Robots must perform real-time behaviours such as zone defence, small-range positioning coordination and cluster attacks in order to adapt to dynamic game environments, and traditional methods struggle with unpredictable changes. Evolutionary algorithms offer practical solutions. The Genetic Algorithm (GA) simulates natural selection to optimise multi-agent interactions and handle nonlinear problems. Particle Swarm Optimisation (PSO), inspired by bird flocking, enables dynamic role allocation. Ant Colony Optimisation (ACO) mimics ant foraging and uses pheromone mechanisms, allowing quick adaptation to role changes. Hybrid evolutionary strategies combine different algorithms to boost overall efficiency. This article reviews these algorithms in soccer robots. First, it introduces the modelling of collaboration problems and the associated optimisation objectives. Second, it analyses the core mechanisms of GA, PSO and ACO and discusses hybrid strategies; the advantages of these algorithms and techniques are evident from these analyses. It then highlights typical achievements, such as a path planning method based on the S-adaptive genetic algorithm that fuses Bezier curves to generate smooth paths. Finally, it discusses hybrid strategy cases: PSO combined with ACO optimises formations and handles role allocation, while GA fused with ACO improves convergence speed.

Keywords:

Soccer Robots, Evolutionary Algorithms, Tactical Optimisation

Liu, L. (2025). Evolutionary Algorithm-Based Soccer Robot Collaboration and Tactical Optimisation. Applied and Computational Engineering, 196, 116-123.

1. Introduction

Soccer robot collaboration and tactical optimisation are complex and difficult challenges in robotics research. Soccer games demand real-time behaviours from robots, including zone defence, small-range positioning coordination and cluster attacks, and these behaviours are a key indicator of how well a soccer robot can do its job. Robots must adapt to dynamic changes in the game environment: they need to switch between attack and defence, and they schedule offensive commitments based on real-time positions. These tasks require high coordination and excellent adaptability. Traditional control methods are designed around rules, using preset scripts to let the robots execute tactics. Such methods fail against unpredictable changes in games: robots may not respond promptly to opponents' offensive positioning strategies, which can lead to losing possession and conceding more goals, and ultimately the team loses the match.

Evolutionary algorithms provide a solution to this problem. The Genetic Algorithm (GA) is good at handling non-linear optimisation problems involving multiple interacting components among multiple agents [1]. Particle Swarm Optimisation (PSO) adjusts each robot's role dynamically, while Ant Colony Optimisation (ACO) allocates roles through pheromone mechanisms. Hybrid evolutionary strategies combine global evolution with local refinement, helping teams adapt quickly to field changes: team formations are optimised, role allocations improve, tactics are enhanced, and team collaboration efficiency rises [2,3]. Evolutionary algorithms can continuously optimise their strategies in response to changes in the environment, deal with high-dimensional, non-linear search spaces, and generate complex behaviours that are difficult to design with traditional methods [4,5]. Through global search, they discover innovative collaboration patterns, boosting robots' competitiveness in adversarial environments.

The RoboCup project started in 1997, aiming to advance technology through robot soccer games [5]. Its smaller leagues feature fast-moving robots, while humanoid leagues focus on anthropomorphic designs. As technology levels have improved, recent years have seen fully autonomous robot soccer competitions [6].

This article reviews evolutionary algorithms. It focuses on their use in optimising soccer robot collaboration and tactics. First, the article introduces basic concepts and basic information. Next, it sequentially analyses and introduces GA, PSO, ACO and Hybrid strategies. Then, it shares typical application cases. Finally, it discusses the significance of research results and proposes future research directions.

2. Modeling of collaboration problems and optimization objectives

The core of the cooperation and tactical optimisation of soccer robots lies in decomposing the goals into quantifiable subtasks to achieve efficient teamwork. Decomposition and distribution here are similar to the role division of forwards, midfielders, and defenders in a team. The system breaks down the overall goal of "winning the match" into sub-tasks at different levels, such as formation selection, zonal defence, dribbling with the ball, passing cooperation, back-post finishing, and dribbling while moving. Assignments are dynamic: they depend on robot positions and on performance factors such as speed and ball-control accuracy, and each robot focuses on specific tasks [6]. Tactical goal modelling is the process of translating game strategies into mathematical or computational models. For instance, optimising passing routes requires calculating the shortest or safest paths; adjusting formations necessitates building spatial geometric models based on all positions, followed by dynamic adjustments to robot placements; defensive strategies aim to maximise coverage of opponents' attack areas and delay their advances [7]. At its essence, collaboration optimisation is a multi-objective, multi-constrained optimisation problem that requires balancing multiple performance indicators to find the optimal solution [8]. Long-distance running might be relatively safe, but energy consumption serves as a constraint that determines the robots' sustained operational capability, while the win rate acts as the goal, integrating various factors. It is analogous to driving a car: you aim for speed (the objective) while considering fuel efficiency (a constraint) and safety (another constraint). These elements together constitute the foundational framework for collaboration optimisation [9].
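The objective-versus-constraint trade-off described above can be sketched as a simple scoring function. The weights, the energy budget, the safety floor, and the soft-penalty form below are all illustrative assumptions, not taken from any specific soccer-robot system.

```python
# A minimal sketch of the multi-objective trade-off: win probability is
# the objective; energy consumption and safety margin act as constraints,
# handled here as soft penalties. All numbers are illustrative.

def tactic_fitness(win_prob, energy_used, min_safety_margin,
                   energy_budget=100.0, safety_floor=0.5):
    """Score a candidate tactic for an evolutionary search."""
    fitness = win_prob                       # the primary objective
    if energy_used > energy_budget:          # energy constraint violated
        fitness -= 0.01 * (energy_used - energy_budget)
    if min_safety_margin < safety_floor:     # safety constraint violated
        fitness -= 0.5 * (safety_floor - min_safety_margin)
    return fitness
```

A tactic that stays within budget keeps its raw win probability as its score; violations only reduce the score rather than rejecting the solution outright, which keeps the search space smooth for evolutionary operators.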

3. Applications of evolutionary algorithms in collaboration optimisation

3.1. Genetic algorithm

The core idea of the Genetic Algorithm is to encode problem solutions as "chromosomes." Its workflow is a cyclic iterative process that simulates natural selection: starting from an initial population, it optimises the population through continuous "breeding" and "elimination" until the termination conditions are met [10]. The Genetic Algorithm excels at solving complex, nonlinear optimisation problems that involve multiple interactive components among various agents and are hard to address with traditional mathematical methods [1]. Initially, several possible solutions are generated randomly. Each solution is treated as an individual, encoded as a chromosome; standard encodings include binary strings, real-number vectors, and symbolic sequences. The fitness value of each individual is calculated based on the problem's defined fitness function. Based on these fitness values, outstanding individuals are selected from the current population as "parents" to produce offspring. The selected parent individuals exchange parts of their genes, similar to parameters or features in solutions, to generate new individuals (offspring). To increase population diversity and avoid local optima, some genes are randomly altered through mutation. This iterative process repeats until a satisfactory individual solution is found [11]. In the field of soccer robots, the Genetic Algorithm is applied to path planning, team collaboration, and tactical optimisation. For example, Chen and Gao proposed a path planning method based on the S-adaptive genetic algorithm [12]. In one study, Luke explored generating team strategies in soccer simulation games using a Genetic Algorithm [13]. Although the Genetic Algorithm is widely used on its own in this domain, combining it with other AI technologies, such as hybrid models with deep reinforcement learning, can enhance real-time decision-making capabilities; this represents a future development trend.
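The encode-evaluate-select-crossover-mutate cycle above can be sketched as a generic GA loop. The bit-string encoding, binary tournament selection, and single-point crossover below are common textbook choices, stand-ins for a real tactic encoding rather than the method of any cited study.

```python
import random

# Generic GA loop: initialise, evaluate fitness, select parents,
# crossover, mutate, repeat. The bit-string chromosome is a placeholder.

def genetic_algorithm(fitness, n_bits=16, pop_size=30,
                      pc=0.8, pm=0.01, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def pick():
        # binary tournament selection: the fitter of two random individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < pc:              # single-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # bit-flip mutation with probability pm per gene
                children.append([1 - g if rng.random() < pm else g
                                 for g in child])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)  # track best-so-far
    return best
```

With the classic "one-max" fitness (`fitness=sum`), the loop quickly evolves an all-ones chromosome, illustrating how selection pressure and mutation interact.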

3.2. Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)

These algorithms are well-suited for soccer robots, as they facilitate dynamic formation evolution and role allocation. Below, the paper presents the basic information and advantages of each algorithm.

3.2.1. PSO

PSO is a swarm intelligence optimisation algorithm inspired by bird flock behaviour. It is used to solve continuous nonlinear optimisation problems. The algorithm simulates particles—representing candidate solutions—as they "fly" through the search space. Each particle adjusts its velocity and position based on its own historical best position (pbest) and the group's global best position (gbest). This process optimises the objective function. First, the positions and velocities of the particle swarm are initialised. Then, the fitness of each particle is calculated, and pbest and gbest are updated. The speed update formula is:

v_{i,d} = w v_{i,d} + c_1 r_1 (p_{i,d} - x_{i,d}) + c_2 r_2 (g_d - x_{i,d})    (1)

where w is the inertia weight, c_1 and c_2 are the cognitive and social acceleration coefficients, r_1 and r_2 are random numbers in [0, 1], p_{i,d} is the particle's personal best, and g_d is the global best.

The updated velocities, which incorporate the pbest and gbest terms together with the random coefficients, are added to the particle positions, and the new positions are recorded. The process repeats until termination conditions are met, such as reaching a maximum number of iterations or a convergence threshold. By sharing the global best position, particles exchange information, balancing exploration and exploitation to avoid local optima [14]. In soccer robots, PSO is applied for dynamic role allocation. Each robot optimises its role in the team by learning from its own historical best position (pbest) and the best positions of neighbours (gbest). This approach allows the team to adjust strategies based on real-time changes in the match state. It supports role switching according to the current game situation, thereby improving team adaptability. Due to its distributed nature, PSO adapts well to robot teams of varying sizes [15].
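The velocity update of Eq. (1) and the pbest/gbest bookkeeping translate directly into code. The sketch below minimises a generic objective; the sphere function used in the usage note is a stand-in for a real role-allocation fitness, and the parameter values are common defaults, not taken from the cited work.

```python
import random

# Minimal PSO implementing the Eq. (1) velocity update:
# inertia term + cognitive pull toward pbest + social pull toward gbest.

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal bests
    gbest = min(pbest, key=objective)[:]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest
```

For example, `pso(lambda x: sum(v * v for v in x))` drives the swarm toward the origin of the 2-D sphere function, showing how the shared gbest pulls all particles toward the best-known region.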

3.2.2. ACO

ACO is an optimisation algorithm that simulates ant foraging behaviour. It is designed for solving combinatorial optimisation problems. The algorithm models ants communicating indirectly through pheromones in the environment to construct optimal paths. The probability of an ant choosing the next node is generated based on pheromone levels and other parameters. The probability formula is:

p^k_{xy} = ( τ_{xy}^α η_{xy}^β ) / ( Σ_{z∈N} τ_{xz}^α η_{xz}^β )    (2)

where τ_{xy} is the pheromone level on edge (x, y), η_{xy} is the heuristic information (e.g. the inverse of the distance), α and β are the weight parameters, and N is the feasible neighbourhood. Pheromones are updated via evaporation and deposition, with positive and negative feedback guiding the ants' progress.

The evaporation formula is:

τ_{xy} ← (1 - ρ) τ_{xy}    (3)

where ρ is the evaporation rate.

First, pheromone levels are initialised. Then, ants build solutions, followed by local or global pheromone updates. The cycle repeats until termination conditions are satisfied [16]. In soccer robots, ACO is used for decentralised role allocation. Through pheromone trails and local fitness indicators, robots dynamically assign roles to ensure the team adapts quickly to on-field changes. Role reallocation is faster compared to PSO, with quicker responses to shifts in ball possession. The algorithm demonstrates higher stability and resistance to interference.
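The two ACO mechanisms above, probabilistic node choice per Eq. (2) and evaporation per Eq. (3), can be sketched as follows. The pheromone and heuristic tables are toy assumptions for illustration, not data from any soccer-robot system.

```python
import random

# Eq. (2): choose the next node with probability proportional to
# tau^alpha * eta^beta over the feasible neighbourhood (roulette wheel).

def choose_next(current, unvisited, tau, eta,
                alpha=1.0, beta=2.0, rng=random):
    weights = [tau[(current, z)] ** alpha * eta[(current, z)] ** beta
               for z in unvisited]
    r = rng.random() * sum(weights)
    for z, wgt in zip(unvisited, weights):
        r -= wgt
        if r <= 0:
            return z
    return unvisited[-1]   # numerical fallback

# Eq. (3): tau_xy <- (1 - rho) * tau_xy on every edge.

def evaporate(tau, rho=0.1):
    for edge in tau:
        tau[edge] *= (1.0 - rho)
```

With equal pheromone but a much stronger heuristic on one edge, the ant picks that edge almost every time; evaporation then gradually weakens unreinforced trails, which is the negative-feedback half of the pheromone mechanism.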

3.3. Hybrid evolutionary strategies

Hybrid Evolutionary Strategy is a common optimisation framework in the field of evolutionary computation. It combines traditional Evolutionary Strategy (ES) with other optimisation techniques or algorithms. The purpose is to improve search efficiency, convergence speed, and global optimisation capabilities. Common hybrid forms include integration with the Genetic Algorithm to create multi-evolutionary frameworks, which enhance population diversity. Incorporating differential evolution's mutation strategies achieves hybrid mutation, balancing exploration and exploitation. Combinations with PSO or adaptive covariance matrix evolution strategies are used for fuzzy system design or multi-objective problems.

4. Typical research achievements

Path Planning Method Based on S-Adaptive Genetic Algorithm.

S-Adaptive Genetic Algorithm (S-AGA) is a variant of the genetic algorithm. It is based on a sigmoid curve, known as an S-shaped function, and dynamically adjusts crossover probability (Pc) and mutation probability (Pm). The adjustment aims to accelerate convergence, avoid local optima, and optimise robot path length and safety. The method's core involves adaptively modifying parameters based on population fitness and generation count. For instance, Pc starts at 0.8 and gradually decreases to protect high-quality individuals, while Pm begins at 0.01 and increases to boost diversity. This adaptive mechanism makes the algorithm particularly suitable for path planning in complex environments, including those with dynamic obstacles in indoor or industrial settings [17]. A typical achievement is the fusion of S-AGA with continuous Bezier curves to generate smooth, collision-free paths. The parametric equation of Bezier curves is defined by parameter t (where t is in [0,1]) and control points P_i.

The mathematical formula is as follows:

B(t) = Σ_{i=0}^{n} C(n, i) (1 - t)^{n-i} t^i P_i    (4)

This is a parametric curve used in computer graphics and numerical analysis for modelling smooth curves. It is defined by a set of control points that determine the curve's shape, although the curve does not necessarily pass through any control points other than the first and last. The algorithm employs binary-encoded chromosomes to represent Bezier control points. The fitness function incorporates path length and adaptive penalty factors for safety distance, based on the minimum distance to obstacles. In a 20×20 grid environment with obstacle coverage of 15%-20.75%, simulation results indicate a minimum path length of 29.9416 units, approximately 1.27 units shorter than standard GA and 5.51 units shorter than ACO. Additionally, it reduces sharp turns and mode switches (such as stop-rotate-restart), resulting in smoother paths and greater safety distances [18]. Current variants that fuse Bezier curves are applied in simple warehouse picking systems, where they generate smooth paths, reduce mode switches, increase safety distances, and lower carbon emissions. Real-world warehouse tests have demonstrated effectiveness in dynamic obstacle avoidance, and in the future this approach holds promise for assisting soccer robots. These research achievements and application cases underscore the potential of S-AGA in enhancing robot autonomy and efficiency. Looking ahead, further integration with machine vision or reinforcement learning could handle more dynamic environments, better meeting the needs of soccer robots.
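Evaluating the Bernstein form of Eq. (4) is straightforward once the GA has chosen control points. The sketch below covers only the curve-evaluation step, with illustrative 2-D control points; the GA encoding and fitness evaluation of the cited study are not reproduced.

```python
from math import comb

# Eq. (4): B(t) = sum_i C(n, i) * (1 - t)^(n - i) * t^i * P_i, t in [0, 1].
# Control points are 2-D (x, y) tuples, e.g. decoded from GA chromosomes.

def bezier_point(control_points, t):
    n = len(control_points) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px
            for i, (px, _) in enumerate(control_points))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py
            for i, (_, py) in enumerate(control_points))
    return x, y
```

Sampling t over [0, 1] yields a smooth path whose endpoints coincide with the first and last control points, which is why the planner only needs to evolve the interior control points to bend the path around obstacles.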

5. Typical cases of hybrid evolutionary strategies

5.1. PSO and ACO combined strategy

Soccer robots face challenges on the field, including dynamic environments, multi-robot collaboration, and real-time decision-making. They need to consider global strategies while adapting to sudden changes, such as obstacle avoidance. Hybrid evolutionary strategies can integrate global optimisation algorithms like PSO with local search algorithms like ACO. This integration not only achieves higher efficiency but also provides a more comprehensive refinement approach: the hybrid method maintains global optima while adapting rapidly to local variations. Global evolutionary algorithms explore broad strategy spaces to find near-global optimal solutions, and local search algorithms then refine these solutions to improve practical execution performance. PSO excels due to its distributed characteristics and adaptability, while ACO performs better in terms of scoring rate, ball possession rate, and anti-interference capability. Combining the two reduces the drawbacks of single algorithms, such as slow convergence in evolutionary algorithms or the insufficient global perspective of local search. In soccer robots, hybrid evolutionary strategies are used to optimise team formations and role allocations, further elevating the optimisation efficiency and adaptability of team strategies in dynamic and complex match environments.

5.2. Fusion of genetic algorithm and ant colony optimisation

A practical way to adapt is to blend Genetic Algorithms (GA) for broad exploration with Ant Colony Optimisation (ACO) for fine-tuning, with function parameters balanced dynamically according to environmental complexity. The strategy shows faster convergence in experiments on maps with high obstacle density, and path lengths are superior to those from a single GA or ACO, with higher success rates. During iteration, pheromone trails and mutation operations are optimised together, highlighting how these hybrids combine for better results [19,20]. The approach is not flawless, however: its heavy computation could bog down real-time decision-making in live games.

6. Conclusion

This paper has explored how evolutionary algorithms can deal with the difficulties of robotic soccer, especially in situations where the environment changes quickly and the players must cooperate in real time. Traditional planning methods are often too rigid or slow, while evolutionary algorithms are more flexible. Genetic Algorithms (GA) help solve optimisation problems that involve many variables. Particle Swarm Optimisation (PSO) allows roles to be switched more smoothly since particles share information about their best solutions (pbest and gbest). Ant Colony Optimisation (ACO) organises tasks using pheromone trails, which are a kind of positive feedback, and in many cases, it reacts faster to changes in possession compared with PSO. Hybrid methods are also important because they can combine global searching ability with local refinements. This usually improves both diversity of solutions and convergence speed, although it is not always guaranteed to work better in every situation.

Another critical point is collaborative modelling of the problem. The main objective, winning the game, can be divided into smaller sub-tasks such as path planning, adjusting the geometry of the team, and making trade-offs between multiple objectives. These tasks can then be written into mathematical models. PSO simulates the flights of particles and makes use of the best local and global positions to guide the process, which enables distributed role switching. ACO, in contrast, depends on pheromone updating and is more suitable for decentralised task allocation. Hybrid approaches, including those combined with Evolution Strategies (ES), help balance exploration and exploitation, although sometimes the parameter tuning can be complex.

In the future, it is expected that evolutionary algorithms will be combined with machine learning. Such integration might open new directions for generating strategies and supporting real-time decision-making in robotic soccer, though this still needs more research.

Typical research results include S-Adaptive Genetic Algorithms fusing Bezier curves for generating smooth collision-free paths that outperform standard GA and ACO in grid environments, reducing path lengths and inflexion points, with potential applications in football robots. Hybrid cases such as PSO-ACO combine to optimise formations and tactics, and GA-ACO fusion accelerates convergence and improves success rates in complex environments. These results highlight the potential of evolutionary algorithms to enhance autonomy and efficiency.

However, challenges remain: algorithm convergence may be too slow in hyper-real-time environments and tends toward local optima; high computational resource requirements hinder deployment on small robots; deep integration with deep reinforcement learning or machine vision is still lacking, yet it is needed to fully handle uncertainties such as changes in adversary strategies or sensor noise; and multi-objective trade-offs need to be modelled more finely to balance win rates, energy consumption and safety.

Future work should focus on hybrid model innovation, such as combining evolutionary algorithms with deep learning to improve real-time decision-making and innovative behaviour generation; developing adaptive parameter adjustment mechanisms to cope with variable environments; exploring distributed computing frameworks to reduce resource consumption; optimising algorithm robustness through validation in real RoboCup matches; and integrating multimodal perception to achieve more intelligent collaboration. These directions will further enhance the application value of evolutionary algorithms in adversarial, multi-agent systems.


References

[1]. Ferrauto, T., Parisi, D., Di Stefano, G., Baldassarre, G. (2013) Different genetic algorithms and the evolution of specialisation: A study with groups of simulated neural robots. Artificial Life, 19(2), 221–253.

[2]. Nadiri, F., Rad, A. B. (2025) Swarm intelligence for collaborative play in humanoid soccer teams. Sensors, 25(11), 3496.

[3]. Kelner, V., Capitanescu, F., Léonard, O., Wehenkel, L. (2008) A hybrid optimization technique coupling an evolutionary and a local search algorithm. Journal of Computational and Applied Mathematics, 215(2), 448–456.

[4]. Pavlowsky, A., Alarcon, J. M. (2012) Interaction between long-term potentiation and depression in CA1 synapses: temporal constrains, functional compartmentalization and protein synthesis. PLoS One, 7(1), e29865.

[5]. Zhan, Z. H., Shi, L., Tan, K. C., et al. (2022) A survey on evolutionary computation for complex continuous optimization. Artificial Intelligence Review, 55(1), 59–110.

[6]. Kitano, H., Asada, M., Kuniyoshi, Y., et al. (1997) Robocup: The robot world cup initiative. In Proceedings of the First International Conference on Autonomous Agents, 340–347. ACM.

[7]. Doncieux, S., Bredeche, N., Mouret, J. B., et al. (2015) Evolutionary robotics: what, why, and where to. Frontiers in Robotics and AI, 2, 4.

[8]. Labiosa, A., Wang, Z., Agarwal, S., et al. (2024) Reinforcement learning within the classical robotics stack: A case study in robot soccer. arXiv preprint arXiv: 2412.09417.

[9]. Haarnoja, T., Moran, B., Lever, G., et al. (2024) Learning agile soccer skills for a bipedal robot with deep reinforcement learning. Science Robotics, 9(89), eadi8022.

[10]. Kim, T., Vecchietti, L. F., Choi, K., et al. (2021) Two-stage training algorithm for AI robot soccer. PeerJ Computer Science, 7, e718.

[11]. Reis, L. P. (2023) Coordination and machine learning in multi-robot systems: Applications in robotic soccer. arXiv preprint arXiv: 2312.16273.

[12]. Li, M., Kou, J., Lin, D., et al. (2002) Basic theory and application of genetic algorithm. Ke Xue Chu Ban She, Beijing.

[13]. Holland, J. H. (1992) Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge.

[14]. Chen, X., Gao, P. (2020) Path planning and control of soccer robot based on genetic algorithm. Journal of Ambient Intelligence and Humanized Computing, 11(12), 6177–6186.

[15]. Sudholt, D., Witt, C. (2008) Runtime analysis of binary PSO. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, 135–142. ACM.

[16]. Kennedy, J., Eberhart, R. (1995) Particle swarm optimization. In Proceedings of ICNN’95 - International Conference on Neural Networks, 4, 1942–1948. IEEE.

[17]. Freitas, D., Lopes, L. G., Morgado-Dias, F. (2020) Particle swarm optimisation: a historical review up to the current developments. Entropy, 22(3), 362.

[18]. Dorigo, M., Stützle, T. (2018) Ant colony optimization: overview and recent advances. Handbook of Metaheuristics, 311–351. Springer.

[19]. Ma, J., Liu, Y., Zang, S., et al. (2020) Robot path planning based on genetic algorithm fused with continuous Bezier optimization. Computational Intelligence and Neuroscience, 2020(1), 9813040.

[20]. Jiqing, C., Shaorong, X., Hengyu, L., et al. (2015) Robot path planning based on adaptive integrating of genetic and ant colony algorithm. International Journal of Smart Home, 11(3), 833.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of CONF-MLA 2025 Symposium: Intelligent Systems and Automation: AI Models, IoT, and Robotic Algorithms

ISBN:978-1-80590-451-9(Print) / 978-1-80590-452-6(Online)
Editor:Hisham AbouGrad
Conference date: 12 November 2025
Series: Applied and Computational Engineering
Volume number: Vol.196
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
