Volume 16 Issue 5
Published in May 2025
With the rapid development of AI technology, the field of rehabilitation therapy has ushered in unprecedented opportunities for innovation. This paper provides a comprehensive review of the current applications of AI in rehabilitation therapy and the challenges it faces, while also exploring its future development trends. The research finds that the application of AI technology in rehabilitation therapy has significantly improved rehabilitation efficiency and patients’ quality of life. AI can develop personalized rehabilitation plans based on individual patient conditions, achieve precise assessment and training through smart assistive devices, and break through the limitations of time and space with remote rehabilitation services. However, the application of AI in rehabilitation therapy still faces several challenges, including high technology costs, data privacy concerns, and limited public acceptance. Looking forward, as technologies such as 5G, the Internet of Things, and brain-machine interfaces become deeply integrated with AI, rehabilitation medicine is expected to move toward a new stage of greater precision and intelligence.
The Poyang Lake Hydraulic Project is a major water conservancy initiative proposed by Jiangxi Province to address several challenges, including hydrological imbalance during the lake’s dry season, water scarcity, and reduced navigability. In recent years, due to the weakening backwater effect from the Yangtze River mainstream, the onset of the dry season in Poyang Lake has advanced, with significantly lower water levels, resulting in ecological degradation, intensified water supply conflicts, and navigation obstructions in the lake region. To regulate the relationship between the river and the lake, enhance water resource carrying capacity during the dry season, and ensure integrated benefits such as water supply, irrigation, and navigation, the Poyang Lake Hydraulic Project was constructed. However, the reservoir’s water storage and regulation operations directly influence the outflow, thereby affecting local hydrodynamic conditions at the confluence section of the Yangtze River mainstream. Using a two-dimensional hydrodynamic model, this paper analyzes the impact of the project under unfavorable operating scenarios by examining changes in distributary flow ratios, mainstream belt shifts, and localized flow field variations in relation to the dispatching strategies of the hydraulic project. Model test results indicate that following the construction and operation of the project, the distributary flow ratio at the Zhangjiazhou reach of the river undergoes only minor changes, with an adjustment range within 1.3%, and there is no significant impact on mainstream position or flow velocity. Consequently, the project does not pose any substantial adverse effect on navigability. These findings provide technical support for further research into the impacts on waterways and navigation conditions.
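As a minimal illustration of the distributary flow ratio metric cited above, the ratio can be taken as the discharge carried by one branch divided by the total discharge at the bifurcation; the discharge values below are placeholders for demonstration, not outputs of the paper's hydrodynamic model.

```python
def distributary_ratio(branch_discharge_m3s: float, total_discharge_m3s: float) -> float:
    """Share of the total discharge carried by one branch, as a percentage."""
    return 100.0 * branch_discharge_m3s / total_discharge_m3s

# Placeholder discharges (m^3/s) for a pre- and post-regulation scenario -- illustrative only.
before = distributary_ratio(branch_discharge_m3s=4200.0, total_discharge_m3s=10000.0)
after = distributary_ratio(branch_discharge_m3s=4310.0, total_discharge_m3s=10000.0)
print(f"ratio change: {after - before:.2f} percentage points")  # 1.10, within the ~1.3% band reported
```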
With the rapid advancement of voice recognition and artificial intelligence technologies, voice-controlled robots are increasingly utilized in human-computer interaction. However, the issue of response delay during the voice control process has become a critical bottleneck affecting user experience and system performance. This paper systematically analyzes the causes and characteristics of response delays in voice-controlled robots from two key dimensions: the speed of sound propagation in air and the internal circuit processing time of the robot. Through an examination of representative cases (such as smart speakers, surgical robots, and industrial control systems), this study reveals the relative contributions of physical and technological factors to total response time across different application scenarios and evaluates their impact on user perception and interaction efficiency. The findings indicate that in short-range scenarios, speech recognition processing is the primary bottleneck, while in long-range and latency-critical scenarios, sound propagation delay also becomes a significant factor.
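A back-of-the-envelope decomposition of the total response delay described above, assuming the delay is the acoustic propagation time (distance divided by roughly 343 m/s in air) plus the internal recognition and actuation time; the numbers are illustrative, not measurements from the paper.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def total_response_delay(distance_m: float, processing_s: float) -> float:
    """Acoustic propagation time plus internal processing time, in seconds."""
    return distance_m / SPEED_OF_SOUND_M_S + processing_s

# Short-range smart speaker: propagation is negligible next to recognition latency.
print(f"{total_response_delay(distance_m=2.0, processing_s=0.300):.3f} s")   # ~0.306 s
# Long-range industrial scenario: propagation becomes a noticeable share of the latency budget.
print(f"{total_response_delay(distance_m=30.0, processing_s=0.300):.3f} s")  # ~0.387 s
```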
With the continuous improvement of global environmental protection requirements, public attention to new energy vehicles is also increasing. As an important alternative to traditional fuel vehicles, new energy vehicles rely on the power battery system as one of their core technologies. Testing and evaluating the power battery system is crucial to ensuring the safety, reliability, and performance of new energy vehicles. The objective of this study is to construct a testing system for evaluating the power battery system of new energy vehicles. First, key indicators for testing power battery systems were determined through a literature review, including battery capacity, charge and discharge performance, cycle life, and temperature characteristics, and corresponding testing methods and standards were established for these indicators. Cycle life testing was also conducted to simulate the long-term stability of the battery system in actual use. Experiments prove that the proposed testing and evaluation system is feasible and effective, and the results show that testing based on this evaluation system can accurately evaluate the performance of the power battery system and provide a reference for the research and production of new energy vehicles.
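A minimal sketch of how such an evaluation might aggregate the indicators listed above (capacity, charge/discharge performance, cycle life, temperature behaviour); the field names, thresholds, and pass criteria are illustrative assumptions, not the standards defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class BatteryTestResult:
    measured_capacity_ah: float   # discharge capacity at rated current
    rated_capacity_ah: float
    round_trip_efficiency: float  # discharged energy / charged energy
    cycles_to_80_percent: int     # cycles until capacity falls to 80% of rated
    max_cell_temp_c: float        # peak cell temperature during the test

def evaluate(result: BatteryTestResult) -> dict:
    """Check each indicator against illustrative pass thresholds."""
    return {
        "capacity_ok": result.measured_capacity_ah >= 0.95 * result.rated_capacity_ah,
        "efficiency_ok": result.round_trip_efficiency >= 0.90,
        "cycle_life_ok": result.cycles_to_80_percent >= 1000,
        "temperature_ok": result.max_cell_temp_c <= 55.0,
    }

print(evaluate(BatteryTestResult(98.0, 100.0, 0.94, 1500, 48.0)))
```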
Against the backdrop of the “carbon peak” and “carbon neutrality” goals, environmental pollution problems are becoming increasingly serious. Compared with traditional fuel vehicles, new energy vehicles are green, pollution-free, and do not consume large amounts of natural resources, advantages that have driven rapid growth in their ownership. The main power source of new energy vehicles is the power battery, so the growth of new energy vehicles depends directly on progress in battery technology. However, because power battery technology is still imperfect, safety accidents in new energy vehicles caused by power batteries are becoming more prominent. Anomaly detection of the power battery, combined with advance safety warning, can identify potential safety problems of the power battery as early as possible.
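A minimal sketch of the kind of early-warning check referred to above, assuming anomalies are flagged when a cell voltage deviates from the pack mean by more than a chosen number of standard deviations; the threshold and pack data are illustrative, not taken from the paper.

```python
import statistics

def flag_abnormal_cells(cell_voltages: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of cells whose voltage deviates strongly from the pack mean."""
    mean = statistics.mean(cell_voltages)
    std = statistics.pstdev(cell_voltages)
    if std == 0:
        return []
    return [i for i, v in enumerate(cell_voltages) if abs(v - mean) / std > z_threshold]

# Illustrative pack snapshot: cell 3 is drifting low and would trigger a warning.
print(flag_abnormal_cells([3.71, 3.70, 3.72, 3.35, 3.71, 3.70], z_threshold=2.0))  # [3]
```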
This study focuses on the process validation and performance evaluation of 3D-printed continuous fiber-reinforced thermoplastic composite corrugated sandwich structures. It explores their mechanical behavior under different structural designs and manufacturing process characteristics. Through real-time impregnation technology, continuous fibers were combined with thermoplastic resin to print both arc-shaped and trapezoidal corrugated sandwich structures, and their compressive strength was evaluated experimentally. The results show that the trapezoidal sandwich structure exhibits higher compressive strength under compression loads, with a peak compressive strength of 9.11 MPa, whereas the arc-shaped sandwich structure reaches 4.76 MPa. Under the same conditions, the trapezoidal structure demonstrates superior load distribution capability and higher stiffness with lower deformation. The experiment also investigated the impact of process parameters and fiber-resin bonding quality on structural performance in real-time impregnation, indicating that this process can effectively optimize material properties and offers good manufacturing flexibility.
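For reference, compressive strength values such as the 9.11 MPa and 4.76 MPa reported above are typically obtained as the peak compressive load divided by the loaded cross-sectional area; a minimal sketch with placeholder specimen dimensions (not the actual test data from the paper):

```python
def compressive_strength_mpa(peak_load_n: float, width_mm: float, length_mm: float) -> float:
    """Peak load divided by loaded area; N / mm^2 is numerically equal to MPa."""
    return peak_load_n / (width_mm * length_mm)

# Placeholder specimen: 60 mm x 60 mm footprint carrying a 32.8 kN peak load.
print(f"{compressive_strength_mpa(peak_load_n=32_800.0, width_mm=60.0, length_mm=60.0):.2f} MPa")  # 9.11 MPa
```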
With the development of AI technology, agricultural cultivation has become increasingly efficient with the support of the Internet of Things (IoT) and machine learning. The importance of artificial intelligence in improving planting efficiency and seedling survival rates is becoming increasingly prominent, and it has become a research hotspot. This paper introduces the research scope of machine learning, discusses the application scenarios of the Internet of Things, object detection, and big data analysis in agriculture, compares the advantages and disadvantages of smart agriculture and traditional agriculture, and summarizes the relevant research results. It also proposes an artificial intelligence agricultural planting scheme that integrates the Internet of Things and computer vision technology, in order to raise the level of agricultural intelligence and promote high-quality, efficient agricultural production.
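A minimal sketch of how IoT sensor readings and a computer-vision output might be combined in the kind of planting scheme proposed above; the sensor fields, the seedling count (assumed to come from an object-detection model), and the irrigation rule are all illustrative assumptions rather than the paper's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class PlotReading:
    soil_moisture_pct: float   # from an IoT soil probe
    air_temp_c: float          # from a field weather node
    seedling_count: int        # from an object-detection model run on a plot image

def irrigation_decision(reading: PlotReading, expected_seedlings: int) -> str:
    """Simple rule combining sensing and vision: irrigate dry plots, flag poor emergence."""
    survival_rate = reading.seedling_count / expected_seedlings
    if survival_rate < 0.8:
        return "alert: low seedling survival, inspect plot"
    if reading.soil_moisture_pct < 25.0 and reading.air_temp_c > 10.0:
        return "irrigate"
    return "no action"

print(irrigation_decision(PlotReading(18.0, 22.5, 172), expected_seedlings=200))  # irrigate
```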
This paper proposes a closed-loop human-machine co-creation process suitable for the early stages of industrial design. By integrating the Stable Diffusion model with the LoRA fine-tuning strategy, and constructing an image quality evaluation mechanism based on the dual metrics of CLIP and CMMD, the system guides designers in filtering and providing feedback on generated outputs to iteratively optimize prompts. The system integrates automatic scoring, manual filtering, and keyword clustering recommendation to form a collaborative closed loop of “generation—selection—optimization.” In a desk lamp design task, experiments demonstrate that this process significantly enhances the consistency of image styles and the quality of creative expression. The study verifies the feasibility of the human-machine collaboration mechanism in complex design tasks and offers a new paradigm for the application of generative AI in industrial product design.
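A minimal sketch of the automatic-scoring step in such a generation, selection, and optimization loop, assuming the CLIP image-text similarity between each generated image and the prompt is used as the quality score; the Hugging Face transformers CLIP checkpoint is used as a stand-in, and the paper's actual pipeline also involves CMMD scoring, manual filtering, and keyword clustering not shown here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_scores(prompt: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Rank generated images by CLIP image-text similarity to the prompt."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    sims = outputs.logits_per_image.squeeze(1)  # one similarity score per image for the single prompt
    return sorted(zip(image_paths, sims.tolist()), key=lambda x: x[1], reverse=True)

# The top-scoring candidates would then go to manual review and prompt refinement, e.g.:
# ranked = clip_scores("minimalist desk lamp, brushed aluminium", ["gen_01.png", "gen_02.png"])
```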
With the increasing prevalence of underwater exploration and tourism, many companies are striving to achieve precise localization of submarines to prevent incidents of loss. This paper proposes a solution to address this issue. First, a dynamic analysis of the missing submarine is conducted. Taking into account the forces acting on the submarine—buoyancy, gravity, Coriolis force, and viscous drag caused by relative motion with seawater—the model calculates changes in the submarine's velocity and position. The model also considers density variations due to changes in seawater temperature and salinity at different depths, as well as variations in viscous drag caused by differences in seawater flow speed and direction. Based on the last transmitted coordinates, depth, water temperature, the submarine's own motion state, and the surrounding seawater current speed, the model can effectively predict the submarine's trajectory and final location.
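A minimal sketch of the kind of forward integration such a dynamic model performs, assuming a point-mass submarine acted on by gravity, buoyancy, and quadratic viscous drag relative to the local current; the Coriolis term and the depth-dependent density and current profiles described above are omitted, and all numerical values are simplified placeholders.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z positive upward

def predict_trajectory(pos, vel, mass, volume, rho_water, drag_coeff, area,
                       current, dt=1.0, steps=3600):
    """Explicit Euler integration of a simplified force balance on the submarine."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    for _ in range(steps):
        buoyancy = np.array([0.0, 0.0, rho_water * volume * 9.81])  # Archimedes, upward
        rel_vel = vel - np.array(current)                           # motion relative to the seawater
        drag = -0.5 * rho_water * drag_coeff * area * np.linalg.norm(rel_vel) * rel_vel
        accel = GRAVITY + (buoyancy + drag) / mass
        vel = vel + accel * dt
        pos = pos + vel * dt
    return pos

# One-hour drift forecast from the last reported state (placeholder values).
print(predict_trajectory(pos=[0.0, 0.0, -500.0], vel=[0.5, 0.0, -0.1], mass=1.2e6,
                         volume=1.168e3, rho_water=1027.0, drag_coeff=0.9,
                         area=30.0, current=[0.3, 0.1, 0.0]))
```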
Automatic modulation recognition plays a critical role in both civilian and military communication systems. While traditional approaches rely on manual feature extraction with limited accuracy, deep learning methods offer promising alternatives for this pattern recognition task. This paper presents a systematic performance evaluation of classical deep learning models for automatic modulation classification, aiming to establish baseline references for future research. Through comparative experiments using the RadioML2018.01a dataset containing 24 modulation types across SNR levels from -20 dB to 20 dB, we demonstrate that modulation signals exhibit multidimensional characteristics with temporal dependencies. Our analysis reveals that the proposed Multi-Scale Contextual Attention Network (MCNet) outperforms conventional CNN and ResNet architectures, achieving 82.39% accuracy under high-SNR conditions. The network's superior performance stems from its ability to extract multiscale spatiotemporal features through parallel asymmetric convolutions, preserve signal correlations via attention mechanisms, and maintain computational efficiency through optimized layer configurations. These findings provide two key contributions: quantitative benchmarks for model selection in practical implementations, and architectural insights for developing next-generation recognition systems. The study particularly highlights MCNet's robustness in processing high-order QAM/PSK modulations, though challenges remain for low-SNR scenarios.
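As an illustration of the parallel asymmetric convolutions credited above for multiscale feature extraction, a minimal PyTorch sketch of one such block; the channel counts, kernel sizes, and input shape are assumptions for demonstration, not the exact MCNet configuration reported in the paper.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 1xk / kx1 convolutions over an I/Q frame treated as a 2 x N 'image'."""
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.branch_a = nn.Conv2d(in_ch, branch_ch, kernel_size=(1, 3), padding=(0, 1))
        self.branch_b = nn.Conv2d(in_ch, branch_ch, kernel_size=(3, 1), padding=(1, 0))
        self.branch_c = nn.Conv2d(in_ch, branch_ch, kernel_size=(1, 7), padding=(0, 3))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate features captured at different temporal scales and orientations.
        return self.act(torch.cat([self.branch_a(x), self.branch_b(x), self.branch_c(x)], dim=1))

# RadioML-style input: a batch of I/Q frames with shape (batch, 1, 2, 1024).
block = MultiScaleBlock(in_ch=1, branch_ch=16)
print(block(torch.randn(8, 1, 2, 1024)).shape)  # torch.Size([8, 48, 2, 1024])
```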