1. Introduction
The rapid growth of Internet of Things (IoT) devices and increasing data traffic demand advanced signal processing techniques. By 2030, approximately 500 billion devices are expected to be connected to the Internet, necessitating intelligent, adaptive solutions for efficient network management. However, traditional signal processing methods, such as Fourier-based analysis and Kalman filtering, struggle to adapt to dynamic, large-scale wireless environments, particularly in urban IoT and vehicular networks. Beyond 5G (B5G) and 6G networks must overcome challenges related to speed, quality of service (QoS), energy efficiency, privacy protection, and security [1]. Artificial intelligence (AI) has emerged as a promising solution to address these limitations by enabling adaptive spectrum allocation, interference mitigation, and real-time optimization. AI-driven techniques, including deep learning and reinforcement learning, improve wireless channel estimation and spectrum efficiency. However, challenges such as computational complexity and real-time adaptability persist, necessitating a unified, AI-driven framework.
This study introduces an AI-integrated adaptive signal processing framework leveraging Blind Spectrum Sensing (BSS), Edge Learning (EL), and Radio Frequency (RF) signal reflection. These methodologies enhance detection accuracy, minimize interference, and optimize communication efficiency, contributing to the development of resilient and intelligent 6G networks. By integrating these techniques, this framework lays the foundation for next-generation autonomous and high-performance wireless systems.
2. Theoretical Foundations
Machine learning has emerged as a transformative approach for addressing the growing challenges in wireless communication systems [2]. Traditional signal processing techniques, such as Fourier analysis and Kalman filtering, rely on explicit mathematical models and often struggle in dynamic network environments. In contrast, machine learning enables adaptive signal processing, optimizing resource allocation and real-time decision-making as network complexity increases.
Artificial neural networks primarily employ supervised, unsupervised, and reinforcement learning to enhance signal classification, spectrum management, and adaptive decision-making [3]. Supervised learning, using labeled datasets, is widely applied in channel estimation, modulation recognition, and signal classification, with Convolutional Neural Networks (CNNs) excelling in wireless spectrum sensing and interference detection. Unsupervised learning, which identifies latent structures in raw data, is particularly useful for blind spectrum sensing and anomaly detection, employing techniques like autoencoders and k-means clustering. Reinforcement learning (RL) optimizes power control, dynamic frequency allocation, and network load balancing through trial-and-error interactions, with advanced methods such as Q-learning and policy gradient techniques improving autonomous spectrum management and resource allocation [4].
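To make these categories concrete, the following minimal sketch shows a supervised 1-D CNN classifying raw IQ frames, of the kind used for modulation recognition. It assumes PyTorch; the architecture, frame length, class count, and the randomly generated data and labels are illustrative placeholders rather than a model from the cited works.

```python
# Minimal supervised-learning sketch: a 1-D CNN that classifies raw IQ frames.
# All shapes, class labels, and training data here are synthetic placeholders.
import torch
import torch.nn as nn

class IQClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),   # 2 input channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                     # x: (batch, 2, samples)
        return self.head(self.features(x).squeeze(-1))

model = IQClassifier()
x = torch.randn(8, 2, 1024)                   # 8 random IQ frames (stand-in for labeled data)
y = torch.randint(0, 4, (8,))                 # placeholder modulation labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                               # one supervised gradient step would follow
```

An unsupervised or reinforcement-learning variant would replace the labeled targets with clustering objectives or environment rewards, respectively.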
A key advantage of machine learning in signal processing is its ability to learn directly from raw data, eliminating the need for predefined feature extraction and enabling greater flexibility and adaptability in managing complex networks [2]. As 6G networks emerge, machine learning will play a crucial role in enhancing efficiency, reducing latency, and enabling self-optimizing communication systems. Future research will focus on reducing computational complexity, improving real-time adaptation, and addressing privacy concerns, ensuring effective large-scale deployment of AI-driven signal processing [3].
3. Methods and Implementation of AI-Driven Signal Processing
3.1. Blind Spectrum Sensing (BSS) Technology
Blind spectrum sensing (BSS) plays a critical role in wireless communication networks, enabling detection of unknown signals in dynamic and noisy environments without requiring prior knowledge of signal characteristics. Traditional spectrum sensing methods, such as energy detection and matched filtering, rely on predefined signal models, which limits their adaptability to complex, real-world scenarios [5]. Recent advances in machine learning (ML)-based BSS have significantly enhanced the performance of spectrum sensing by enabling real-time signal classification and anomaly detection.
First, clustering and unsupervised learning techniques, like K-means clustering and Gaussian Mixture Models (GMMs), are widely applied in spectrum anomaly detection and the classification of unknown signals. These methods analyze patterns in cumulative distribution functions (CDFs) from received signals, allowing for differentiation between active transmissions and background noise without requiring predefined templates [5]. This approach is particularly effective in heterogeneous wireless environments, where unknown signals and interference need to be detected dynamically.
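As a rough illustration of this idea, the sketch below clusters per-segment quantile (CDF-style) features of received power with K-means to separate noise-only segments from segments containing an unknown transmission. It assumes scikit-learn and a synthetic tone-in-noise signal model; the feature set and cluster count are illustrative choices, not the exact procedure of [5].

```python
# Unsupervised sketch: cluster CDF-style features of received power to separate
# "noise-only" from "active transmission" segments without any signal template.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_seg, seg_len = 200, 512
t = np.arange(seg_len)
tone = np.sin(2 * np.pi * 0.05 * t)                         # unknown narrowband transmission
noise_only = rng.normal(0.0, 1.0, (n_seg // 2, seg_len))
active = rng.normal(0.0, 1.0, (n_seg // 2, seg_len)) + 0.8 * tone
segments = np.vstack([noise_only, active])

power = segments ** 2
# CDF-inspired features: a few quantiles of instantaneous power within each segment
feats = np.quantile(power, [0.25, 0.5, 0.75, 0.95], axis=1).T
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print("cluster sizes:", np.bincount(labels))                # roughly 100 / 100 expected
```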
Deep learning models have also been explored for feature extraction and classification in wideband spectrum sensing. CNNs effectively capture spectral characteristics from time-frequency representations, while Recurrent Neural Networks (RNNs) excel at detecting temporal dependencies, thereby improving detection accuracy under non-stationary conditions. These models facilitate automatic feature learning, making them valuable in dynamic and interference-prone wireless environments.
Hybrid approaches that combine supervised and unsupervised learning address data scarcity issues in BSS. Semi-supervised learning frameworks leverage limited labeled datasets alongside large amounts of unlabeled spectrum data to refine classification models, reducing the dependency on extensive training datasets. These methods improve generalization and adaptability, making them well-suited for applications in cognitive radio networks and next-generation spectrum-sharing systems.
To further enhance the efficiency and accuracy of BSS implementations, various optimization strategies have been developed. A key strategy is Short-Time Fast Fourier Transform (ST-FFT) decomposition, which improves computational efficiency and detection accuracy. Instead of applying a single large-scale FFT, ST-FFT divides the input signal into multiple shorter time frames and performs FFT computations on these smaller segments. This decomposition reduces the per-transform computational load, provides finer time-frequency granularity, and improves real-time detection in rapidly changing environments [5].
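A minimal NumPy sketch of this framing step is given below; the frame length, hop size, and Hann window are illustrative choices, and the test signal is synthetic.

```python
# Sketch of ST-FFT decomposition: split a long capture into short, windowed frames
# and FFT each frame instead of computing one large FFT over the whole record.
import numpy as np

def st_fft(x: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Return a (num_frames, frame_len // 2 + 1) magnitude spectrogram."""
    window = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    frames = np.stack([x[s:s + frame_len] * window for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(0)
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1_200 * t) + 0.5 * rng.standard_normal(t.size)  # tone in noise
spec = st_fft(x)
print(spec.shape)       # e.g. (77, 129): short-time spectra available for per-frame detection
```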
Additionally, adaptive thresholding for real-time detection dynamically adjusts detection parameters based on noise estimation and signal conditions, optimizing performance in low-SNR environments. Machine learning-based threshold calibration refines detection boundaries, reducing false alarm rates, while Bayesian and entropy-based methods analyze signal probability distributions for more precise and adaptive spectrum sensing. The integration of ST-FFT and adaptive thresholding significantly enhances real-time spectrum detection, enabling more efficient and robust wireless communication systems.
Adaptive thresholding is particularly useful in cognitive radio networks, where spectrum availability is unpredictable, and in IoT systems, where energy efficiency is critical for low-power sensing devices. By integrating ST-FFT decomposition and adaptive thresholding, modern BSS implementations can achieve higher detection accuracy, reduced computational overhead, and enhanced adaptability to dynamic wireless environments.
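The following sketch illustrates one simple way such an adaptive threshold can be computed: the noise floor is estimated with robust (median/MAD) statistics over recent frame energies, and the threshold is set from a target false-alarm probability. The Gaussian approximation, the chi-square noise model, and all numerical values are simplifying assumptions rather than the calibration methods cited above.

```python
# Sketch of adaptive thresholding for energy detection: the threshold tracks a
# robust estimate of the noise floor instead of being fixed in advance.
import numpy as np
from scipy.stats import norm

def adaptive_threshold(frame_energies: np.ndarray, p_fa: float = 0.01) -> float:
    """Noise floor via median/MAD; threshold set for an approximate false-alarm rate."""
    mu = np.median(frame_energies)                     # robust noise-floor estimate
    mad = np.median(np.abs(frame_energies - mu))
    sigma = 1.4826 * mad + 1e-12                       # MAD -> std under a Gaussian model
    return mu + norm.ppf(1.0 - p_fa) * sigma           # approximate false-alarm control

rng = np.random.default_rng(1)
energies = rng.chisquare(df=64, size=500)              # mostly noise-only frame energies
energies[::10] += 60.0                                 # a few frames with an active signal
thr = adaptive_threshold(energies)
print(f"adaptive threshold = {thr:.1f}; frames flagged = {(energies > thr).sum()}")
```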
3.2. Edge Learning and Distributed Signal Processing
Edge learning (EL) represents a paradigm shift in distributed machine learning for next-generation wireless networks. Unlike traditional cloud-based AI models, which require large-scale data aggregation and centralized training, EL distributes learning tasks across geographically dispersed edge devices. This reduces latency, preserves user privacy, and minimizes network congestion [6].
The architectural framework of EL is built on three key methodologies that enable scalable and intelligent network optimization. Federated Learning (FL) allows multiple edge nodes to collaboratively train a global model without sharing raw data. Instead of transmitting data to a central server, only model updates (such as gradient changes) are exchanged, significantly reducing communication overhead while maintaining data security. FL is particularly useful in healthcare, autonomous driving, and industrial IoT, where data privacy is paramount. In Split Learning (SL), a deep learning model is partitioned into components that run at different locations, such as client-side and server-side layers. This reduces the computational burden on resource-constrained edge devices while still leveraging cloud-based computing power for complex model training. SL is effective in mobile edge computing scenarios, where lightweight models are deployed on IoT devices, drones, and smart sensors. Multi-Agent Reinforcement Learning (MARL) allows multiple edge devices to act as independent agents, learning from real-time environmental feedback to optimize communication and resource allocation autonomously [6]. This technique is beneficial for dynamic wireless environments, such as 6G-enabled autonomous vehicle networks and Unmanned Aerial Vehicle (UAV) swarms.
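The FL component can be illustrated with a minimal federated-averaging sketch: each client fits a model on its own private data, and only the parameters are averaged centrally. The linear model, NumPy implementation, client count, and data generation below are deliberate simplifications, not a production FL system.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each edge node trains a local
# linear model on its own data; only the model weights are averaged centrally.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])                       # unknown quantity to be learned

def local_update(w_global: np.ndarray, n: int = 100, lr: float = 0.1, steps: int = 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)        # private local data, never shared
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n             # least-squares gradient
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _round in range(5):                              # each round: broadcast, train, average
    client_weights = [local_update(w_global) for _ in range(4)]
    w_global = np.mean(client_weights, axis=0)       # only parameters cross the network
print(w_global)                                      # approaches true_w without pooling data
```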
In distributed learning environments, efficient communication resource allocation is a major challenge. EL must optimize bandwidth usage, computing power distribution, and latency trade-offs to ensure that learning can be conducted without overwhelming wireless networks. Balancing model accuracy with communication efficiency is crucial, as frequent model updates can lead to network congestion and increased computational overhead. For instance, in 6G-enabled autonomous vehicle networks, EL coordinates real-time learning across multiple vehicles to optimize routing and traffic management, exchanging processed insights instead of raw sensor data to reduce bandwidth consumption and latency [6]. To further enhance communication efficiency, techniques such as adaptive compression, model pruning, and selective gradient updates are employed, reducing the size of transmitted updates while maintaining model accuracy.
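As an example of selective gradient updates, the sketch below keeps only the largest-magnitude entries of a gradient vector before transmission; the sparsity level is an arbitrary illustrative choice, and practical systems typically add error feedback to compensate for the discarded entries.

```python
# Sketch of selective (top-k) gradient updates: transmit only the largest-magnitude
# gradient entries to cut uplink traffic between edge devices and the server.
import numpy as np

def sparsify_topk(grad: np.ndarray, keep_ratio: float = 0.1):
    k = max(1, int(keep_ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]     # indices of the k largest entries
    return idx, grad[idx]                            # what the edge device actually sends

grad = np.random.default_rng(0).normal(size=10_000)
idx, values = sparsify_topk(grad, keep_ratio=0.05)
recovered = np.zeros_like(grad)
recovered[idx] = values                              # server-side reconstruction
print(f"sent {idx.size} of {grad.size} entries "
      f"({idx.size / grad.size:.0%} of the original update)")
```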
By leveraging FL and reinforcement learning (RL), EL enables distributed knowledge sharing across multiple devices, enhancing edge device intelligence in dynamic and resource-constrained environments. A key application is intelligent spectrum allocation for 6G networks, where FL allows multiple base stations to collaboratively train a spectrum management model, while RL enables autonomous optimization of frequency allocation based on real-time conditions. This hybrid approach ensures efficient spectrum utilization, minimizes congestion, and improves reliability.
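The RL element can be sketched at its simplest as a bandit-style agent that learns which channel succeeds most often through epsilon-greedy exploration. The channel success probabilities below are synthetic placeholders; a deployed system would learn from live interference and traffic measurements and would use a richer state representation than this stateless example.

```python
# Bandit-style RL sketch of frequency selection: an epsilon-greedy agent at a base
# station learns which of several channels gives the most successful transmissions.
import numpy as np

rng = np.random.default_rng(7)
success_prob = np.array([0.2, 0.5, 0.8, 0.4])        # hidden quality of 4 channels
q = np.zeros(4)                                      # estimated value per channel
counts = np.zeros(4)
eps = 0.1

for step in range(2_000):
    ch = rng.integers(4) if rng.random() < eps else int(np.argmax(q))
    reward = float(rng.random() < success_prob[ch])  # 1 if the transmission succeeded
    counts[ch] += 1
    q[ch] += (reward - q[ch]) / counts[ch]           # incremental value estimate update

print(np.round(q, 2), "-> preferred channel:", int(np.argmax(q)))
```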
By integrating FL and RL, EL enables intelligent, decentralized learning, allowing edge-based AI systems to continuously improve, adapt, and collaborate efficiently. As EL matures, the fusion of these techniques will be instrumental in building AI-native, privacy-preserving, and self-optimizing wireless networks, ensuring seamless and efficient learning across edge devices in 6G and beyond [6].
3.3. Radio Frequency Signal Reflection and Innovative Applications
RF (Radio Frequency) signal reflection technologies exploit the fundamental properties of electromagnetic wave propagation, enabling advanced wireless sensing and communication capabilities beyond conventional methods. When RF waves encounter objects, they undergo scattering, diffraction, and reflection, creating unique signal signatures that can be analyzed to infer environmental conditions, motion, and object characteristics [7]. Compared to traditional radar and ultrasound-based sensing, RF reflection techniques provide superior penetration capabilities and higher resolution, making them suitable for through-wall sensing, health monitoring, and gesture recognition [7].
RF signal reflection technology is being rapidly adopted across various industries due to its ability to extract detailed, real-time environmental information without requiring direct contact. One key application is contactless human activity recognition, where RF reflections are used to track motion, detect posture changes, and monitor gestures. This technology plays a crucial role in smart home automation and elderly care, where continuous, non-intrusive monitoring can improve safety and quality of life for individuals with limited mobility or cognitive impairments. By analyzing subtle variations in signal reflections caused by human movement, AI-powered RF sensing systems can detect falls, abnormal postures, and activity patterns, providing real-time alerts to caregivers. Within industrial IoT and smart cities, RF reflection technology is used to monitor factory environments, detect structural defects in buildings, and optimize urban traffic management. In industrial settings, RF-based sensors can identify machinery wear and tear, predict equipment failures, and ensure workplace safety by detecting hazardous conditions such as gas leaks or overheating components. In smart cities, RF reflection technology enhances traffic monitoring by analyzing vehicle movements and pedestrian flow, enabling dynamic traffic signal adjustments and congestion mitigation strategies.
Implementing RF signal reflection technologies requires advanced signal processing and machine learning techniques to accurately extract and interpret meaningful data from complex reflected wave patterns. Deep learning models, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have been successfully applied to RF-based human activity recognition and health monitoring. CNNs are effective in analyzing spectrogram representations of reflected signals, allowing AI models to recognize movement patterns and classify different activities. Meanwhile, RNNs and Long Short-Term Memory (LSTM) networks are well-suited for tracking temporal changes in RF reflections, making them ideal for detecting subtle variations in breathing or heartbeat signals.
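A minimal sketch of the sequence-modeling side is shown below: an LSTM consumes per-frame RF reflection features and outputs class scores for the whole sequence. It assumes PyTorch, and the feature dimension, sequence length, and two output classes are illustrative placeholders rather than a validated sensing model.

```python
# Sketch of an LSTM over short-time RF reflection features (e.g. per-frame Doppler
# or amplitude summaries) for sequence classification such as activity recognition.
import torch
import torch.nn as nn

class ReflectionLSTM(nn.Module):
    def __init__(self, feat_dim: int = 16, hidden: int = 32, num_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                    # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)           # final hidden state summarizes the sequence
        return self.head(h_n[-1])

model = ReflectionLSTM()
frames = torch.randn(4, 100, 16)             # 4 synthetic sequences of 100 RF frames each
logits = model(frames)
print(logits.shape)                          # (4, 2): e.g. "normal" vs. "fall" scores
```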
Reinforcement learning (RL) is also being explored for dynamic RF spectrum adaptation in smart city and IoT applications. RL models can autonomously optimize RF sensor configurations to improve energy efficiency and sensing accuracy in real-time. For example, in autonomous traffic monitoring systems, RL-based models can adjust sensor parameters dynamically to ensure optimal vehicle detection and classification, reducing false alarms and improving traffic flow analysis.
4. Challenges and Prospects of AI-Driven Signal Processing in Wireless Communication
Despite the significant advancements brought by AI-driven signal processing, several key challenges must be addressed before these technologies can be fully integrated into next-generation wireless communication systems.
AI models require extensive real-world datasets for training, but data collection in wireless networks is constrained by privacy concerns, security risks, and high-dimensionality issues. Although techniques such as Federated Learning (FL) and differential privacy have been proposed to mitigate privacy risks, they introduce communication overhead and may lead to model degradation in non-Independent and Identically Distributed (non-IID) data scenarios. Additionally, adversarial attacks and model poisoning remain critical threats, necessitating more resilient AI security frameworks for wireless applications.
Furthermore, the deployment of AI in wireless networks increases energy consumption, particularly in massive IoT and 6G networks. Optimizing AI inference on low-power devices, implementing energy-efficient neural network architectures, and exploring AI-driven power control strategies are essential for minimizing energy costs while maintaining high communication performance. Green AI techniques, including hardware-efficient AI accelerators and low-power deep learning models, will be key to enabling sustainable AI-driven wireless communication systems.
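As one concrete example of this privacy-utility trade-off, the sketch below applies a differential-privacy-style transformation to a client update, clipping its norm and adding Gaussian noise before transmission. The clip norm and noise scale are illustrative; a real deployment would calibrate them to a formal privacy budget, and the added noise is precisely what can degrade accuracy under non-IID data.

```python
# Sketch of differential-privacy-style protection for a federated update: clip the
# client update and add Gaussian noise before it ever leaves the device.
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float = 1.0, noise_std: float = 0.5,
                     rng=np.random.default_rng(0)) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound each client's influence
    return clipped + rng.normal(scale=noise_std, size=update.shape)

raw_update = np.random.default_rng(3).normal(size=128)
safe_update = privatize_update(raw_update)
print(np.linalg.norm(raw_update), np.linalg.norm(safe_update))
```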
Despite these challenges, AI-driven signal processing holds enormous potential to revolutionize wireless communication by enhancing efficiency, adaptability, and intelligence. Several promising directions will shape the future development of AI-integrated wireless systems. Specifically, future wireless systems, particularly 6G, are expected to be AI-native, meaning AI will be deeply embedded in network design, optimization, and management. Unlike traditional wireless networks that apply AI as an add-on, AI-native architectures will feature self-learning, self-optimizing, and self-healing capabilities, enabling fully autonomous wireless communication networks.
Moreover, AI-driven cognitive radio networks and autonomous spectrum management will enable wireless systems to dynamically adjust frequencies, bandwidths, and transmission parameters in real-time. This will allow more efficient spectrum utilization, reducing congestion and enhancing network capacity. Furthermore, AI-based edge intelligence will empower autonomous vehicles, UAVs, and smart city infrastructures with real-time decision-making capabilities.
5. Conclusion
This study explored the integration of AI and machine learning in adaptive signal processing for future wireless networks, particularly in Beyond 5G and 6G systems. Through an in-depth analysis of Blind Spectrum Sensing, Edge Learning, and Radio Frequency signal reflection, it was shown that AI-driven techniques can significantly improve detection accuracy, anti-interference capabilities, and real-time adaptability in complex wireless environments. The discussion of challenges and future prospects highlighted key obstacles such as computational complexity, data privacy, and energy efficiency, while also emphasizing the transformative potential of AI-native wireless networks, semantic communication, and sustainable AI models.
Despite its contributions, this study has certain limitations. For instance, while theoretical frameworks and recent advancements were reviewed, no experimental validation or simulation results were provided, making it difficult to assess the real-world performance of the proposed methodologies.
Looking ahead, AI-driven adaptive signal processing is expected to play a pivotal role in 6G networks, enabling self-optimizing, highly efficient, and autonomous communication systems. The convergence of AI with quantum computing, reconfigurable intelligent surfaces (RIS), and multi-agent learning will further expand the capabilities of wireless communication, paving the way for next-generation smart infrastructures, autonomous networking, and intelligent spectrum management. By addressing the current challenges and integrating more robust AI architectures, future research will continue to shape the evolution of intelligent and sustainable wireless networks.
References
[1]. R. Shobarani, S. Kumaresh, P. Dhivya, M. J. Bharathi, and S. S. Santhi, “Machine Learning Approaches for Adaptive Signal Processing in 6G Networks,” 2024 IEEE International Conference on Computing, Power and Communication Technologies (IC2PCT), pp. 772–776, Feb. 2024, doi: https://doi.org/10.1109/ic2pct60090.2024.10486222.
[2]. J. Chen, Y. Gao, Y. Zhou, Z. Liu, D. Li, and M. Zhang, “Machine Learning enabled Wireless Communication Network System,” 2022 International Wireless Communications and Mobile Computing (IWCMC), May 2022, doi: https://doi.org/10.1109/iwcmc55113.2022.9824835.
[3]. Y. Zhou, J. Chen, M. Zhang, D. Li, and Y. Gao, “Applications of Machine Learning for 5G Advanced Wireless Systems,” 2021 International Wireless Communications and Mobile Computing (IWCMC), Jun. 2021, doi: https://doi.org/10.1109/iwcmc51323.2021.9498754.
[4]. P. P. Patil, A. Perez-Mendoza, K. Joshi, H. Shah, B. G. Pillai, and M. Kalyan Chakravarthi, “Moving toward an intelligent edge: Machine Learning and Wireless Communication,” pp. 358–362, May 2023, doi: https://doi.org/10.1109/icacite57410.2023.10182477.
[5]. J. Nikonowicz and M. Jessa, “Wideband Spectrum Sensing Utilizing Cumulative Distribution Function and Machine Learning,” 2023 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1–6, Sep. 2023, doi: https://doi.org/10.23919/softcom58365.2023.10271567.
[6]. J. Nikonowicz and M. Jessa, “Wideband Spectrum Sensing Utilizing Cumulative Distribution Function and Machine Learning,” 2023 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1–6, Sep. 2023, doi: https://doi.org/10.23919/softcom58365.2023.10271567.
[7]. K. Kalaiselvi, R. Sankar, S. Supriya, G. Kaushik, H. Swamy, and M. Devika, “Towards Seamless Connectivity: Implementing 6G Communication Technologies In Next-Generation Networks,” 2024 3rd International Conference for Advancement in Technology (ICONAT), pp. 1–6, Sep. 2024, doi: https://doi.org/10.1109/iconat61936.2024.10775248.