Research Article
Open access

Drone detection with radio frequency signals and deep learning models

Xuanze Dai 1*
  • 1 Dalian University of Technology    
  • *corresponding author xuanzedai@gmail.com
Published on 15 March 2024 | https://doi.org/10.54254/2755-2721/47/20241230
ACE Vol.47
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-335-7
ISBN (Online): 978-1-83558-336-4

Abstract

The widespread use of drones raises security, environmental, privacy, and ethical issues; therefore, effective detection of drones is important. There are several methods for detecting drones, such as wireless signal detection, photoelectric detection, radar detection, and sound detection. However, these detection methods are not yet accurate enough for practical drone identification, so more robust drone detection methods are needed. In addition, different types of drones and application scenarios require different technical means for detection and identification. Based on the 2-class, 4-class, and 10-class problems defined on an open radio frequency (RF) signal dataset, we compared the drone detection and classification performance of different machine learning and deep learning models, as well as multi-task models proposed by combining RF methods with convolutional neural networks (CNNs). Our experimental results show that the XGBoost model achieved state-of-the-art results on this benchmark dataset, with 99.96% accuracy on the 2-class problem, 92.31% on the 4-class problem, and 74.81% on the 10-class problem, exhibiting the best performance for drone detection and classification.

Keywords:

Deep learning, Machine learning, Drone detection, Radio frequency signal

Dai, X. (2024). Drone detection with radio frequency signals and deep learning models. Applied and Computational Engineering, 47, 92-100.

1. Introduction

Drone technology has been continuously developed and improved, including drone design, manufacturing, flight control, communication, and navigation. The development of these technologies has supported and guaranteed the wide application of drones. With the expansion of production scale and the advancement of technology, the cost of drones has gradually decreased, so that more people and enterprises can afford to use them. The application scenarios of drones continue to expand and include aerial photography, agriculture, logistics distribution, security monitoring, and other fields. The expansion of these application scenarios provides broader space for the development of drones. With increasing demand for and awareness of drones, the market for drones is also growing, providing broader market space for drone development. Many countries and regions have issued drone-related policies and regulations, providing support and guarantees for the legal use of drones and further promoting their development. In summary, there are many reasons for the success of drones, including technological progress, cost reduction, expansion of application scenarios, growth in market demand, and support from policies and regulations [1].

However, there are also some problems with the use of drones, mainly reflected in the following aspects.

1. Security issues: Drones may be used for illegal activities, such as carrying drugs, illegal transportation, and terrorist attacks. Therefore, the development of efficient, accurate, and reliable detection technologies is necessary to ensure safety.

2. Environmental issues: Drones may have an impact on the environment during flight, affecting birds and other aerial vehicles. Therefore, it is necessary to predict and avoid potential conflicts between drones and the environment using detection technology.

3. Privacy and ethical issues: The use of drones may raise privacy and ethical issues, such as the possibility that drones may be used to spy on others, invade personal spaces, etc. Therefore, relevant regulations and norms must be formulated to ensure the legality and rationality of drone detection and use.

In summary, unmanned aerial vehicle (UAV) detection is of great significance for maintaining security, protecting the environment, and protecting privacy and ethics [2,3].

Various approaches have been taken to detect drones, including:

1. Wireless signal detection: The presence of a drone can be detected by monitoring the radio signals it emits. A wireless spectrum analyzer or detector can be used to monitor the drone's image-transmission or navigation signals to determine its position.

2. Photoelectric detection: Using optical sensors, infrared sensors, and other equipment, the optical or thermal signals emitted by the drone can be detected and identified. This method works both day and night and can effectively detect drones hidden in the dark or at a distance.

3. Radar system: Radar is a technology that is widely used in drone detection. The return signal of the drone can be detected using radar equipment, and its position, speed, and track information can be calculated. The radar system is particularly suitable for large-range and low-altitude drone detection requirements.

4. Sound detection: The drone produces a specific noise during flight, and the sound detection equipment can identify this noise and the location of the drone.

5. Directed strike: Laser weapons, artillery shells, and other directed weapons can be used to strike and destroy drones (a countermeasure rather than a detection method).

In addition, there are other technical means, such as drone countermeasure technology and face recognition technology, that can also be used to detect and identify drones [4, 5]. In short, for different types of drones and application scenarios, different technical means need to be used for detection and identification [6, 7].

In this study, we first review different drone detection methods and summarize the relevant literature. Subsequently, based on an open RF signal dataset, we compared the drone detection and classification performance of different machine learning and deep learning models. Our experimental results show that the XGBoost model exhibits the best performance for drone detection and classification.

2. Related Work

2.1. Radar-based Methods

In a pioneering effort documented in [8], an extensive dataset was curated featuring both drones and avian species. This dataset was compiled using the Aveillant Gamekeeper 16U drone discrimination radar. Subsequently, signal processing techniques were applied to derive Doppler signatures, resulting in a comprehensive 4-D data matrix. The dataset comprised 2,048 data segments, originating from an equal number of radar pulses. Dale H. and colleagues scrutinized this dataset, highlighting the pivotal role of signal-to-noise ratio (SNR) in the performance of convolutional neural network (CNN)-based classifiers for drone and bird discrimination.

Remarkably, the research demonstrated that at an SNR of 20 dB, prominent CNN architectures including Inception-v3, ResNet-18, and ResNet-50 excelled in distinguishing birds with an impressive 99% accuracy rate. However, their performance on drones was notably lackluster, yielding an accuracy rate of only around 50%. SqueezeNet exhibited a contrasting performance, correctly identifying 93% of drones but misclassifying a substantial 46% of birds. Meanwhile, AlexNet exhibited a balanced performance, accurately identifying 70% of drones and 95% of birds for an overall accuracy rate of 81.3%. AlexNet also showed strong results across various metrics, encompassing classification accuracy, false positive rate, training time, and classification decision time, rendering it the most apt choice among the networks investigated.

In an analogous vein, a distinct open drone dataset was harnessed in [9], presenting raw RF signals originating from drones across four distinct scenarios. This extensive dataset encompassed a total of 227 segments, amounting to 40 GB of data captured within the 2.4 GHz ISM band, and featured the intensity profiles of both low- and high-frequency RF signals. The dataset composition was diverse, comprising 41 background signals without drones, 84 from the Bebop drone, 81 from the AR drone, and 21 from the Phantom drone. Akter and co-authors introduced a neural network-based system, christened CNN-SSDI, tailored for precise drone detection.

Comparative analysis unveiled CNN-SSDI as the frontrunner, boasting a remarkable accuracy rate of 94.5%. In summary, CNN-SSDI unequivocally outperformed contemporary counterparts reliant on machine learning paradigms.

Furthering the discourse, [10] introduced novel drone datasets meticulously crafted through a data generation approach emulating Martin and Mulgrew signals. Each unique combination of radar specifications and SNR values yielded a dataset housing a rich repository of 1,000 training samples, represented as spectrograms. These datasets further encompassed three dedicated test datasets, each containing 350 samples. Raval and associates introduced a specialized Martin and Mulgrew model aimed at expedited drone classification. Through empirical evaluation, the study underscored the superior performance of the X-band model, achieving an F1 score of 0.816 ± 0.011 when trained with 10 dB SNR data. Nevertheless, it's important to note that this model exhibited a propensity to generate false alarms in cases where UAV types were ambiguous.

In the quest for comprehensive drone discrimination, [11] featured a dataset housing Radar Cross-Section (RCS) data for six distinct drone models, acquired at various frequencies and aspect angles. Notably, this dataset underwent thorough preprocessing involving the calculation of Intrinsic Mode Functions via Empirical Mode Decomposition, resulting in a compact representation comprising 20 essential features. Roychowdhury and their team diligently applied an array of model types to the training dataset, encompassing Ensemble, K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Naive Bayes, Recurrent Neural Networks (RNN), Support Vector Machines (SVM), and Decision Trees.

Evaluative insights unveiled the Ensemble Model as the top performer, yielding an accuracy of 87.6543%. SVM closely followed, showcasing robust performance with an accuracy of 83.3333%. Importantly, as the noise levels escalated, the average accuracy of these two models remained within the commendable range of 75% to 90%. In summary, the Ensemble Model and SVM emerged as the standout choices among the ensemble of models considered for drone classification.

2.2. Visual-based Methods

In the realm of visual-based methods, Dadrass Javan and colleagues [12] introduced a novel approach that leveraged a modified YOLOv4 deep learning network to detect both avian species and four distinct drone types. Their experimentation employed a publicly available drone and bird dataset comprising 26,000 visible images. The dataset underwent meticulous preprocessing and annotation, facilitated by the Computer Vision Annotation Tool (CVAT), to select optimal bounding rectangles around the drones. Notably, the dataset was stratified into five distinct classes: multirotors, helicopters, fixed-wing aircraft, VTOL (vertical takeoff and landing) aircraft, and birds.

Following an intensive training regimen spanning 30,000 iterations, the authors compared the performance of their modified YOLOv4 implementation against the baseline YOLOv4 model. The modified approach achieved a loss of 0.58, while the basic YOLOv4 model registered a slightly higher loss at 0.68. Impressively, the modified model yielded superior results, boasting an accuracy of 83%, a mean Average Precision (mAP) of 83%, and Intersection over Union (IoU) score of 84%, outpacing the basic model by 4%. These findings underscore the efficacy of the revised model for drone and bird recognition.

In a complementary exploration, Kabir M. S. and colleagues [13] investigated three distinct single-shot detectors built upon the YOLOv4, YOLOv5, and DETR architectures. Their research introduced a dedicated drone dataset of 2,000 images, standardized to a resolution of 416×416 pixels and meticulously annotated. The YOLOv4-based model exhibited an Average Precision (AP) of 0.94 with an average Intersection over Union (avg. IoU) of 0.80, while the YOLOv5-based model excelled, achieving the highest AP of 0.99 alongside an avg. IoU of 0.84. In contrast, the DETR-based model yielded a relatively lower AP of 0.89, accompanied by an avg. IoU of 0.77. Evidently, the YOLOv5-based model emerged as the top performer in this comparison, showcasing outstanding drone detection capabilities with an average precision exceeding 89%.
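As a hedged illustration of how such a single-shot detector can be exercised (not the exact models trained in [13]), the snippet below loads a pretrained YOLOv5s checkpoint through the public torch.hub interface; the image path "drone.jpg" is a hypothetical placeholder, and a COCO-pretrained checkpoint would still need fine-tuning on a drone dataset before it could recognize drones specifically:

```python
import torch

# Load a COCO-pretrained YOLOv5s model from the Ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on an image (path, URL, PIL image, or numpy array).
# "drone.jpg" is a hypothetical placeholder for a test photograph.
results = model("drone.jpg")
results.print()           # summary: classes, confidences, counts
boxes = results.xyxy[0]   # tensor of (x1, y1, x2, y2, conf, class)
```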

Shifting the focus to another study [14], a drone-centric dataset encompassing a comprehensive assemblage of 10,000 images, including multirotors, helicopters, and avian specimens, played a pivotal role. The dataset was thoughtfully partitioned, with 70% of the images reserved for training, and the remainder allocated for validation. Samadzadegan F. and co-authors pioneered a novel deep learning methodology designed for efficient drone detection and classification, specifically targeting two drone types while distinguishing them from avian subjects. Their model exhibited commendable accuracy, successfully discerning multirotors with 76% precision, helicopters with 86% precision, and birds with a remarkable 90% precision. In summation, this model emerged as an adept solution for the intricate tasks of drone detection and recognition.

In the domain of drone detection and payload identification, Ajakwe S.O. and colleagues [15] introduced an innovative approach characterized by the utilization of two distinct manually generated datasets. These datasets collectively comprised 5,460 drone samples, painstakingly captured under varying conditions, encompassing different drone types, heights, and operational scenarios. Of particular significance, 1,790 of these drone samples were acquired in conjunction with their associated payloads. The research culminated in the formulation of a vision-based multitasking anti-drone framework for the comprehensive detection of drones. Remarkably, the model achieved an exceptional 99.6% accuracy in multi-UAV detection, coupled with a 99.80% sensitivity in recognizing attached objects. The F1 score, standing at 99.69%, further validated the model's prowess. In essence, this model represents a highly effective solution characterized by minimal errors and low computational complexity, ideally suited for the rigorous demands of drone detection applications.

2.3. Acoustic-based Methods

In the realm of acoustic-based methods, a comprehensive drone dataset was assembled at a site in South Australia [16], using 49 ECM800 10 mV/Pa condenser microphones to capture the sound field at a sampling rate of 44.1 kHz. Concurrently, sensors affixed to the drones logged critical data points, including GPS coordinates and local meteorological conditions, at a rate of 1 Hz. The dataset is segmented into three categories: DJI Matrice 600, Skywalker X-8, and DJI Mavic Air. Harnessing this dataset, Fang et al. explored bio-inspired signal processing techniques, incorporating both narrowband and broadband time-frequency processing methods for acoustic drone detection.

The narrowband technique was tailored for deployment exclusively with the DJI Matrice 600, resulting in a notable 33% augmentation in its maximum detection range. Meanwhile, the utilization of the broadband technique, amenable to all drone types, significantly bolstered the maximum detection range by 48.6%, 33.7%, and 30.2%, respectively, for the DJI Matrice 600, Skywalker X-8, and DJI Mavic Air. Collectively, these empirical findings underscore the substantial enhancement in acoustic UAV detection ranges achievable through biologically inspired signal processing methodologies.

In a distinct avenue of inquiry [17], a diverse dataset encompassing multirotor UAVs and corresponding background audio was curated, drawing upon a combination of online sources and real-world data collection setups. The audio content was standardized to mono-channel, 16-bit resolution, and a uniform sampling frequency of 44.1 kHz. The dataset was segregated into two subsets: a training audio dataset combining artificial background audio with drone audio featuring various multirotor UAV sounds, and an unseen audio testing dataset. Casabianca and Zhang introduced three distinct neural network models engineered to analyze mel-spectrograms and deliver comparative results.

Through a comprehensive evaluation process, it was discerned that the CRNN and CNN models exhibited superior performance, with the RNN models lagging behind. Furthermore, the authors explored late fusion models, identifying the weighted soft vote CNN model as the most adept among the four integrated models. Taken collectively, the performance results underscored the efficacy of the CNN model for acoustic UAV detection applications, yielding an average accuracy of 94.7%. Notably, the CNN model operated adeptly as a solo performer, recording an average accuracy of 93.0%. These findings accentuate the suitability of the CNN model for acoustic UAV detection.
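As a minimal sketch of this kind of pipeline (the exact architectures in [17] are not reproduced here; the layer sizes, mel resolution, and binary output below are assumptions for illustration), a log-mel spectrogram can be computed with librosa and classified with a small Keras CNN:

```python
import numpy as np
import librosa
import tensorflow as tf

def audio_to_logmel(y: np.ndarray, sr: int = 44100) -> np.ndarray:
    """Mono audio clip -> log-mel spectrogram shaped for a 2-D CNN."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel)[..., np.newaxis]  # (64, frames, 1)

# A small binary drone-vs-background CNN; all sizes are illustrative.
# 173 time frames corresponds to roughly 2 s of audio at 44.1 kHz
# with librosa's default hop length of 512 samples.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 173, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```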

Turning attention to another investigative endeavor [18], the study leveraged a drone-centric dataset captured in 10-second intervals, sampled at a rate of 44,100 Hz. This dataset comprised 68,931 frames of drone sounds and an additional 41,958 frames containing non-drone acoustic signatures. Seo et al. introduced a CNN-based model tailored specifically for drone detection. In experiments involving hovering drones, including the DJI Phantom 3 and Phantom 4, the 100-epoch model demonstrated a remarkable detection rate of 98.97% and a false alarm rate of 1.28%. Even the 10-epoch model performed well, registering a detection rate of 98.77% and a false alarm rate of 1.62%. These outcomes underscore the model's robust capabilities in acoustic drone detection, particularly in scenarios featuring hovering drones.

2.4. RF-based Methods

In the realm of RF-based methods, researchers in [19] embarked on an exploration employing the Bird-Vs-Drone dataset as their experimental foundation. This dataset encompassed a total of 2,727 frames at a resolution of 1920 × 1080 pixels, derived from five distinct MPEG4-encoded videos captured at different time intervals. Saqib et al. undertook a comprehensive assessment of various object detectors specifically tailored for drone detection, with training carried out on an Nvidia Quadro P6000 GPU using a learning rate of 0.0001 and a batch size of 64.

The outcome of this diligent experimentation revealed that VGG16 excelled, achieving a mean Average Precision (mAP) score of 0.66 at the 80,000th iteration. In comparison, the ZF model attained a mAP score of 0.61 at the 100,000th iteration. Ultimately, the findings underscored the superior performance of VGG16 on the training dataset within the context of drone detection.

Another notable contribution to RF-based methodology [20] harnessed the open-source DroneRF dataset. This dataset encompassed signals of 0.25-second duration recorded at a sampling rate of 40 MHz. Comprising 227 low-band drone signals and an equal number of high-band drone signals, it meticulously captured various operating modes of drones, constituting a wealth of valuable data. Kılıç et al. introduced an innovative approach that leverages well-established spectrum-based audio features, including Power Spectral Density (PSD), Mel-Frequency Cepstral Coefficients (MFCC), and Linear Frequency Cepstral Coefficients (LFCC), in SVM-based machine learning algorithms.

The empirical results vividly demonstrated the effectiveness of the proposed method: for the 2-class problem, features based on PSD, MFCC, and LFCC achieved a remarkable accuracy of 100%; for the 4-class problem, features based on MFCC and LFCC achieved a robust 98.67%; and for the 10-class problem, LFCC-based features reached 95.15%. Collectively, these outcomes underscore the strong performance of the proposed approach within the RF-based drone detection domain.
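A minimal sketch of this feature-plus-SVM recipe, using librosa's MFCC implementation and scikit-learn (the clips and labels below are synthetic placeholders, and the PSD/LFCC branches are omitted for brevity):

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_vector(y: np.ndarray, sr: int) -> np.ndarray:
    """Average 20 MFCCs over time to get one fixed-length vector per clip."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

# Synthetic stand-ins: 40 one-second clips with alternating labels.
rng = np.random.default_rng(0)
X = np.vstack([mfcc_vector(rng.standard_normal(44100), 44100) for _ in range(40)])
y = np.array([0, 1] * 20)

# 5-fold cross-validated accuracy of an RBF-kernel SVM on the features.
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```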

In a parallel exploration [21], researchers introduced a bespoke drone acoustic sample dataset featuring recorded propeller noise from two commercially available drones, presented in mono WAV format and sampled at 44.1 kHz. This dataset encompassed a total of 1,332 acoustic samples spanning the positive and negative classes. Salman et al. selected and scrutinized five distinct features for audio drone detection: mel-frequency cepstral coefficients, gammatone cepstral coefficients, linear prediction coefficients, spectral roll-off, and zero-crossing rate.

The experimental findings unequivocally positioned gammatone cepstral coefficients as the most effective feature for audio drone detection. Remarkably, a medium Gaussian SVM model, trained on the complete set of study features, yielded remarkable results, boasting a classification accuracy of 99.9%, a recall rate of 99.8%, and an overall accuracy of 100%. These exceptional metrics firmly established the model as an exemplar in the domain of audio drone detection, surpassing even the most advanced existing methods in the field.

2.5. Multimodal Methods

An open-source drone dataset was used in [22], collected using two RF receivers with the drones in four operating modes. The dataset was divided into 227 fragments, each consisting of two equally sized sections of one million samples each, for a total of 454 record files. Akter et al. proposed a multi-task learning (MTL) neural network for drone detection. The model reached 100% accuracy when separating the UAV signal from interference, and 96.70% accuracy for UAV type recognition; when the four operating modes were also considered, its accuracy was 74.72%. Compared with three competing deep learning models, the proposed multi-task model outperformed these CNN models. In summary, combining radio frequency signals with CNNs is a successful and feasible approach to UAV detection and identification.
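A hedged sketch of a two-head multi-task network in this spirit (not Akter et al.'s published architecture; the convolutional trunk, head sizes, and loss weights are assumptions) could look as follows in Keras:

```python
import tensorflow as tf

# Shared 1-D convolutional trunk over a 2048-point RF spectrum, with
# two task heads: drone presence (binary) and drone type (4 classes).
inputs = tf.keras.Input(shape=(2048, 1))
x = tf.keras.layers.Conv1D(32, 7, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(4)(x)
x = tf.keras.layers.Conv1D(64, 5, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)

presence = tf.keras.layers.Dense(1, activation="sigmoid", name="presence")(x)
drone_type = tf.keras.layers.Dense(4, activation="softmax", name="drone_type")(x)

model = tf.keras.Model(inputs, [presence, drone_type])
model.compile(
    optimizer="adam",
    loss={"presence": "binary_crossentropy",
          "drone_type": "sparse_categorical_crossentropy"},
    loss_weights={"presence": 1.0, "drone_type": 1.0},  # assumed equal weighting
)
```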

In an effort to bolster the arsenal of detection methodologies, [23] introduced a comprehensive dataset encompassing both image and audio samples for a diverse range of objects, including birds, airplanes, kites, balloons, and drones. This dataset combined 217 audio samples, proportioned with a 70% allocation for training and 30% for testing, alongside 506 images stratified into five distinct object classes with a parallel 70%/30% training/testing distribution. Jamil et al. leveraged a fusion of handcrafted descriptors and Convolutional Neural Networks (CNNs) to detect potentially malicious Unmanned Aerial Vehicles (UAVs) from the image samples. Furthermore, they employed Mel-Frequency Cepstral Coefficients (MFCC) and Linear Prediction Cepstral Coefficients (LPCC) for the identification of malevolent UAVs from the audio dataset.

Intriguingly, the findings illuminated the limited efficiency of handcrafted descriptors in the realm of malicious UAV detection, achieving a maximum accuracy of 82.7%. However, the application of AlexNet, fortified by a linear or polynomial kernel for SVM classification, yielded the highest accuracy at 97.4%. Notably, the deployment of MFCC emerged as a remarkably effective tool for UAV detection, especially when harnessed in conjunction with a Gaussian kernel for SVM. The amalgamation of MFCC and AlexNet features culminated in a remarkable accuracy of 98.5%. In summation, the proposed hybrid model effectively showcased its prowess in the realm of malicious UAV detection.

Expanding the horizon to encompass both visual and acoustic domains, [24] capitalized on a multifaceted dataset captured across three distinct airports during daylight hours. This dataset amalgamated 90 audio clips and 650 videos, culminating in a rich repository comprising 203,328 meticulously annotated images. Svanström et al. ingeniously fashioned a drone detection system harnessing a gamut of machine learning techniques, standard video and audio sensors, and thermal infrared cameras, designed to identify drones with precision.

Upon rigorous evaluation of the dataset, the infrared detector demonstrated a commendable F1 score of 0.7601, while the audio classifier outperformed, attaining an F1 score of 0.9323. It is noteworthy, however, that the absence of publicly available datasets and efficient evaluation methodologies introduced inherent challenges in appraising the holistic performance of the entire detection system. In a broader context, the proposed system showcased a commendable degree of effectiveness, particularly in the discernment of diminutive objects.

3. Dataset Description

The drone and its controller communicate over a WiFi channel; therefore, scanning the frequency band in which WiFi operates captures the relevant radio frequency signals. For signal acquisition, two NI USRP-2943R RF receivers (40 MHz bandwidth each) were used as the sampling devices, with LabVIEW as the signal acquisition software. The sampling time was 5.25 s when a UAV was present and 10.25 s when no UAV was present. The raw data volume was approximately 40 GB, comprising 227 recorded segments, and supports detection and classification problems at different levels of granularity (2-class, 4-class, and 10-class). In the signal-processing step, the discrete Fourier transform is applied directly to the sampled points for feature extraction (2048 frequency points), reducing the data volume to 482 MB.
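A minimal sketch of this feature-extraction step (the paper used MATLAB's fft; NumPy is substituted here, and the windowing-and-averaging scheme is an assumption for illustration):

```python
import numpy as np

def rf_segment_features(samples: np.ndarray, n_fft: int = 2048) -> np.ndarray:
    """Reduce one raw RF segment to a 2048-point magnitude spectrum.

    samples : 1-D array of raw RF samples from one recorded segment.
    Returns the magnitude spectrum averaged over all full n_fft windows.
    """
    n_windows = len(samples) // n_fft
    # Reshape into (n_windows, n_fft), dropping the ragged tail.
    frames = samples[: n_windows * n_fft].reshape(n_windows, n_fft)
    # DFT magnitude per window, then average across windows so each
    # segment becomes a single 2048-dimensional feature vector.
    spectra = np.abs(np.fft.fft(frames, axis=1))
    return spectra.mean(axis=0)

# Random data stands in for a real capture, just to show the shapes.
segment = np.random.randn(1_000_000)
features = rf_segment_features(segment)
print(features.shape)  # (2048,)
```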

4. Methods

The feature-extraction step was implemented using the fft function in MATLAB. The machine learning models were implemented in Python using the scikit-learn and TensorFlow packages. We evaluated the performance of each model as the average accuracy over 10-fold cross-validation, and the optimal hyperparameters were grid-searched over the ranges shown in Table 1; a sketch of this procedure follows the table.

Table 1. Hyperparameters for different models used in this study.

| Model | Parameter | Value Range |
|---|---|---|
| XGBoost | max_depth | [5, 10, 15, 20, 25] |
| XGBoost | n_estimators | [20, 50, 100, 200] |
| AdaBoost | max_depth | [5, 10, 15, 20, 25] |
| AdaBoost | n_estimators | [20, 50, 100, 200] |
| Decision Tree | max_depth | [5, 10, 15, 20, 25] |
| Random Forest | max_depth | [5, 10, 15, 20, 25] |
| Random Forest | n_estimators | [20, 50, 100, 200] |
| k-Nearest Neighbors | n_neighbors | [3, 5, 10] |
| Deep Neural Network | hidden_layer_sizes | [(50,50,50), (50,100,50), (100,)] |
| Deep Neural Network | activation function | ['tanh', 'relu'] |
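As referenced above, the following is a minimal sketch of the grid search with 10-fold cross-validation for one of the models (XGBoost, using the Table 1 ranges); the feature matrix and labels are random placeholders standing in for the 2048-point DFT features and the class labels:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

# Placeholders for the real data: 454 segments x 2048 DFT features,
# with hypothetical 4-class labels.
X = np.random.rand(454, 2048)
y = np.random.randint(0, 4, size=454)

# Hyperparameter ranges taken from Table 1.
param_grid = {
    "max_depth": [5, 10, 15, 20, 25],
    "n_estimators": [20, 50, 100, 200],
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
search = GridSearchCV(XGBClassifier(), param_grid, cv=cv, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The same loop applies to the other models, swapping in the corresponding estimator and its Table 1 parameter grid.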

5. Results

The accuracy results for each model are summarized in Table 2. For the 2-class problem of detecting the presence of drones, all of the models we used achieved satisfactory performance, with accuracies above 99%. However, for the 4-class problem of classifying the existence and type of drone, and the 10-class problem of classifying the existence, type, and flight mode, the accuracy decreased significantly, leaving considerable room for improvement in further research.

At present, XGBoost achieves state-of-the-art results on this benchmark dataset, with 99.96% accuracy on the 2-class problem, 92.31% on the 4-class problem, and 74.81% on the 10-class problem. Because the DNN used here is not especially deep, it is worth investigating in future work whether deeper and larger neural networks can outperform traditional machine learning techniques, especially XGBoost; a sketch of such a deeper network follows Table 2.

Table 2. Experiment results for different models.

| Model | 2-class | 4-class | 10-class |
|---|---|---|---|
| XGBoost | 0.9996 | 0.9231 | 0.7481 |
| AdaBoost | 0.9992 | 0.8364 | 0.4981 |
| Decision Tree | 0.9985 | 0.8267 | 0.4329 |
| Random Forest | 0.9971 | 0.8678 | 0.4937 |
| k-Nearest Neighbors | 0.9954 | 0.8739 | 0.5819 |
| Deep Neural Network | 0.9979 | 0.8127 | 0.4872 |
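As noted above, one direction for future work is a deeper network than the small MLP of Table 1. A purely illustrative Keras sketch of such a model for the 10-class problem follows; the depth, layer widths, and dropout rates are assumptions rather than tuned values:

```python
import tensorflow as tf

# A deeper fully connected network over the 2048-point DFT features;
# illustrative only, not a tuned architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class problem
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```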

6. Conclusion

Based on an open RF signal dataset, we compared the drone detection and classification performance of different machine learning and deep learning models. Our experimental results show that the XGBoost model exhibits the best UAV detection and classification performance, achieving state-of-the-art results on this benchmark dataset: 99.96% accuracy on the 2-class problem, 92.31% on the 4-class problem, and 74.81% on the 10-class problem. Drone detection can be further considered in space-air-ground integrated networks, in which drones serve as relay nodes [25]. Distributed multi-agent learning [26] and crowd sensing [27] can also be applied to UAV detection and classification when data are collected collaboratively and the detection model is trained in a distributed manner.


References

[1]. Maamar Z, Kajan E, Asim M, et al. Open challenges in vetting the internet‐of‐things[J]. Internet Technology Letters, 2019, 2(5): e129.

[2]. Pugliese R, Regondi S, Marini R. Machine learning-based approach: Global trends, research directions, and regulatory standpoints[J]. Data Science and Management, 2021, 4: 19-29.

[3]. Zhao M, Zhang Y. GAN‐based deep neural networks for graph representation learning[J]. Engineering Reports, 2022, 4(11): e12517.

[4]. Chen X, Li H, Li C, et al. Single Image Dehazing Based on Sky Area Segmentation and Image Fusion[J]. IEICE TRANSACTIONS on Information and Systems, 2023, 106(7): 1249-1253.

[5]. Zheng Y, Jiang W. Evaluation of vision transformers for traffic sign classification[J]. Wireless Communications and Mobile Computing, 2022, 2022.

[6]. Jiang W. Cellular traffic prediction with machine learning: A survey[J]. Expert Systems with Applications, 2022, 201: 117163.

[7]. Jiang W. Graph-based deep learning for communication networks: A survey[J]. Computer Communications, 2022, 185: 40-54.

[8]. Dale H, Baker C, Antoniou M, et al. A Comparison of Convolutional Neural Networks for Low SNR Radar Classification of Drones[C]//2021 IEEE Radar Conference (RadarConf21). IEEE, 2021: 1-5.

[9]. Akter R, Doan V S, Lee J M, et al. CNN-SSDI: Convolution neural network inspired surveillance system for UAVs detection and identification[J]. Computer Networks, 2021, 201: 108519.

[10]. Raval D, Hunter E, Hudson S, et al. Convolutional Neural Networks for Classification of Drones Using Radars[J]. Drones, 2021, 5(4): 149.

[11]. Roychowdhury S, Ghosh D. Machine Learning Based Classification of Radar Signatures of Drones[C]//2021 2nd International Conference on Range Technology (ICORT). IEEE, 2021: 1-5.

[12]. Dadrass Javan F, Samadzadegan F, Gholamshahi M, et al. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition[J]. Drones, 2022, 6(7): 160.

[13]. Kabir M S, Ndukwe I K, Awan E Z S. Deep Learning Inspired Vision based Frameworks for Drone Detection[C]//2021 International Conference on Electrical, Communication, and Computer Engineering (ICECCE). IEEE, 2021: 1-5.

[14]. Samadzadegan F, Dadrass Javan F, Ashtari Mahini F, et al. Detection and Recognition of Drones Based on a Deep Convolutional Neural Network Using Visible Imagery[J]. Aerospace, 2022, 9(1): 31.

[15]. Ajakwe S O, Ihekoronye V U, Kim D S, et al. DRONET: Multi-Tasking Framework for Real-Time Industrial Facility Aerial Surveillance and Safety[J]. Drones, 2022, 6(2): 46.

[16]. Fang J, Finn A, Wyber R, et al. Acoustic detection of unmanned aerial vehicles using biologically inspired vision processing[J]. The Journal of the Acoustical Society of America, 2022, 151(2): 968-981.

[17]. Casabianca P, Zhang Y. Acoustic-Based UAV Detection Using Late Fusion of Deep Neural Networks[J]. Drones, 2021, 5(3): 54.

[18]. Seo Y, Jang B, Im S. Drone detection using convolutional neural networks with acoustic stft features[C]//2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2018: 1-6.

[19]. Saqib M, Khan S D, Sharma N, et al. A study on detecting drones using deep convolutional neural networks[C]//2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2017: 1-5.

[20]. Kılıç R, Kumbasar N, Oral E A, et al. Drone classification using RF signal based spectral features[J]. Engineering Science and Technology, an International Journal, 2021.

[21]. Salman S, Mir J, Farooq M T, et al. Machine learning inspired efficient audio drone detection using acoustic features[C]//2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST). IEEE, 2021: 335-339.

[22]. Akter R, Doan V S, Zainudin A, et al. An Explainable Multi-Task Learning Approach for RF-based UAV Surveillance Systems[C]//2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN). IEEE, 2022: 145-149.

[23]. Jamil S, Rahman M U, Ullah A, et al. Malicious UAV detection using integrated audio and visual features for public safety applications[J]. Sensors, 2020, 20(14): 3923.

[24]. Svanström F, Englund C, Alonso-Fernandez F. Real-Time Drone Detection and Tracking With Visible, Thermal and Acoustic Sensors[C]//2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021: 7265-7272.

[25]. Jiang W. Software defined satellite networks: A survey[J]. Digital Communications and Networks, 2023.

[26]. Jiang W, He M, Gu W. Internet Traffic Prediction with Distributed Multi-Agent Learning[J]. Applied System Innovation, 2022, 5(6): 121.

[27]. Jiang W. PhD Forum Abstract: Crowd Sensing with Execution Uncertainty[C]//2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN). IEEE, 2017: 251-252.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

