GNSS Spoofing Detection Based on Wavelets and Machine Learning
Abstract
1. Introduction
- The application of different types of wavelets, including db4 and db8, with one-level decomposition of images obtained using the discrete wavelet transform, on the Oak Ridge Spoofing and Interference Test Battery (OAKBAT) GPS and Galileo datasets in static scenarios for GNSS spoofing detection.
- The detection of fake signals and classification of signal type using an integrated approach based on the discrete wavelet transform and machine learning models.
- The comparison of the performance of machine learning models using different types of input classification data.
- Reducing the computational complexity of the proposed approach by using pre-extracted classification input data.
2. Related Works
3. Methodology
3.1. Proposed Approach
3.2. Implementation Details and OAKBAT Datasets
3.3. Discrete Wavelet Transform
- Decomposing the signal into various frequency components, enabling the extraction of features that may indicate a spoofing attack.
- Classifying signals as either authentic or fake using machine learning methods.
- Its computational efficiency makes the DWT suitable for real-time processing where immediate detection is critical, such as in GNSS receivers.
- The admissibility condition means that the wavelet function ψ(t) must have zero mean: ∫ψ(t) dt = 0 over (−∞, ∞).
- The normalization condition means that the wavelet function has finite (unit) energy: ∫|ψ(t)|² dt = 1 over (−∞, ∞). A minimal sketch of the one-level decomposition follows this list.
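To make the decomposition concrete, here is a minimal Python sketch using NumPy and PyWavelets; the file name, int16 I/Q interleaving, and matrix size n are illustrative assumptions, not the exact OAKBAT recording parameters.

```python
import numpy as np
import pywt

# Hypothetical parameters: int16 interleaved I/Q samples, n*n samples per image.
n = 512
raw = np.fromfile("clean_static.bin", dtype=np.int16, count=2 * n * n)  # placeholder file
signal = raw[0::2].astype(np.float64) + 1j * raw[1::2]   # complex baseband signal

# Reshape the 1D complex signal into an n-by-n matrix; use the magnitude as an image.
img = np.abs(signal.reshape(n, n))

# One-level 2D DWT with a Daubechies wavelet (db4 or db8 in the paper).
A1, (H1, V1, D1) = pywt.dwt2(img, "db4")   # approximation + detail sub-bands

# Sanity check of the zero-mean (admissibility) condition for db4.
phi, psi, x = pywt.Wavelet("db4").wavefun(level=10)
print(np.sum(psi) * (x[1] - x[0]))         # Riemann sum of psi: approximately 0
```

The approximation sub-band A1 corresponds to the A1img image referred to in the computational-complexity discussion below.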
3.4. Application of Machine Learning Models for Spoofing Attack Detection
3.4.1. Support Vector Machine (SVM)
3.4.2. k-Nearest Neighbors (KNN)
- Select k, which defines how many neighbors will be checked to determine the class of a particular data point. For example, if k = 1, the instance is assigned to the same class as its single nearest neighbor.
- Determine the distance metric. To determine which data points are closest to the query point, the distance between the query point and the other data points must be calculated. These distance metrics help form decision boundaries, which divide the query points into different regions. Given a new data point, the algorithm calculates the distance between it and all points in the training dataset to find its nearest neighbors.
- Based on the calculated distances, the k-nearest neighbors are selected, i.e., the k training points closest to the new data point. In the case of binary classification, the decision boundary separates the two classes, while in multi-class classification it separates the different classes.
- Assign a class to the new data point. After finding the k-nearest neighbors, the algorithm assigns the new data point to the class that is most common among its k closest neighbors; in other words, the majority class of the k-nearest neighbors is used as the predicted class.
- The algorithm repeats the previous steps for each new data point; a minimal sketch of these steps follows the list.
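A minimal, self-contained sketch of these steps in Python (NumPy only); the toy feature vectors and class labels are invented for illustration.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Steps 2-4: distances to all training points, pick k nearest, majority vote."""
    dists = np.linalg.norm(X_train - x_new, axis=1)       # Euclidean distance metric
    nearest = np.argsort(dists)[:k]                       # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0] # most common class wins

# Toy two-feature example: class 0 = authentic, class 1 = spoofed (labels invented).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.85, 0.85]), k=3))     # -> 1
```

With k = 1 the call reduces to the nearest-neighbor rule mentioned in the first step.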
3.4.3. Random Forest (RF)
- A subset of n samples is randomly drawn from the original dataset with replacement, which means that some samples may appear multiple times while others may be omitted entirely. In this way, a so-called bootstrap sample is formed and used to train one decision tree, thus reducing overfitting (each tree sees only part of the data) and increasing the diversity of the model (each tree learns from a different sample). This process is repeated for each tree in the model.
- A random subset of m features is selected from a total set of p features. This helps to reduce the correlation between the trees.
- A decision tree is built for each bootstrap sample and its selected features using a specific splitting criterion (the previously mentioned Gini impurity or information gain). The process is repeated until a predetermined stopping criterion is met (e.g., maximum tree depth or minimum number of samples per leaf).
- The final prediction (class) of the model is obtained by aggregating the predictions of all the decision trees, for classification typically as the majority vote ŷ = mode{h₁(x), …, h_T(x)}, where h_t(x) is the class predicted by the t-th tree (see the sketch after this list).
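The same procedure is what off-the-shelf implementations automate. A sketch with scikit-learn, assuming a pre-extracted feature matrix; the data are synthetic placeholders, and the hyperparameters are illustrative rather than the authors' exact settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))                  # stand-in for extracted feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy labels: 0 = authentic, 1 = spoofed

# bootstrap=True resamples the training set with replacement for each tree;
# max_features="sqrt" draws a random subset of m ~ sqrt(p) features per split;
# n_estimators mirrors the RF-100/RF-200 variants evaluated in the paper.
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                             bootstrap=True, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))                        # majority vote over the 100 trees
```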
3.5. Model Performance Parameters
- A confusion matrix, also known as an error matrix, is a special type of table that allows the performance of an algorithm to be visualized. Each row of the matrix represents the samples in the actual class, and each column represents the samples in the predicted class. The confusion matrix consists of four fields: true positive (TP), true negative (TN), false positive (FP), and false negative (FN) [47].
- Accuracy is defined as the number of correctly predicted samples divided by the total number of predicted samples and is calculated as Accuracy = (TP + TN)/(TP + TN + FP + FN).
- Precision or positive predictive value is the number of true positive samples divided by the total number of samples predicted as positive: P = TP/(TP + FP). The value of this parameter is between 0 and 1. Low precision (P < 0.5) means that the classifier produces a large number of false positive samples, which may be the result of an unbalanced class distribution or unadjusted model hyperparameters. When the precision value is close to 1, nearly all samples the model labels as positive are truly positive, i.e., the model produces almost no false positives.
- The recall, also called sensitivity or true positive rate, gives the percentage of positive samples correctly predicted by a particular model: R = TP/(TP + FN). In other words, it gives the proportion of real positive samples that the model has correctly classified. A low recall (R < 0.5) means that the classifier produces a large number of false negative samples, which may be the result of an unbalanced class distribution or unadjusted model hyperparameters. Ideally, R = 1, i.e., all positive samples are marked as such by the classifier. In the ideal case, all positively classified samples are really positive (P = 1) and, conversely, all positive samples are also classified as positive (R = 1).
- The F1-score is a key parameter in evaluating the performance of classification models. It balances precision and recall, two directly opposed measures (high recall often comes at the cost of low precision and vice versa), providing a single score that reflects both false positives and false negatives. It is the harmonic mean of precision and recall: F1 = 2PR/(P + R). F1-scores range from the best value of 1 to the worst value of 0. For example, if P ≈ 0.7 and R ≈ 0.1, the F1-score is 0.178, while the arithmetic mean would be 0.4; the harmonic mean is therefore the stricter criterion when evaluating classifiers (a short numeric check follows the list).
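A short numeric sketch of these definitions (NumPy; the example labels are invented). It also reproduces the harmonic- vs. arithmetic-mean comparison above.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix fields and the derived metrics defined above."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
print(binary_metrics(y_true, y_pred))    # (0.75, 0.75, 0.75, 0.75)

# Harmonic vs. arithmetic mean for strongly unbalanced precision/recall:
p, r = 0.7, 0.1
print(2 * p * r / (p + r), (p + r) / 2)  # ~0.18 vs. 0.4
```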
3.6. Features Used for Classification
- The mean (μ) represents the arithmetic mean of all the values in the dataset (in this case, the approximation coefficients A). It provides information about the “center” of the data distribution. It is defined as μ = (1/N) ∑ᵢ Aᵢ, for i = 1, …, N.
- The standard deviation (σ) measures the dispersion of the data around the mean; the higher the standard deviation, the farther the data lie from the mean. It is calculated as the square root of the variance: σ = √((1/N) ∑ᵢ (Aᵢ − μ)²).
- The variance (σ²) shows the average squared distance of the values from the mean. It provides insight into the range and variety of the data and is crucial for assessing noise or variability in the signal. It is calculated as σ² = (1/N) ∑ᵢ (Aᵢ − μ)².
- The maximum value represents the highest coefficient in the dataset. It is useful for detecting peaks in a signal that may indicate sudden changes.
- The minimum value represents the smallest coefficient in the dataset. It informs about the lower limit of the signal and can also indicate outliers or specific patterns.
- The kurtosis quantifies the “peakedness” of a data distribution. A high kurtosis means that most of the values are concentrated around the mean, with the presence of “heavy tails” (outliers), while a low kurtosis indicates a flatter, more even distribution. The kurtosis is calculated according to the equation Kurt = [(1/N) ∑ᵢ (Aᵢ − μ)⁴] / σ⁴.
- The skewness measures the asymmetry of a distribution. If the skewness is positive, the tail of the distribution is longer on the right (right-hand asymmetry), while a negative skewness indicates a longer tail on the left (left-hand asymmetry). The equation for the skewness is Skew = [(1/N) ∑ᵢ (Aᵢ − μ)³] / σ³.
- The spectral centroid (SC) indicates the dominant frequency, i.e., where the center of mass of the spectrum is located. It is the magnitude-weighted mean of the spectrum and is calculated as SC = ∑ₖ fₖ |X(k)| / ∑ₖ |X(k)|, where |X(k)| is the spectral magnitude in bin k at frequency fₖ.
- The spectral spread (SS) is the second central moment of the spectrum. It is the standard deviation around the spectral centroid: SS = √(∑ₖ (fₖ − SC)² |X(k)| / ∑ₖ |X(k)|).
- The spectral entropy (SE) is a measure of the disorder or randomness in the spectral power distribution of a signal. It is calculated using the Shannon entropy formula SE = −∑ₖ pₖ log₂ pₖ, where pₖ = |X(k)|² / ∑ⱼ |X(j)|² is the normalized spectral power in bin k.
- Mean histogram value is the mean value of all pixel intensities in a given color channel (e.g., red, green, blue). It is called mean histogram value because the mean is calculated by using a histogram as the distribution. It provides information about the overall brightness of the image—lighter images have larger average values, while darker images have smaller average values.
- The standard deviation of the histogram measures the dispersion of pixel intensity values from the mean. A large standard deviation indicates images with high contrast or large differences in brightness between pixels, while a small deviation indicates a more even distribution of brightness.
- The mean gradient magnitude represents the average strength of the edges of the image, i.e., how quickly the intensity changes between adjacent pixels. Higher values indicate images with pronounced edges and details.
- The standard deviation of the gradient shows the variation in the strength of the edges. A large variation means that the image contains a variety of textures, from smooth areas to areas with pronounced details. The following features are Gray Level Co-occurrence Matrix (GLCM) texture features; the GLCM considers the relationships between neighboring pixels, providing rich texture information.
- Contrast (C) measures the difference between the intensity of adjacent pixels. High contrast indicates coarser textures, while low contrast indicates smoother or more uniform textures.
- Energy represents the uniformity of texture. Higher energy values indicate that the image has regular or repeating patterns, while lower values indicate more random pixel arrangements.
- Homogeneity measures the similarity of intensity between adjacent pixels. High homogeneity indicates smooth textures without large differences in intensity. A sketch of how the coefficient and image features above could be extracted follows this list.
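A sketch of how these two feature groups could be computed, assuming SciPy and scikit-image are available; the FFT-based frequency axis, the Pearson (non-excess) kurtosis convention, and the GLCM distance/angle settings are assumptions for illustration, not necessarily the authors' exact choices.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops

def coefficient_features(A):
    """Statistical and spectral features of the DWT approximation coefficients."""
    a = np.asarray(A, dtype=np.float64).ravel()
    stats = [a.mean(), a.std(), a.var(), a.max(), a.min(),
             kurtosis(a, fisher=False), skew(a)]
    mag = np.abs(np.fft.rfft(a))                  # magnitude spectrum
    f = np.fft.rfftfreq(a.size)                   # normalized frequency bins
    sc = np.sum(f * mag) / np.sum(mag)            # spectral centroid
    ss = np.sqrt(np.sum((f - sc) ** 2 * mag) / np.sum(mag))  # spectral spread
    p = mag ** 2 / np.sum(mag ** 2)               # normalized spectral power
    se = -np.sum(p * np.log2(p + 1e-12))          # spectral (Shannon) entropy
    return stats + [sc, ss, se]

def glcm_features(gray):
    """GLCM texture features: contrast, correlation, energy, homogeneity."""
    g8 = ((gray - gray.min()) / (np.ptp(gray) + 1e-12) * 255).astype(np.uint8)
    glcm = graycomatrix(g8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "energy", "homogeneity")]
```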
4. Results and Discussion
Computational Complexity
- Loading IQ data from the .bin file: O(S), where S is the total number of samples per iteration; equivalently O(D·P), where D is the number of samples in the observed particle (segment) of the signal and P is the number of particles into which the signal is divided per iteration.
- Conversion of the 1D complex signal (combined I and Q components) into a square 2D matrix of dimensions n×n: O(n²).
- Single one-level 2D DWT: O(n²).
- Extraction of statistical features from the first-level approximation coefficients: O(n²), i.e., linear in the number of coefficients.
- Extraction of spectral features via the FFT: O(N log N), where N is the number of coefficients transformed.
- Saving the features to a .csv file: O(F), where F is the (fixed) number of extracted feature values per iteration.
- Generating and saving the DWT image (A1img) in .tiff format: O(n²).
- Feature extraction from the .tiff image:
- Histogram per RGB channel: O(C·n²), where C is the number of channels of the image (three channels for RGB).
- Gradient magnitude: O(n²).
- Textural features: O(G·n²), where G is the number of GLCM metrics (e.g., contrast, correlation, energy, and homogeneity). A compact sketch of this per-iteration pipeline follows the list.
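To tie these costs together, a compact per-iteration sketch (Python with NumPy and PyWavelets); the file name and parameters are hypothetical, and the per-step costs in the comments restate the complexities assumed above.

```python
import numpy as np
import pywt

def process_iteration(path, n=512):
    """One iteration of the feature pipeline; comments give the assumed cost."""
    raw = np.fromfile(path, dtype=np.int16, count=2 * n * n)  # O(S): load S samples
    sig = raw[0::2].astype(np.float64) + 1j * raw[1::2]
    img = np.abs(sig.reshape(n, n))            # O(n^2): 1D signal -> n x n matrix
    A1, _ = pywt.dwt2(img, "db4")              # O(n^2): one-level 2D DWT
    mu, sigma = A1.mean(), A1.std()            # O(n^2): statistical features
    spec = np.abs(np.fft.rfft(A1.ravel()))     # O(N log N): FFT for spectral features
    return mu, sigma, spec
```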
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
AUC | Area Under Curve |
BOC | Binary Offset Carrier |
BoF | Bag-of-Features |
CNN | Convolutional Neural Networks |
DoA | Direction of Arrival |
DWT | Discrete Wavelet Transform |
FFT | Fast Fourier Transform |
FPR | False Positive Rate |
GLCM | Gray Level Co-Occurrence Matrix |
GNSS | Global Navigation Satellite System |
GPS | Global Positioning System |
I/Q | In-phase and Quadrature |
KNN | k-Nearest Neighbors |
ML | Machine Learning |
NMEA | National Marine Electronics Association |
NN | Neural Networks |
OAKBAT | Oak Ridge Spoofing and Interference Test Battery |
OOB | Out-of-Bag |
PR | Precision Recall |
RBF | Radial Basis Function |
RF | Random Forest |
RFF | Radio Frequency Fingerprinting |
RGB | Red, Green, and Blue |
ROC | Receiver Operating Characteristic |
SDR | Software-Defined Radio |
SQM | Signal Quality Monitoring |
SURF | Speeded Up Robust Features |
SVM | Support Vector Machine |
TEXBAT | Texas Spoofing Test Battery |
ToA | Time of Arrival |
TPR | True Positive Rate |
References
- Humphreys, T.E.; Ledvina, B.M.; Psiaki, M.L.; O’Hanlon, W.B.; Kintner, P.M. Assessing the Spoofing Threat: Development of a Portable GPS Civilian Spoofer. In Proceedings of the 21st International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS Conference), Savannah, GA, USA, 16–19 September 2008. [Google Scholar]
- Babić, K. Spoofing Signal Detection in Global Navigation Satellite System. Ph.D. Thesis, University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split, Croatia, 2025. [Google Scholar]
- Zhang, L.; Wang, L.; Wu, R.; Zhuang, X. A new approach for GNSS spoofing detection using power and signal quality monitoring. Meas. Sci. Technol. 2024, 35, 126109. [Google Scholar] [CrossRef]
- Lee, D.K.; Miralles, D.; Akos, D.; Konovaltsev, A.; Kurz, L.; Lo, S.; Nedelkov, F. Detection of GNSS Spoofing using NMEA Messages. In Proceedings of the European Navigation Conference (ENC), Dresden, Germany, 23–24 November 2020. [Google Scholar] [CrossRef]
- Truong, V.; Vervisch-Picois, A.; Rubio Hernan, J.; Samama, N. Characterization of the Ability of Low-Cost GNSS Receiver to Detect Spoofing Using Clock Bias. Sensors 2024, 23, 2735. [Google Scholar] [CrossRef] [PubMed]
- Yang, Q.; Chen, Y. A GPS Spoofing Detection Method Based on Compressed Sensing. In Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi’an, China, 25–27 October 2022. [Google Scholar] [CrossRef]
- Jafarnia-Jahromi, A.; Broumandan, A.; Nielsen, J.; Lachapelle, G. GPS vulnerability to spoofing threats and a review of antispoofing techniques. Int. J. Navig. Obs. 2012, 2012, 127072. [Google Scholar] [CrossRef]
- Lee, Y.S.; Yeom, J.S.; Jung, B.C. A Novel Array Antenna-Based GNSS Spoofing Detection and Mitigation Technique. In Proceedings of the IEEE 20th Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2023. [Google Scholar] [CrossRef]
- Li, J.; Li, W.; He, S.; Dai, Z.; Fu, Q. Research on Detection of Spoofing Signal with Small Delay Based on KNN. In Proceedings of the IEEE 3rd International Conference on Electronics Technology (ICET), Chengdu, China, 8–12 May 2020. [Google Scholar] [CrossRef]
- Yang, B.; Tian, M.; Ji, Y.; Cheng, J.; Xie, Z.; Shao, S. Research on GNSS Spoofing Mitigation Technology Based on Spoofing Correlation Peak Cancellation. IEEE Commun. Lett. 2022, 26, 3024–3028. [Google Scholar] [CrossRef]
- Meng, L.; Yang, L.; Yang, W.; Zhang, L. A Survey of GNSS Spoofing and Anti-Spoofing Technology. Remote Sens. 2022, 14, 4826. [Google Scholar] [CrossRef]
- Khoei, T.T.; Gasimova, A.; Ahajjam, M.A.; Shamaileh, K.A.; Devabhaktuni, V.; Kaabouch, N. A Comparative Analysis of Supervised and Unsupervised Models for Detecting GPS Spoofing Attack on UAVs. In Proceedings of the 2022 IEEE International Conference on Electro Information Technology (eIT), Minnesota State University, Mankato, MN, USA, 19–21 May 2022. [Google Scholar] [CrossRef]
- Shafique, A.; Mehmood, A.; Elhadef, M. Detecting Signal Spoofing Attack in UAVs Using Machine Learning Models. IEEE Access 2021, 9, 93803–93815. [Google Scholar] [CrossRef]
- Gallardo, F.; Yuste, A.P. SCER Spoofing Attacks on the Galileo Open Service and Machine Learning Techniques for End-User Protection. IEEE Access 2020, 8, 85515–85532. [Google Scholar] [CrossRef]
- Semanjski, S.; Semanjski, I.; De Wilde, W.; Muls, A. Use of Supervised Machine Learning for GNSS Signal Spoofing Detection with Validation on Real-World Meaconing and Spoofing Data—Part I. Sensors 2020, 20, 1171. [Google Scholar] [CrossRef]
- Chen, Z.; Li, J.; Li, J.; Zhu, X.; Li, C. GNSS Multiparameter Spoofing Detection Method Based on Support Vector Machine. IEEE Sens. J. 2022, 22, 17864–17874. [Google Scholar] [CrossRef]
- Elango, A.; Ujan, S.; Ruotsalainen, L. Disruptive GNSS Signal detection and classification at different Power levels Using Advanced Deep-Learning Approach. In Proceedings of the International Conference on Localization and GNSS (ICL-GNSS), Tampere, Finland, 7–9 June 2022. [Google Scholar] [CrossRef]
- Borhani-Darian, P.; Li, H.; Wu, P.; Pau, C. Detecting GNSS spoofing using deep learning. EURASIP J. Adv. Signal Process. 2024, 2024, 14. [Google Scholar] [CrossRef]
- Marchand, M.; Toumi, A.; Seco-Granados, G.; Lopez-Salcedo, J.A. Machine Learning Assessment of Anti-Spoofing Techniques for GNSS Receivers. In Proceedings of the WIPHAL 2023: Work-in-Progress in Hardware and Software for Location Computation, CEUR Workshop Proceedings, Castellon, Spain, 6–8 June 2023. [Google Scholar]
- Kuciapinski, K.S.; Temple, M.A.; Klein, R.W. ANOVA-based RF DNA analysis: Identifying significant parameters for device classification. In Proceedings of the International Conference on Wireless Information Networks and Systems (WINSYS), Athens, Greece, 26–28 July 2010. [Google Scholar]
- Danev, B.; Zanetti, D.; Capkun, S. On Physical-Layer Identification of Wireless Devices. ACM Comput. Surv. 2012, 45, 6. [Google Scholar] [CrossRef]
- Baldini, G.; Giuliani, R.; Steri, G.; Neisse, R. Physical layer authentication of Internet of Things wireless devices through permutation and dispersion entropy. In Proceedings of the Global Internet of Things Summit (GIoTS), Geneva, Switzerland, 6–9 June 2017. [Google Scholar]
- Baldini, G.; Gentile, C.; Giuliani, R.; Steri, G. Comparison of techniques for radiometric identification based on deep convolutional neural networks. Electron. Lett. 2019, 55, 62–114. [Google Scholar] [CrossRef]
- Fadul, M.K.M.; Reising, D.; Sartipi, M. Identification of OFDM-Based Radios Under Rayleigh Fading Using RF-DNA and Deep Learning. IEEE Access 2021, 9, 17100–17113. [Google Scholar] [CrossRef]
- Gahlawat, S. Investigation of RF Fingerprinting Approaches in GNSS. Ph.D. Thesis, Tampere University, Tampere, Finland, 2020. [Google Scholar]
- Wang, W.; Aguilar Sanchez, I.; Caparra, G.; McKeown, A.; Whitworth, T.; Lohan, E. A Survey of Spoofer Detection Techniques via Radio Frequency Fingerprinting with Focus on the GNSS Pre-Correlation Sampled Data. Sensors 2021, 21, 3012. [Google Scholar] [CrossRef]
- Morales-Ferre, R.; Wang, W.; Sanz-Abia, A.; Lohan, E.S. Identifying GNSS Signals Based on Their Radio Frequency (RF) Features—A Dataset with GNSS Raw Signals Based on Roof Antennas and Spectracom Generator. Data 2020, 5, 18. [Google Scholar] [CrossRef]
- Wang, W.; Lohan, E.S.; Sanchez, I.A.; Caparra, G. Pre-correlation and post-correlation RF fingerprinting methods for GNSS spoofer identification with real-field measurement data. In Proceedings of the 10th Workshop on Satellite Navigation Technology (NAVITEC), Noordwijk, The Netherlands, 4–8 April 2022. [Google Scholar] [CrossRef]
- Radoš, K.; Brkić, M.; Begušić, D. GNSS Signal Classification based on Machine Learning Methods. In Proceedings of the 47th MIPRO ICT and Electronics Convention (MIPRO), Opatija, Croatia, 20–24 May 2024. [Google Scholar] [CrossRef]
- Li, J.; Wu, H.; Gao, J.; Liu, F.; Zhang, Y.; Li, G. Performance Testing and Analysis of a New GNSS Spoofing Detection Method in Different Spoofing Scenarios. IEEE Access 2025, 13, 54779–54793. [Google Scholar] [CrossRef]
- Zhang, X.; Huang, Y.; Tian, Y.; Lin, M.; An, J. Noise-Like Features Assisted GNSS Spoofing Detection Based on Convolutional Autoencoder. IEEE Sens. J. 2023, 23, 25473–25486. [Google Scholar] [CrossRef]
- Humphreys, T.E.; Bhatti, J.A.; Shepard, D.P.; Wesson, K.D. The Texas Spoofing Test Battery: Toward a Standard for Evaluating GPS Signal Authentication Techniques. In Proceedings of the 25th International Technical Meeting of the Satellite Division of The Institute of Navigation, Nashville, TN, USA, 17–21 September 2012. [Google Scholar]
- Manfredini, E.; Dovis, F.; Motella, B. Validation of a signal quality monitoring technique over a set of spoofed scenarios. In Proceedings of the 7th ESA Workshop on Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing (NAVITEC), Noordwijk, The Netherlands, 3–5 December 2014. [Google Scholar] [CrossRef]
- Chengjun, G.; Zhongpei, Y. Robust RF Fingerprint Extraction Scheme for GNSS Spoofing Detection. In Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023), Denver, CO, USA, 11–15 September 2023. [Google Scholar] [CrossRef]
- Mehr, I.; Dovis, F. A deep neural network approach for classification of GNSS interference and jammer. IEEE Trans. Aerosp. Electron. Syst. 2024, 61, 1660–1676. [Google Scholar] [CrossRef]
- Albright, A.; Powers, S.; Bonior, J.; Combs, F. Oak Ridge Spoofing and Interference Test Battery (OAKBAT); Oak Ridge National Lab. (ORNL): Oak Ridge, TN, USA, 2020. [CrossRef]
- Mallat, S. Wavelet Tour of Signal Processing; Academic Press: Cambridge, MA, USA, 2008. [Google Scholar]
- Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
- Strang, G.; Nguyen, T. Wavelets and Filter Banks; Wellesley-Cambridge Press: Wellesley, MA, USA, 1996. [Google Scholar]
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Fix, E.; Hodges, J.L. Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties. Int. Stat. Rev./Rev. Int. De Stat. 1989, 57, 238–247. [Google Scholar] [CrossRef]
- Cover, T.; Hart, P. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
- Čulić Gambiroža, J. Machine Learning Methods for Efficient Data Reduction and Reconstruction in the concept of Internet of Things. Ph.D. Thesis, University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split, Croatia, 2023. [Google Scholar]
- Ho, T.K. The Random Subspace Method for Constructing Decision Forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: Abingdon, UK, 2017. [Google Scholar]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Šnajder, J. Machine Learning 1, 21. Model Evaluation; University of Zagreb, Faculty of Electrical Engineering and Computing: Zagreb, Croatia, 2023. [Google Scholar]
- Joanes, D.N.; Gill, C.A. Comparing Measures of Sample Skewness and Kurtosis. J. R. Stat. Soc. Ser. D 1998, 47, 183–189. [Google Scholar] [CrossRef]
- Giannakopoulos, T.; Pikrakis, A. Audio Features. In Introduction to Audio Analysis: A MATLAB Approach; Elsevier: Amsterdam, The Netherlands, 2014; pp. 59–103. [Google Scholar] [CrossRef]
- Pilanci, M. EE269 Signal Processing for Machine Learning—Lecture 3 Part II: Spectral Features. 2021. Available online: https://web.stanford.edu/class/ee269/Lecture3_spectral_features.pdf (accessed on 22 February 2025).
- Kulkarni, N. Use of complexity based features in diagnosis of mild Alzheimer disease using EEG signals. Int. J. Inf. Technol. 2018, 10, 59–64. [Google Scholar] [CrossRef]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
- Pratt, W.K. Digital Image Processing: PIKS Scientific Inside, 4th ed.; Wiley-Interscience: Hoboken, NJ, USA, 2007. [Google Scholar]
- Bovik, A.C. (Ed.) Handbook of Image and Video Processing; Academic Press: San Diego, CA, USA, 2000. [Google Scholar]
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
- Hall-Beyer, M. GLCM Texture: A Tutorial v. 3.0. 2017. Available online: https://prism.ucalgary.ca/handle/1880/51900 (accessed on 15 March 2025).
- Shalev-Shwartz, S.; Ben-David, S. Support Vector Machines. In Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014; Chapter 15; pp. 315–340. [Google Scholar]
- Bottou, L.; Lin, C.J. Support Vector Machine Solvers. In Large Scale Kernel Machines; Bottou, L., Chapelle, O., DeCoste, D., Weston, J., Eds.; MIT Press: Cambridge, MA, USA, 2007; pp. 301–320. [Google Scholar]
- Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
- Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley-Interscience: New York, NY, USA, 2001. [Google Scholar]
Dataset | Power Level Advantage [dB] | Constellation |
---|---|---|
cleanStatic | − | GPS, Galileo |
os2 | +10 | GPS |
os3 | +1.3 | GPS |
os4 | +0.4 | GPS |
os10 | +10 | Galileo |
os11 | +1.3 | Galileo |
os12 | +0.4 | Galileo |
Dataset and Coefficient Type | ML Method | Mean Acc. (%) | F1 Score | Recall | Precision | Elapsed Time (s)
---|---|---|---|---|---|---
db4 | | | | | | 
(clean + os2) approx. | SVM | 98.94 | 0.9895 | 0.9882 | 0.9907 | 606
(clean + os2) approx. | KNN | 96.77 | 0.9683 | 0.952 | 0.9852 | 589
(clean + os2) approx. | RF-100 | 95.43 | 0.9537 | 0.9658 | 0.9419 | 594
(clean + os2) approx. | RF-200 | 95.31 | 0.9524 | 0.9675 | 0.9378 | 1202
(clean + os4) approx. | SVM | 78.20 | 0.7829 | 0.7792 | 0.7867 | 764
(clean + os4) approx. | KNN | 75.82 | 0.7553 | 0.7642 | 0.7467 | 585
(clean + os4) approx. | RF-100 | 77.33 | 0.7742 | 0.7713 | 0.777 | 617
(clean + os4) approx. | RF-200 | 77 | 0.7711 | 0.7693 | 0.773 | 741
(clean + os2) approx. | SVM | 99.77 | 0.9978 | 0.9989 | 0.9967 | 9.4
(clean + os2) approx. | KNN | 99.72 | 0.9972 | 0.9956 | 0.9989 | 2.47
(clean + os2) approx. | RF-100 | 99.75 | 0.9976 | 0.9985 | 0.9967 | 6.3
(clean + os2) approx. | RF-200 | 99.78 | 0.9978 | 0.9989 | 0.9967 | 10.28
(clean + os4) approx. | SVM | 96 | 0.9605 | 0.9493 | 0.9719 | 2.16
(clean + os4) approx. | KNN | 91.94 | 0.9209 | 0.9043 | 0.9381 | 3.13
(clean + os4) approx. | RF-100 | 93.93 | 0.9398 | 0.931 | 0.9489 | 6.64
(clean + os4) approx. | RF-200 | 94.41 | 0.9445 | 0.9373 | 0.9519 | 12.4
(clean + os2) approx. | SVM | 99.68 | 0.9969 | 0.9967 | 0.997 | 6.8
(clean + os2) approx. | KNN | 99.93 | 0.9993 | 0.9993 | 0.9993 | 3.46
(clean + os2) approx. | RF-100 | 100 | 1 | 1 | 1 | 5.2
(clean + os2) approx. | RF-200 | 100 | 1 | 1 | 1 | 8
(clean + os4) approx. | SVM | 99.72 | 0.9972 | 0.9956 | 0.9989 | 5.1
(clean + os4) approx. | KNN | 99.93 | 0.9993 | 0.9985 | 1 | 2.9
(clean + os4) approx. | RF-100 | 99.98 | 0.9998 | 1 | 0.9996 | 7.1
(clean + os4) approx. | RF-200 | 100 | 0.9996 | 1 | 0.9993 | 7.9
db8 | | | | | | 
(clean + os2) approx. | SVM | 99.43 | 0.9943 | 0.9944 | 0.9941 | 568
(clean + os2) approx. | KNN | 96.02 | 0.9614 | 0.934 | 0.9904 | 528
(clean + os2) approx. | RF-100 | 95.66 | 0.9564 | 0.9621 | 0.9507 | 522
(clean + os2) approx. | RF-200 | 95.88 | 0.9586 | 0.9651 | 0.9522 | 531
(clean + os4) approx. | SVM | 75.69 | 0.7597 | 0.7508 | 0.7689 | 628
(clean + os4) approx. | KNN | 75.18 | 0.7506 | 0.7545 | 0.7467 | 637
(clean + os4) approx. | RF-100 | 75.53 | 0.758 | 0.7499 | 0.7663 | 632
(clean + os4) approx. | RF-200 | 75.6 | 0.7615 | 0.7445 | 0.7793 | 654
(clean + os2) approx. | SVM | 99.91 | 0.9991 | 0.9989 | 0.9993 | 3.3
(clean + os2) approx. | KNN | 99.85 | 0.9985 | 0.9981 | 0.9989 | 2.9
(clean + os2) approx. | RF-100 | 99.87 | 0.9987 | 0.9974 | 1 | 6.3
(clean + os2) approx. | RF-200 | 99.83 | 0.9983 | 0.9967 | 1 | 10
(clean + os4) approx. | SVM | 96.02 | 0.9606 | 0.9507 | 0.9707 | 3.5
(clean + os4) approx. | KNN | 94.33 | 0.9442 | 0.9306 | 0.9581 | 4
(clean + os4) approx. | RF-100 | 93.76 | 0.9378 | 0.9342 | 0.9145 | 8.8
(clean + os4) approx. | RF-200 | 94.13 | 0.9418 | 0.9344 | 0.9493 | 15
(clean + os2) approx. | SVM | 99.78 | 0.9978 | 0.9985 | 0.997 | 4.9
(clean + os2) approx. | KNN | 99.98 | 0.9998 | 0.9996 | 1 | 2.8
(clean + os2) approx. | RF-100 | 100 | 1 | 1 | 1 | 4.87
(clean + os2) approx. | RF-200 | 100 | 1 | 1 | 1 | 7.8
(clean + os4) approx. | SVM | 99.70 | 0.997 | 0.9981 | 0.9959 | 5.3
(clean + os4) approx. | KNN | 99.85 | 0.9985 | 0.9996 | 0.9974 | 3.7
(clean + os4) approx. | RF-100 | 99.98 | 0.9998 | 1 | 0.9996 | 5.87
(clean + os4) approx. | RF-200 | 100 | 1 | 1 | 1 | 7.85
Dataset and Coefficient Type | ML Method | Mean Acc. (%) | F1 Score | Recall | Precision | Elapsed Time (s)
---|---|---|---|---|---|---
db4 | | | | | | 
(clean + os10) approx. | SVM | 93.94 | 0.9346 | 0.9322 | 0.937 | 684
(clean + os10) approx. | KNN | 86.63 | 0.8657 | 0.8693 | 0.8622 | 722
(clean + os10) approx. | RF-100 | 89.46 | 0.895 | 0.8922 | 0.8978 | 569
(clean + os10) approx. | RF-200 | 90.35 | 0.9039 | 0.9001 | 0.9078 | 604
(clean + os12) approx. | SVM | 75 | 0.7512 | 0.7483 | 0.7541 | 522
(clean + os12) approx. | KNN | 74.72 | 0.762 | 0.7199 | 0.8093 | 510
(clean + os12) approx. | RF-100 | 74.70 | 0.7481 | 0.745 | 0.7511 | 548
(clean + os12) approx. | RF-200 | 73.94 | 0.738 | 0.742 | 0.7341 | 533
(clean + os10) approx. | SVM | 97 | 0.9701 | 0.9785 | 0.9619 | 8.7
(clean + os10) approx. | KNN | 92.74 | 0.9272 | 0.9303 | 0.9241 | 2.3
(clean + os10) approx. | RF-100 | 95.63 | 0.9563 | 0.9563 | 0.9563 | 6.5
(clean + os10) approx. | RF-200 | 95.55 | 0.9556 | 0.9552 | 0.9559 | 11.6
(clean + os12) approx. | SVM | 76.43 | 0.7612 | 0.7712 | 0.7515 | 3.6
(clean + os12) approx. | KNN | 74.35 | 0.7438 | 0.7431 | 0.7444 | 2
(clean + os12) approx. | RF-100 | 77.29 | 0.7718 | 0.7758 | 0.7678 | 7.45
(clean + os12) approx. | RF-200 | 77.77 | 0.777 | 0.7799 | 0.7741 | 13.14
(clean + os10) approx. | SVM | 99.87 | 0.9987 | 0.9989 | 0.9985 | 4.5
(clean + os10) approx. | KNN | 99.88 | 0.9989 | 0.9978 | 1 | 1.9
(clean + os10) approx. | RF-100 | 99.94 | 0.9994 | 0.9989 | 1 | 5.04
(clean + os10) approx. | RF-200 | 99.94 | 0.9994 | 0.9989 | 1 | 8.72
(clean + os12) approx. | SVM | 99.33 | 0.9933 | 0.9941 | 0.9926 | 4.4
(clean + os12) approx. | KNN | 98.83 | 0.9885 | 0.9786 | 0.9985 | 2
(clean + os12) approx. | RF-100 | 99.88 | 0.9989 | 0.9993 | 0.9985 | 5.2
(clean + os12) approx. | RF-200 | 99.83 | 0.9983 | 0.9989 | 0.9978 | 8.9
db8 | | | | | | 
(clean + os10) approx. | SVM | 93.59 | 0.9362 | 0.9321 | 0.9404 | 599.51
(clean + os10) approx. | KNN | 87.09 | 0.8686 | 0.8394 | 0.9 | 562.73
(clean + os10) approx. | RF-100 | 86.74 | 0.8671 | 0.8693 | 0.8648 | 568.87
(clean + os10) approx. | RF-200 | 87.54 | 0.8753 | 0.8761 | 0.8744 | 586.87
(clean + os12) approx. | SVM | 75.30 | 0.7508 | 0.7573 | 0.7444 | 564.41
(clean + os12) approx. | KNN | 75.48 | 0.7554 | 0.7537 | 0.757 | 597.62
(clean + os12) approx. | RF-100 | 75.11 | 0.7521 | 0.7491 | 0.7552 | 272.4
(clean + os12) approx. | RF-200 | 75.19 | 0.7483 | 0.7591 | 0.7378 | 583.12
(clean + os10) approx. | SVM | 91.81 | 0.9175 | 0.9244 | 0.9107 | 5.52
(clean + os10) approx. | KNN | 95.20 | 0.9521 | 0.9509 | 0.9533 | 2.78
(clean + os10) approx. | RF-100 | 96.26 | 0.9626 | 0.9616 | 0.9637 | 7
(clean + os10) approx. | RF-200 | 95.83 | 0.9584 | 0.9571 | 0.9596 | 10.49
(clean + os12) approx. | SVM | 75.22 | 0.7466 | 0.764 | 0.73 | 2.62
(clean + os12) approx. | KNN | 76.5 | 0.7620 | 0.7717 | 0.7526 | 1.94
(clean + os12) approx. | RF-100 | 76.59 | 0.7636 | 0.7711 | 0.7563 | 7.69
(clean + os12) approx. | RF-200 | 75.85 | 0.7591 | 0.7572 | 0.7611 | 12.46
(clean + os10) approx. | SVM | 93.46 | 0.934 | 0.9433 | 0.9248 | 10.85
(clean + os10) approx. | KNN | 95.02 | 0.9504 | 0.9467 | 0.9541 | 3.07
(clean + os10) approx. | RF-100 | 96.11 | 0.9614 | 0.9554 | 0.9674 | 17.76
(clean + os10) approx. | RF-200 | 96.28 | 0.9629 | 0.9589 | 0.9967 | 11.94
(clean + os12) approx. | SVM | 74.54 | 0.7431 | 0.7497 | 0.7367 | 5.12
(clean + os12) approx. | KNN | 76.85 | 0.7666 | 0.773 | 0.7604 | 8.09
(clean + os12) approx. | RF-100 | 76.67 | 0.7671 | 0.7657 | 0.7685 | 15.25
(clean + os12) approx. | RF-200 | 77.26 | 0.7722 | 0.7736 | 0.7707 | 12.51
Code Description by Steps | Image Processing Algorithm | Image Features’ Processing Algorithm | Spectral and Statistical Features’ Processing Algorithm | Computational Complexity
---|---|---|---|---
Data loading | ✓ | ✓ | ✓ | 
Generating bag of features | ✓ | X | X | 
Images encoding | ✓ | X | X | 
Assigning classes to data | X | ✓ | ✓ | 
Table to array conversion | X | ✓ | ✓ | 
k-Fold cross-validation (loop, R times) | ✓ | ✓ | ✓ | 
k-Fold training and testing in each iteration | ✓ | ✓ | ✓ | 
Model training with Gaussian kernel | ✓ | ✓ | ✓ | 
Data prediction | ✓ | ✓ | ✓ | 
Performance metrics (F1-score, precision, recall, confusion matrix) calculation | ✓ | ✓ | ✓ | 
Generating ROC and PR curves | ✓ | ✓ | ✓ | 
Graphical representation and saving the generated data | ✓ | ✓ | ✓ | 
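The steps shared by all three algorithms map onto a short evaluation loop. A sketch with scikit-learn, using synthetic placeholder features and labels; the fold count and SVM settings are illustrative, not the authors' exact configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import f1_score, precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 14))      # stand-in feature matrix (data loading)
y = rng.integers(0, 2, size=600)    # stand-in class labels

# k-fold cross-validation: train and test a Gaussian (RBF) kernel SVM per fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(skf.split(X, y)):
    model = SVC(kernel="rbf").fit(X[tr], y[tr])   # model training, Gaussian kernel
    y_hat = model.predict(X[te])                  # data prediction
    print(fold,
          f1_score(y[te], y_hat, zero_division=0),
          precision_score(y[te], y_hat, zero_division=0),
          recall_score(y[te], y_hat, zero_division=0))
```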
Classification Data | SVM | KNN | RF-100/200
---|---|---|---
Images | | | 
Image-extracted features | | | 
Statistical and spectral features | | | 
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).