Vibration Event Recognition Using SST-Based Φ-OTDR System
Abstract
1. Introduction
2. Systems and Algorithms
2.1. Φ-OTDR System
2.2. SST Principle
2.3. Analog Signal Verification
2.4. Recognition Algorithm Flow
- (1) The denoised vibration signal is obtained and the continuous wavelet transform (CWT) is applied to it to obtain its wavelet coefficients. The CWT is a time-frequency analysis tool that decomposes the signal over different scales and translations and thereby extracts its local features.
- (2) The instantaneous frequency is estimated wherever the wavelet coefficients are non-zero, and synchrosqueezing (SST) reassigns the coefficient energy to that frequency to form the SST time-frequency map. SST is a refinement of the CWT that preserves the phase information of the signal and avoids cross-term interference, thereby improving the time-frequency resolution.
- (3) The short-time Fourier transform (STFT) is applied to the signal to obtain the STFT time-frequency map. The STFT is a widely used time-frequency tool that computes the Fourier transform of the signal over sliding time windows to obtain its local spectrum. (A minimal sketch of steps (1)–(3) is given after this list.)
- (4) The time-frequency maps are organized into labeled samples, which are classified and recognized by deep networks. In this paper, three deep networks, namely VGG, ViT, and ResNet, are used as classifiers for the SST-transformed data. A deep network is a machine learning model built from multiple neural-network layers that performs automatic feature extraction and classification of image data. (A classification sketch follows the transform sketch below.)
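The following sketch illustrates steps (1)–(3) on a synthetic two-tone signal. It is a minimal illustration only, assuming Python with NumPy, PyWavelets, and SciPy; the sampling rate, wavelet choice (a complex Morlet), scale range, threshold, and frequency grid are assumptions and are not taken from the paper.

```python
# Minimal sketch of steps (1)-(3): CWT, a basic synchrosqueezing pass, and STFT.
# All parameter choices below (sampling rate, wavelet, scales, threshold) are
# illustrative assumptions, not the authors' settings.
import numpy as np
import pywt
from scipy.signal import stft

fs = 1000                                      # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # stand-in signal

# (1) CWT: complex wavelet coefficients W(a, b) over scales a and times b
scales = np.arange(1, 128)
W, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1 / fs)

# (2) Synchrosqueezing: estimate the instantaneous frequency where |W| is
#     non-negligible, then reassign coefficient energy from the scale axis
#     onto a uniform frequency grid.
dW_db = np.gradient(W, 1 / fs, axis=1)                      # d/db of W(a, b)
omega = np.abs(np.imag(dW_db / (W + 1e-12))) / (2 * np.pi)  # inst. frequency, Hz

f_bins = np.linspace(freqs.min(), freqs.max(), 256)
Tx = np.zeros((f_bins.size, x.size), dtype=complex)         # SST time-frequency map
threshold = 1e-2 * np.abs(W).max()
for i in range(W.shape[0]):          # scale index
    for j in range(W.shape[1]):      # time index
        if np.abs(W[i, j]) > threshold:
            k = np.argmin(np.abs(f_bins - omega[i, j]))
            Tx[k, j] += W[i, j]

# (3) STFT time-frequency map of the same signal for comparison
f_stft, t_stft, Zxx = stft(x, fs=fs, nperseg=128)
print(Tx.shape, np.abs(Zxx).shape)
```

The explicit double loop is written out only to mirror the reassignment step; in practice a dedicated implementation (e.g., the ssqueezepy package) is considerably faster.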
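For step (4), a minimal PyTorch sketch of the classification stage is shown below. The choice of ResNet-18, the 224 × 224 input size, the optimizer, and the dummy batch are assumptions for illustration; the paper only states that VGG, ViT, and ResNet are used as classifiers over the 12 event classes listed in the dataset tables.

```python
# Minimal sketch of step (4): classifying time-frequency images with a ResNet.
# Assumes PyTorch/torchvision (>= 0.13 for the `weights` argument); the network
# depth, input size, and training hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 12                          # 12 event labels (0-11) from the dataset tables
model = models.resnet18(weights=None)     # a pretrained backbone could be used instead
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of rendered time-frequency maps
images = torch.randn(8, 3, 224, 224)      # stand-in for SST map images
labels = torch.randint(0, num_classes, (8,))

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At test time, predicted labels come from the argmax over class logits
preds = logits.argmax(dim=1)
print((preds == labels).float().mean())   # batch accuracy
```

Swapping in VGG or ViT only changes the backbone construction and its final classification layer; the training loop is otherwise the same.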
3. Experimental Analysis
3.1. Dataset
3.2. Experimental Results and Analysis
- (1) Accuracy is the ratio of the number of correctly classified samples to the total number of samples; higher is better. It is a global metric that reflects the performance of the model on the whole dataset, but it does not show how the model behaves on individual categories.
- (2) The confusion matrix shows the correspondence between the model’s predictions and the actual labels for each category; from it, indicators such as precision, recall, true positive rate, and false positive rate can be calculated. It is a per-class view that reflects the performance of the model on each category and its ability to distinguish between different categories.
- (3) The F1-score is the harmonic mean of precision and recall and therefore reflects both jointly; higher is better. It is a balanced metric for comparing models across datasets and is especially useful when the classes are imbalanced. (A short sketch of these metrics follows this list.)
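The sketch below shows how these three metrics can be computed with scikit-learn; the label arrays `y_true` and `y_pred` are hypothetical stand-ins, not results from the paper.

```python
# Minimal sketch of the three evaluation metrics, assuming scikit-learn.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]   # hypothetical ground-truth labels
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]   # hypothetical model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))        # correct / total
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
# Macro-averaged F1: the per-class harmonic mean of precision and recall,
# averaged over classes; a reasonable choice for the balanced 256/64 splits here.
print("F1-score:", f1_score(y_true, y_pred, average="macro"))
```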
3.2.1. Raw Data Identification
3.2.2. Attenuation Data Recognition
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Adeel, M.; Shang, C.; Zhu, K.; Lu, C. Nuisance alarm reduction: Using a correlation based algorithm above differential signals in direct detected phase-Φ-OTDR systems. Opt. Express 2019, 27, 7685–7698.
- Wu, H.; Chen, J.; Liu, X.; Xiao, Y.; Wang, M.; Zheng, Y.; Rao, Y. One-dimensional CNN-based intelligent recognition of vibrations in pipeline monitoring with DAS. J. Lightwave Technol. 2019, 37, 4359–4366.
- Mahmoud, S.S.; Katsifolis, J. Elimination of rain-induced nuisance alarms in distributed fiber optic perimeter intrusion detection systems. Proc. SPIE 2009, 7316, 731604.
- Cao, C.; Fan, X.Y.; Liu, Q.W.; He, Z.Y. Practical pattern recognition system for distributed optical fiber intrusion monitoring system based on phase-sensitive coherent Φ-OTDR. In Proceedings of the Asia Communications and Photonics Conference, Kowloon, Hong Kong, 19–23 November 2015; ASu2A.145.
- Wu, H.J.; Xiao, S.K.; Li, X.Y.; Wang, Z.N.; Xu, J.W.; Rao, Y.J. Separation and determination of the disturbing signals in phase-sensitive optical time domain reflectometry (Φ-OTDR). J. Lightwave Technol. 2015, 33, 3156–3162.
- Xu, C.; Guan, J.; Bao, M.; Lu, J.; Ye, W. Pattern recognition based on time-frequency analysis and convolutional neural networks for vibrational events in Φ-OTDR. Opt. Eng. 2018, 57, 016103.
- Xu, W.; Yu, F.; Liu, S.; Xiao, D.; Hu, J.; Zhao, F.; Lin, W.; Wang, G.; Shen, X.; Wang, W.; et al. Real-time multi-class disturbance detection for Φ-OTDR based on YOLO algorithm. Sensors 2022, 22, 1994.
- Du, X.; Jia, M.; Huang, S.; Sun, Z.; Tian, Y.; Chai, Q.; Li, W.; Zhang, J. Event identification based on sample feature correction algorithm for Φ-OTDR. Meas. Sci. Technol. 2023, 34, 085120.
- Grossmann, A.; Morlet, J. Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM J. Math. Anal. 1984, 15, 723–736.
- Daubechies, I.; Lu, J.; Wu, H.-T. Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 2011, 30, 243–261.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Wang, Z.; Lu, B.; Ye, Q.; Cai, H. Recent progress in distributed fiber acoustic sensing with Φ-OTDR. Sensors 2020, 20, 6594.
- Xu, C.; Guan, J.; Bao, M.; Lu, J.; Ye, W. Pattern recognition based on enhanced multifeature parameters for vibration events in Φ-OTDR distributed optical fiber sensing system. Microw. Opt. Technol. Lett. 2017, 59, 3134–3141.
- Sharma, S.; Guleria, K. Deep learning models for image classification: Comparison and applications. In Proceedings of the 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 28–29 April 2022; pp. 1733–1738.
- Raghu, M.; Unterthiner, T.; Kornblith, S.; Zhang, C.; Dosovitskiy, A. Do vision transformers see like convolutional neural networks? Adv. Neural Inf. Process. Syst. 2021, 34, 12116–12128.
- Sha, Z.; Feng, H.; Shi, Y.; Zhang, W.; Zeng, Z. Phase-sensitive Φ-OTDR with 75-km single-end sensing distance based on RP-EDF amplification. IEEE Photonics Technol. Lett. 2017, 29, 1308–1311.
| Event Type | Training Set/Test Set | Label |
|---|---|---|
| TFF | 256/64 | 0 |
| TNF | 256/64 | 1 |
| TSF | 256/64 | 2 |
| TFLF | 256/64 | 3 |
| TNLF | 256/64 | 4 |
| TSLF | 256/64 | 5 |

| Event Type | Training Set/Test Set | Label |
|---|---|---|
| CH | 256/64 | 6 |
| HSZ | 256/64 | 7 |
| MI | 256/64 | 8 |
| PV | 256/64 | 9 |
| ST | 256/64 | 10 |
| WA | 256/64 | 11 |

| Classification Method | Six Intensities: Accuracy (%) | Six Intensities: F1-Score | Six Intensities: Time (s) | All: Accuracy (%) | All: F1-Score | All: Time (s) |
|---|---|---|---|---|---|---|
| STFT VGG | 94.79 | 0.95 | 0.28 | 88.28 | 0.89 | 0.29 |
| CWT VGG | 96.61 | 0.97 | 0.28 | 91.15 | 0.91 | 0.29 |
| SST VGG | 97.66 | 0.98 | 0.29 | 94.14 | 0.94 | 0.29 |
| STFT ViT | 96.88 | 0.97 | 0.05 | 91.28 | 0.91 | 0.05 |
| CWT ViT | 97.14 | 0.97 | 0.05 | 91.93 | 0.92 | 0.05 |
| SST ViT | 97.40 | 0.97 | 0.05 | 93.10 | 0.93 | 0.05 |
| STFT ResNet | 96.61 | 0.97 | 0.09 | 92.32 | 0.93 | 0.09 |
| CWT ResNet | 98.18 | 0.98 | 0.09 | 94.66 | 0.95 | 0.09 |
| SST ResNet | 99.48 | 0.99 | 0.09 | 96.74 | 0.97 | 0.09 |

| Classification Method | Six Intensities: Accuracy (%) | Six Intensities: F1-Score | Six Intensities: Time (s) | All: Accuracy (%) | All: F1-Score | All: Time (s) |
|---|---|---|---|---|---|---|
| STFT VGG | 93.23 | 0.93 | 0.29 | 84.90 | 0.85 | 0.28 |
| CWT VGG | 95.57 | 0.96 | 0.28 | 86.72 | 0.87 | 0.28 |
| SST VGG | 96.35 | 0.96 | 0.29 | 93.10 | 0.93 | 0.28 |
| STFT ViT | 93.49 | 0.94 | 0.05 | 90.23 | 0.91 | 0.05 |
| CWT ViT | 94.27 | 0.94 | 0.05 | 91.80 | 0.92 | 0.05 |
| SST ViT | 96.09 | 0.96 | 0.05 | 92.97 | 0.93 | 0.05 |
| STFT ResNet | 96.35 | 0.96 | 0.09 | 91.54 | 0.92 | 0.09 |
| CWT ResNet | 97.14 | 0.97 | 0.09 | 94.40 | 0.94 | 0.09 |
| SST ResNet | 98.70 | 0.99 | 0.09 | 95.18 | 0.95 | 0.09 |