Communication

Research on Multi-Source Simultaneous Recognition Technology Based on Sagnac Fiber Optic Sound Sensing System

1 The Key Laboratory of Electromagnetic Technology and Engineering, China West Normal University, Nanchong 637000, China
2 The Institute of Xi’an Aerospace Solid Propulsion Technology, Xi’an 710025, China
3 College of Physics and Electronic Information Engineering, Neijiang Normal University, Neijiang 641100, China
4 The Key Laboratory of Optoelectronic Technology & System, Education Ministry of China, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(9), 1003; https://doi.org/10.3390/photonics10091003
Submission received: 25 July 2023 / Revised: 29 August 2023 / Accepted: 30 August 2023 / Published: 1 September 2023

Abstract

To solve the problem of recognizing multiple sounds in applications of the Sagnac optical fiber acoustic sensing system, a multi-source simultaneous recognition algorithm was proposed, which combines the VMD (variational mode decomposition) and MFCC (Mel-frequency cepstral coefficient) algorithms to preprocess the photoacoustic sensing signal and uses a BP neural network to recognize it. The modal analysis and feature extraction theory of the photoacoustic sensing signal based on the VMD and MFCC algorithms are presented. The signal recognition theory analysis and the system recognition program design were completed based on the BP neural network. Signal acquisition for different sounds and verification experiments of the recognition system were carried out in a laboratory environment based on the Sagnac fiber optic sound sensing system. The experimental results show that the proposed optical fiber acoustic sensing signal recognition algorithm achieves a simultaneous recognition rate better than 96.5% for six types of sounds, and the optical acoustic signal recognition takes less than 5.3 s. The system is thus capable of real-time sound detection and recognition, opening the way to further applications of the Sagnac-based optical fiber acoustic sensing system.

1. Introduction

In recent years, distributed fiber optic sensing technology has become a hot spot in sensing research because of its good environmental tolerance, immunity to electromagnetic interference, and ease of realizing long-distance, wide-area monitoring. Distributed fiber optic sensing technology is widely used in fields such as perimeter security [1] and intrusion detection [2,3].
Compared with other fiber optic sensors, Sagnac interference-based fiber optic sensing technology has been widely explored by researchers because of its higher signal-to-noise ratio, higher sensitivity, and better adaptability to harsh environments [4,5,6]. In research on Sagnac fiber optic sensing systems, real-time online identification of the detection signal is an important direction. Bao et al. used the VMD algorithm to improve the recognition accuracy of a Sagnac fiber optic perimeter security system for intrusion signals [7]; Wang et al. used an ESN (echo state network)-based intrusion signal identification method to accurately identify different types of intrusion signals in fiber optic perimeter security systems [8]; Ren et al. proposed a high-performance railway perimeter security system based on an inline TDM-FFPI (time-division multiplexed fiber-optic Fabry–Perot interferometric) sensor array, with an average recognition rate of 94.5% for four types of intrusion signals [9]; Li et al. proposed a novel, generalized DAS (distributed acoustic sensing) identification framework deployed on high-speed railways for real-time intrusion threat detection, with an 85.6% recognition rate [2]; and Chen et al. applied the SMS (single mode–multimode–single mode) fiber optic structure to an intrusion detection system, enabling the effective identification of man-made and natural events at the area perimeter. The above research provides solutions for intrusion signal recognition in a variety of application scenarios. However, research on multi-target identification for Sagnac fiber optic sensing systems deployed in harsh environments, such as plateau border security, has not been reported. Therefore, in this paper, a multi-target online recognition algorithm is studied for the linear Sagnac fiber optic sound sensing system [10,11].

2. Principle

2.1. Sagnac Interference-Based Fiber Optic Acoustic Sensing Principle

As shown in Figure 1, the linear Sagnac fiber optic acoustic sensing system reduces the length of the non-sensing signal transmission fiber by nearly half, which also suppresses the interference of environmental noise at the source. In the linear Sagnac interferometric optical path, the light emitted by an SLD (super-luminescent light-emitting diode) enters the optical path at port 1 of the 3 × 3 coupler C1 and is split by C1, the outputs from ports 5 and 6 propagating CCW (counterclockwise) and CW (clockwise), respectively. The light output from port 6 arrives at the 2 × 1 coupler C2 after the delay fiber ring L1, while the light output from port 5 enters C2 directly. The two beams pass through C2, propagate independently, and arrive at port 1 of the 1 × 2 coupler C3 after the pickup fiber ring L2. Each beam output from port 2 or 3 of C3 re-enters from the other port, exits from port 1, and returns along the original optical path. Eventually, the two beams interfere at C1, and the interference light is output to the PD (photoelectric detector) via port 3 of C1.
When the sound field is applied to the pickup L2 of the linear Sagnac fiber optic sound sensing system, the light from the different optical paths entering the sensing system all pass through the pickup twice; therefore, the phase sensitivity of the system can be expressed as follows [10]:
$$\Delta\varphi = \frac{8\pi n k L_2}{\lambda}\sin\!\left(\frac{n\pi L_1 f}{c}\right)P \tag{1}$$
where ∆φ denotes the phase difference between the two beams of coherent light under the action of the sound field, n denotes the refractive index of the optical fiber, f denotes the frequency of the sound, k is a constant influenced by the refractive index of the fiber, the modulus of elasticity of the pickup structure, and the optical fiber bounce coefficient, L1 is the length of the delay fiber ring, L2 is the length of the sensing fiber ring, λ is the wavelength of light, c is the speed of light in a vacuum, and P is the sound pressure acting on the pickup structure.
In the linear Sagnac fiber optic sound sensing system, the interferometric light intensity obtained from the detector can be expressed as follows [10]:
$$I = \frac{1}{9}I_0\left[1+\cos\left(\Delta\varphi+\Delta\psi\right)\right] \tag{2}$$
where Δψ is the non-reciprocal phase shift introduced by the 3 × 3 coupler, and Δψ = 2π/3. Substituting Equation (1) into Equation (2) yields the following:
$$I = \frac{1}{9}I_0\left\{1+\cos\!\left[\frac{8\pi n k L_2}{\lambda}P\sin\!\left(\frac{n\pi L_1 f}{c}\right)+\Delta\psi\right]\right\} \tag{3}$$
In the linear Sagnac fiber optic sound sensing system, the photodetector receives the interferometric light signal carrying the sound information and converts it into a current signal. A high-speed data acquisition circuit based on an FPGA (field-programmable gate array) chip then completes I-V conversion, amplification and filtering, analog-to-digital conversion, and data storage of the broadband signal to obtain the digital optical sound sensing signal; finally, the sound signal is recognized by characterizing the optical sound sensing signal.
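To make the sensing relationship concrete, the short Python sketch below evaluates Equation (3) numerically; every parameter value in it is an illustrative assumption rather than a specification of the actual system.

```python
import numpy as np

# A minimal numerical sketch of Eq. (3). All parameter values below are
# illustrative assumptions, not the actual system's specifications.
n_fiber = 1.468        # refractive index of the fiber (assumed)
k = 1.0e-5             # lumped pickup coefficient (assumed)
L1 = 200.0             # delay fiber ring length, m (assumed)
L2 = 50.0              # sensing (pickup) fiber ring length, m (assumed)
lam = 1.31e-6          # source wavelength, m (assumed)
c = 3.0e8              # speed of light in vacuum, m/s
I0 = 1.0               # normalized source intensity
dpsi = 2 * np.pi / 3   # non-reciprocal phase shift of the 3x3 coupler

def intensity(P, f):
    """Interference intensity for sound pressure P (Pa) at frequency f (Hz)."""
    dphi = (8 * np.pi * n_fiber * k * L2 / lam) * P \
           * np.sin(n_fiber * np.pi * L1 * f / c)
    return (I0 / 9.0) * (1 + np.cos(dphi + dpsi))

# Example: intensity response to a 1 kHz tone at increasing sound pressures
pressures = np.linspace(0, 2.0, 5)
print([round(intensity(P, 1000.0), 4) for P in pressures])
```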

2.2. Optical and Acoustic Signal Recognition Algorithm Design

The block diagram of the linear Sagnac fiber optic sound sensing system for sound signal recognition is shown in Figure 2. The photoacoustic signals collected by the Sagnac fiber optic sound sensing system are fed into the signal recognition system for preprocessing. First, the optical sound signal preprocessing is completed, and sufficient sound signals are selected as training signals for feature extraction using the VMD [12,13] and MFCC [14] algorithms. The extracted features are then passed to a classification model based on a BP neural network, which completes the signal classification and outputs the recognition result for the sound to be detected.
In the signal preprocessing stage, the VMD algorithm is used to decompose the signal [15,16,17]. It minimizes the estimated bandwidth of each mode while keeping the decomposed modes consistent with the original signal. VMD-based modal decomposition increases the dimensionality of the signal and facilitates the extraction of more signal features: on the one hand, it increases the amount of training data; on the other hand, the most important feature components can be extracted in comparison with the original signal, reducing the interference of secondary components in target classification and greatly improving classification accuracy.
During signal feature extraction using the VMD algorithm, the IMF (intrinsic mode function) is expressed as follows [15]:
$$I_n(t) = A_n(t)\cos\bigl(\phi_n(t)\bigr) \tag{4}$$
where $\phi_n(t)$ is the phase of $I_n(t)$, with $\phi_n'(t) \ge 0$; $A_n(t)$ is the instantaneous amplitude of $I_n(t)$, with $A_n(t) \ge 0$; and $\omega_n(t)$ is the instantaneous frequency of $I_n(t)$, with $\omega_n(t) = \phi_n'(t)$. Since $A_n(t)$ and $\omega_n(t)$ change much more slowly than the phase $\phi_n(t)$, $I_n(t)$ is approximately a harmonic signal of amplitude $A_n(t)$ and frequency $\omega_n(t)$.
In the process of iteratively solving the variational model, the center frequencies and bandwidths of the IMF components are continuously updated. Based on the frequency-domain characteristics of the signal, its frequency band is adaptively partitioned to obtain multiple narrowband IMF components. The original signal is decomposed into n IMF components via VMD, and the corresponding constrained variational model is as follows:
$$\min_{\{I_n\},\{\omega_n\}}\left\{\sum_n\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*I_n(t)\right]e^{-j\omega_n t}\right\|_2^2\right\}\quad \text{s.t.}\quad \sum_n I_n = f \tag{5}$$
where $\{I_n\} = \{I_1, I_2, \ldots, I_n\}$ are the $n$ IMF components obtained with the VMD method, $\{\omega_n\} = \{\omega_1, \omega_2, \ldots, \omega_n\}$ are the center frequencies of the IMF components, and $\delta(t)$ is the unit impulse function.
To transform the constrained variational problem into an unconstrained one, a quadratic penalty term α and a Lagrange multiplier λ are introduced in Equation (5), giving the Lagrangian expression
$$L\bigl(\{I_n\},\{\omega_n\},\lambda\bigr)=\alpha\sum_n\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*I_n(t)\right]e^{-j\omega_n t}\right\|_2^2+\left\|f-\sum_n I_n\right\|_2^2+\left\langle\lambda,\,f-\sum_n I_n\right\rangle \tag{6}$$
The optimal solution of Equation (6) is obtained with the alternating direction method of multipliers (ADMM), yielding the n narrowband IMF components. According to the above principle, the photoacoustic signal is decomposed into n IMF signals via the VMD algorithm. In this paper, the VMD decomposition performs best when n = 6, i.e., the algorithm decomposes the signal into six different modes.
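As an illustration of this preprocessing step, the following Python sketch decomposes a synthetic stand-in signal into six modes. It assumes the third-party vmdpy package (an open-source implementation of [13]); the hyperparameter values other than the number of modes are illustrative assumptions, since the paper fixes only K = 6.

```python
import numpy as np
from vmdpy import VMD  # third-party VMD implementation (pip install vmdpy)

# Synthetic stand-in for one acquired photoacoustic frame
fs = 8000
t = np.arange(fs) / fs
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 900 * t)
          + 0.1 * np.random.randn(fs))

# VMD hyperparameters (illustrative values; the paper only fixes K = 6)
alpha = 2000   # bandwidth penalty (the quadratic penalty weight)
tau = 0.0      # noise tolerance (0 enforces exact reconstruction)
K = 6          # number of IMF modes, as chosen in the paper
DC = 0         # no DC mode imposed
init = 1       # initialize center frequencies uniformly
tol = 1e-7     # convergence tolerance

u, u_hat, omega = VMD(signal, alpha, tau, K, DC, init, tol)
print(u.shape)      # (6, N): six narrowband IMF components
print(omega[-1])    # final (normalized) center frequency of each mode
```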
After the modal decomposition with the VMD algorithm, the MFCC algorithm [18,19], which has good robustness and is not easily disturbed by fluctuations in the signal-to-noise ratio, is used to extract the signal features. In MFCC-based feature extraction, the Mel frequency is introduced to convert the nonlinear frequency sensitivity of hearing into a linearized description; the conversion between the Mel frequency and the actual frequency is as follows [20]:
$$m = 2595\log_{10}\!\left(1+\frac{f}{700}\right) \tag{7}$$
where m is the Mel frequency and f is the actual frequency. The relationship between the Mel frequency and the actual linear frequency is shown in Figure 3a.
As shown in Figure 3b, MFCC sets up Mel filters from low to high frequencies based on the critical bandwidths, spaced from dense to sparse, and filters the input signal; the output energy of each filter serves as a basic feature of the signal. The MFCC parameters in the signal feature extraction process can be expressed as follows [18]:
$$d_t=\begin{cases}C_{t+1}-C_t, & t<K\\[4pt]\dfrac{\sum_{k=1}^{K}k\,(C_{t+k}-C_{t-k})}{2\sum_{k=1}^{K}k^2}, & K\le t<Q-K\\[4pt]C_t-C_{t-1}, & t\ge Q-K\end{cases} \tag{8}$$
where dt denotes the t-th first-order difference, Ct denotes the t-th standard MFCC parameter, K denotes the time offset of the first-order derivative (in implementation, K = 1 is usually taken), and Q denotes the order of the MFCC parameters. The MFCC parameters used in this paper consist of the static MFCC parameters of the photoacoustic signal together with the first-order and second-order difference MFCC parameters [21]; the latter are obtained by substituting the results of the above equation into Equation (8) again.
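A minimal sketch of this feature extraction stage is given below, using the librosa library in place of the authors' Matlab implementation. Equation (7) is coded directly, and the static, first-order, and second-order difference MFCC parameters are stacked into one feature matrix; the test signal, sampling rate, and coefficient count are stand-in assumptions.

```python
import numpy as np
import librosa

def hz_to_mel(f):
    """Eq. (7): Mel frequency from linear frequency in Hz."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

print(hz_to_mel(1000.0))  # ~1000: 1 kHz maps to about 1000 Mel by design

# One decomposed IMF component would be fed in here; a noise burst stands in.
sr = 8000
y = np.random.randn(sr).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # static MFCC parameters
d1 = librosa.feature.delta(mfcc)                    # first-order difference, Eq. (8)
d2 = librosa.feature.delta(mfcc, order=2)           # second-order difference
features = np.vstack([mfcc, d1, d2])                # stacked feature matrix
print(features.shape)                               # (39, n_frames)
```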
The characteristic parameters of the photoacoustic signal obtained via MFCC are identified using a BP neural network [22]. The topology of the neurons in the BP neural network is represented as follows:
$$y_n = f\!\left(\sum_{i=0}^{n}\omega_i x_i\right) \tag{9}$$
The Sigmoid function is used as the activation function in the recognition procedure, and the expression of the Sigmoid function is shown in Equation (10), as follows:
$$f(x) = \frac{1}{1+e^{-x}} \tag{10}$$
The mapping from any m dimensions to n dimensions is achieved by a three-layer BP neural network, whose structure is shown in Figure 4.
The numbers of nodes in the input and output layers of a BP network are determined as m and n, respectively; the number of nodes in the hidden layer, l, generally satisfies the following relationship:
$$l = \sqrt{m+n} + c \tag{11}$$
where c is the regulation parameter, which is taken as 1 to 10 in this paper.
The BP neural network modelling process involves two stages: forward information transfer and reverse error transfer [23,24,25].
Forward information transfer process
The forward pass propagates the input pattern from the input layer through the hidden layer to the output layer. Let the output value of the i-th node at layer m be y_i^m, the threshold be θ_i, the activation value be S_i, the activation function f be a Sigmoid function, and the connection weight between this node and the j-th node at layer m − 1 be ω_ij, as shown in Equation (12).
$$S_i=\sum_{j=0}^{N_{m-1}}\omega_{ij}\,y_j^{\,m-1},\qquad y_i^{\,m}=f(S_i) \tag{12}$$
The forward pass process calculates the output of each network node in turn according to Equation (12).
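A one-layer version of this forward computation might look as follows in Python; treating the threshold θ as a subtractive offset is one common convention, since Equation (12) leaves it implicit, and the layer sizes are toy assumptions.

```python
import numpy as np

def sigmoid(x):
    # Eq. (10): the activation function used in the recognition procedure
    return 1.0 / (1.0 + np.exp(-x))

def forward_layer(y_prev, W, theta):
    """Eq. (12) for one layer: S_i = sum_j w_ij * y_j^{m-1}, y_i^m = f(S_i).
    The node thresholds theta are applied as a subtractive offset here."""
    S = W @ y_prev - theta
    return sigmoid(S)

# Example: a 39-dim feature vector through an 8-node hidden layer (toy sizes)
rng = np.random.default_rng(0)
y0 = rng.normal(size=39)
W = rng.normal(0.0, 0.1, size=(8, 39))
theta = np.zeros(8)
print(forward_layer(y0, W, theta))
```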
Reverse error transfer process
The weights and thresholds of the network are adjusted so that the output value approaches the desired value, following the rule of steepest gradient descent, i.e., adjusting the weights and thresholds along the direction of steepest descent of the squared error. The output error function of the BP neural network is
$$E(\omega,b)=\frac{1}{2}\sum_{i=0}^{n-1}\left(d_i-y_i\right)^2 \tag{13}$$
In Equation (13), d_i denotes the desired (expected) value and y_i denotes the output of the output layer.
The adjustment process for weights and thresholds can be expressed by the following equation:
$$\omega_{ij}\leftarrow\omega_{ij}-\eta_1\frac{\partial E(\omega,b)}{\partial \omega_{ij}},\qquad b_i\leftarrow b_i-\eta_2\frac{\partial E(\omega,b)}{\partial b_i} \tag{14}$$
In Equation (14), η1 is the weight learning rate and η2 is the threshold learning rate. Each node in the BP neural network is adjusted according to Equation (14), and the reverse transfer process is controlled by setting the error accuracy and the number of iterations. The flow chart of photoacoustic signal recognition based on the BP neural network is shown in Figure 5.
During model training, the training samples are fed into the initialized BP neural network and, after hidden-layer processing and output-layer computation, the output value and its error E relative to the expected value are obtained. When the output meets the accuracy requirement or the number of iterations reaches the specified limit, BP neural network modelling is complete and the network can recognize and classify the test signals.
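Pulling these pieces together, the sketch below trains a three-layer network by hand with the forward and reverse transfer rules of Equations (12)-(14), using separate learning rates η1 and η2 and an error-accuracy stopping rule. The feature dimension, class count, learning rates, and training data are all stand-in assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy sizes standing in for the real ones (assumptions): 39-dimensional
# MFCC-based feature vectors, 6 sound classes, hidden width from Eq. (11)
# with regulation parameter c = 2.
m, n = 39, 6
l = int(np.sqrt(m + n)) + 2

eta1, eta2 = 0.5, 0.5   # weight / threshold learning rates (eta_1, eta_2)

W1 = rng.normal(0.0, 0.1, (l, m)); b1 = np.zeros(l)
W2 = rng.normal(0.0, 0.1, (n, l)); b2 = np.zeros(n)

# Random stand-in training data; real inputs come from VMD + MFCC features.
X = rng.normal(size=(200, m))
D = np.eye(n)[rng.integers(0, n, 200)]   # one-hot desired outputs d_i

for epoch in range(300):
    E = 0.0
    for x, d in zip(X, D):
        # Forward information transfer (thresholds applied as additive biases)
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        E += 0.5 * np.sum((d - y) ** 2)      # Eq. (13)
        # Reverse error transfer: steepest descent on E, Eq. (14)
        g2 = (y - d) * y * (1 - y)           # error term at the output layer
        g1 = (W2.T @ g2) * h * (1 - h)       # error term at the hidden layer
        W2 -= eta1 * np.outer(g2, h); b2 -= eta2 * g2
        W1 -= eta1 * np.outer(g1, x); b1 -= eta2 * g1
    if E < 1.0:   # stop once the error-accuracy requirement is met
        break
print("final squared error:", E)
```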

3. Experiment

LabVIEW is used to build a virtual instrument platform for signal acquisition and storage. Feature extraction of the photoacoustic signals based on the VMD and MFCC algorithms is carried out in Matlab 2018b, and the classification and identification of the signals are completed by the designed BP neural network algorithm.
In this paper, six types of sounds are selected as experimental test signals: small helicopter, Boeing aircraft, Hummer, gale, quadrotor UAV, and fixed-wing UAV. Figure 6 shows the sound signals detected and acquired by the Sagnac-based photoacoustic sensing system. In the experimental test, six different sound signals were collected; for each sound, five groups of signals were collected, each group containing 100 signals, so the total number of response signals was 3000. During the experiments, 90% of the signals from each group were randomly selected as training samples and the remaining 10% were used as test samples to verify the accuracy of the recognition algorithm. Thus, 2700 samples were used for training and 300 for testing, and the training of the recognition model, the accuracy of the recognition system, and the recognition time were evaluated, respectively.
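For reference, the experimental protocol of a random 90%/10% split followed by training and accuracy testing could be reproduced along the following lines, here with scikit-learn's MLPClassifier standing in for the hand-built BP network and random arrays standing in for the real VMD + MFCC features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stand-in feature matrix: 3000 signals x 39 features (assumed shape); in the
# real system these rows come from the VMD + MFCC preprocessing above.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 39))
y = rng.integers(0, 6, 3000)   # six sound classes

# Random 90% / 10% split, as in the experiment (2700 train / 300 test)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(8,), activation='logistic',
                    max_iter=500)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```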
As shown in Figure 7a, the training accuracy stabilized and converged to around 93% after 150 rounds. The test accuracy also fluctuated around 93%, although with slightly larger fluctuations than the training curve; the training and test accuracy thus remained consistent. The loss curves are shown in Figure 7b. After 150 rounds, the training and test loss curves tended to be steady, converging to around 0.1 and 0.2, respectively, with the test loss gradually stabilizing at about 0.2; the training and test losses likewise remained consistent.
As shown in Table 1, the BP neural network achieved high recognition rates for the six sounds, with 100% accuracy for the small helicopter in trials 1 and 2; the third trial achieved 100% accuracy on the quadrotor UAV, and the fourth trial achieved 100% accuracy for the fixed-wing UAV. The lowest accuracy was the fourth trial's recognition of the Hummer, at 90.91%, while all other results were above 92%, with an average accuracy of 96.50% over the five experiments. As can be seen from Table 2, the average training time of the BP neural network is 42 s and the average recognition time is 5.3 s, enabling real-time monitoring of intrusion disturbances.
Analyzing the first set of experiments in Table 1 in detail, with 1, 2, 3, 4, 5, and 6 as the labels for small helicopter, Boeing aircraft, Hummer, the wind, quadrotor UAV, and fixed-wing drone, respectively, the 250 randomly selected test samples in the first experiment comprised 41 of category 1, 47 of category 2, 56 of category 3, 34 of category 4, 39 of category 5, and 33 of category 6. The test set was classified and identified, and the BP neural network's recognition results for the six sound signals are analyzed in Figure 8.
As shown in Figure 8, the BP neural network's recognition of the 250 test samples produced nine errors in which the predicted sound did not match the actual sound. To present the statistics of these misclassifications more intuitively, the labels of the misclassified data and their original information are summarized.
As shown in Table 3, of the nine misidentifications in this experiment, three were of the Hummer, two each were of the Boeing aircraft and the quadrotor UAV, and one each was of the gale and the fixed-wing UAV, giving an overall false alarm rate below 3.6% for the recognition system. The recognition errors are attributed to the small sample size of the BP neural network's input data and the unoptimized sound signal feature extraction algorithm.
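The per-class error statistics and false alarm rate reported above can be tabulated from the predictions with a confusion matrix, as in the sketch below; the tiny label arrays are placeholders, not the experiment's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["small helicopter", "Boeing aircraft", "Hummer",
          "the wind", "quadrotor UAV", "fixed-wing drone"]

# y_true / y_pred would come from the trained classifier on the 250-sample
# test set; short stand-in arrays are used here for illustration.
y_true = np.array([2, 5, 2, 4, 1, 2, 3, 4, 1, 0, 0, 3])
y_pred = np.array([3, 0, 1, 3, 2, 3, 2, 3, 3, 0, 0, 3])

cm = confusion_matrix(y_true, y_pred)      # rows: actual, columns: predicted
errors = y_true != y_pred
print("misidentifications:", int(errors.sum()))
print("false alarm rate: %.1f%%" % (100.0 * errors.mean()))
```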

4. Conclusions

Based on the Sagnac optical fiber acoustic sensing system, a multi-target recognition system for optical acoustic signals was proposed, built on a feature extraction algorithm fusing the VMD and MFCC algorithms with a BP neural network classification network. Simultaneous multi-target recognition experiments were completed for six types of sound signals: small helicopter, Boeing aircraft, Hummer, the wind, quadrotor UAV, and fixed-wing drone. A total of 3000 sets of data were tested in the experiment; 2700 sets of measurement signals were randomly selected as training samples for the neural network, and the remaining 300 sets were used as test samples to verify the recognition accuracy. The experimental results show that the BP neural network algorithm achieves better than 96.5% accuracy in the six-class recognition of the response signals, with a photoacoustic signal recognition time of less than 5.3 s. Future studies should focus on increasing the number of training samples and optimizing the feature extraction algorithm to further improve the recognition accuracy of the system.

Author Contributions

Conceptualization, J.C. and N.W.; methodology, X.Z. and J.C.; software, X.Z. and J.W.; validation, X.Z., R.T. and J.W.; formal analysis, J.C. and R.T.; investigation, J.R. and C.L.; resources, Y.Z. and C.L.; data curation, J.W. and X.Z.; writing—original draft preparation, X.Z. and R.T.; writing—review and editing, J.C. and N.W.; visualization, X.Z. and R.T.; supervision, Y.Z.; project administration, Y.Z.; funding acquisition, N.W., J.C. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 51875067), the Sichuan Science and Technology Program (grant number 2021yj0541), and the Natural Science Foundation of Sichuan Province (grant number 2022NSFSC0525).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lyu, C.; Jiang, J.; Li, B.; Huo, Z.; Yang, J. Abnormal events detection based on RP and inception network using distributed optical fiber perimeter system. Opt. Lasers Eng. 2020, 137, 106377. [Google Scholar] [CrossRef]
  2. Li, Z.; Zhang, J.; Wang, M.; Zhong, Y.; Peng, F. Fiber distributed acoustic sensing using convolutional long short-term memory network: A field test on high-speed railway intrusion detection. Opt. Express 2020, 28, 2925–2938. [Google Scholar] [CrossRef]
  3. Zhan, Y.; Song, Z.; Sun, Z.; Yu, M.; Guo, A.; Feng, C.; Zhong, J. A distributed optical fiber sensor system for intrusion detection and location based on the phase-sensitive OTDR with remote pump EDFA. Optik 2020, 225, 165020. [Google Scholar] [CrossRef]
  4. Zhang, G.; Zhang, W.; Gui, L.; Li, S.; Fang, S.; Zuo, C.; Wu, X.; Yu, B. Ultra-sensitive high temperature sensor based on a PMPCF tip cascaded with an ECPMF Sagnac loop. Sens. Actuators A Phys. 2020, 314, 112219. [Google Scholar] [CrossRef]
  5. Petrie, C.M.; McDuffee, J.L. Liquid level sensing for harsh environment applications using distributed fiber optic temperature measurements. Sens. Actuators A Phys. 2018, 282, 114–123. [Google Scholar] [CrossRef]
  6. Xu, J.; Tang, X.; Xin, L.; Sun, Z.; Ning, T. High sensitivity magnetic field sensor based on hybrid fiber interferometer. Opt. Fiber Technol. 2023, 78, 103321. [Google Scholar] [CrossRef]
  7. Bao, J.; Mo, J.; Xu, L. VMD-based vibrating fiber system intrusion signal recognition. Opt.—Int. J. Light Electron Opt. 2020, 205, 163753. [Google Scholar] [CrossRef]
  8. Wang, N.; Fang, N.; Wang, L. Intrusion recognition method based on echo state network for optical fiber perimeter security systems. Opt. Commun. 2019, 451, 301–306. [Google Scholar] [CrossRef]
  9. Ren, Z.; Yao, J.; Huang, Y. High-performance railway perimeter security system based on the inline time-division multiplexed fiber Fabry-Perot interferometric sensor array. Opt.—Int. J. Light Electron Opt. 2022, 249, 168191. [Google Scholar] [CrossRef]
  10. Wang, J.; Tang, R.; Chen, J.; Wang, N.; Zhu, Y.; Zhang, J.; Ruan, J. Study of Straight-Line-Type Sagnac Optical Fiber Acoustic Sensing System. Photonics 2023, 10, 83. [Google Scholar] [CrossRef]
  11. Chen, J.; Wang, J.; Wang, N. An Improved Acoustic Pick-Up for Straight Line-Type SagnacFiber Optic Acoustic Sensing System. Sensors 2022, 22, 8193. [Google Scholar] [CrossRef]
  12. Chen, P.; You, C.; Ding, P. Event classification using improved salp swarm algorithm based probabilistic neural network in fiber-optic perimeter intrusion detection system. Opt. Fiber Technol. 2020, 56, 102182. [Google Scholar] [CrossRef]
  13. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  14. Liu, B.; Jiang, Z.; Nie, W.; Ran, Y.; Lin, H. Research on leak location method of water supply pipeline based on negative pressure wave technology and VMD algorithm. Measurement 2021, 186, 110235. [Google Scholar] [CrossRef]
  15. Chen, S.; Li, Y.; Huang, L.; Yin, H.; Zhang, J.; Song, Y.; Wang, M. Vehicle identification based on Variational Mode Decomposition in phase-sensitive optical time-domain reflectometer. Opt. Fiber Technol. 2020, 60, 102374. [Google Scholar] [CrossRef]
  16. Liu, K.; Sun, Z.; Jiang, J.; Ma, P.; Wang, S.; Weng, L.; Xu, Z.; Liu, T. A Combined Events Recognition Scheme Using Hybrid Features in Distributed Optical Fiber Vibration Sensing System. IEEE Access 2019, 7, 105609–105616. [Google Scholar] [CrossRef]
  17. Zhao, Z.; Wang, H.; Huang, Y.; Yao, H.; Li, N.; Tan, H.; Liu, Y. Research on non-invasive load identification method based on VMD. Energy Rep. 2023, 9, 460–469. [Google Scholar] [CrossRef]
  18. Anwar, M.Z.; Kaleem, Z.; Jamalipour, A. Machine Learning Inspired Sound-Based Amateur Drone Detection for Public Safety Applications. IEEE Trans. Veh. Technol. 2019, 68, 2526–2534. [Google Scholar] [CrossRef]
  19. Ghaffar, M.S.B.A.; Khan, U.S.; Iqbal, J.; Rashid, N.; Hamza, A.; Qureshi, W.S.; Tiwana, M.I.; Izhar, U. Improving classification performance of four class FNIRS-BCI using Mel Frequency Cepstral Coefficients (MFCC). Infrared Phys. Technol. 2020, 112, 103589. [Google Scholar] [CrossRef]
  20. Ks, D.R.; Rudresh, G.S. Comparative performance analysis for speech digit recognition based on MFCC and vector quantization. Glob. Transit. Proc. 2021, 2, 513–519. [Google Scholar] [CrossRef]
  21. Zhang, H.; Mcloughlin, I.; Yan, S. Robust sound event classification using deep neural networks. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 540–552. [Google Scholar]
  22. Liu, L.; Lu, P.; Liao, H. Fiber-optic Michelson interferometric acoustic sensor based on a PP/PET diaphragm. IEEE Sens. J. 2016, 16, 3054–3058. [Google Scholar] [CrossRef]
  23. Song, S.; Xiong, X.; Wu, X. Modeling the SOFC by BP neural network algorithm. Int. J. Hydrogen Energy 2021, 46, 20065–20077. [Google Scholar] [CrossRef]
  24. Li, B.; Shen, L.; Zhao, Y.; Yu, W.; Lin, H.; Chen, C.; Li, Y.; Zeng, Q. Quantification of interfacial interaction related with adhesive membrane fouling by genetic algorithm back propagation (GABP) neural network. J. Colloid Interface Sci. 2023, 640, 110–120. [Google Scholar] [CrossRef]
  25. Muruganandam, S.; Joshi, R.; Suresh, P.; Balakrishna, N.; Kishore, K.H.; Manikanthan, S.V. A deep learning-based feed forward artificial neural network to predict the K-barriers for intrusion detection using a wireless sensor network. Meas. Sens. 2023, 25, 100613. [Google Scholar] [CrossRef]
Figure 1. Linear Sagnac fiber optic acoustic sensor.
Figure 2. Block diagram of the sound recognition system.
Figure 3. (a) Plot of Mel frequency versus linear frequency. (b) Diagram of Mel filter bank setup.
Figure 4. Structure of the three-layer BP neural network.
Figure 5. Flow chart of BP neural network sound recognition.
Figure 6. Response of the system to different sound signals.
Figure 7. (a) Accuracy curves; (b) loss curves.
Figure 8. (a) BP neural network sound recognition error map; (b) BP neural network sound recognition results graph.
Table 1. BP neural network sound signal recognition accuracy.

Experiment | Small Helicopter (%) | Boeing Aircraft (%) | Hummer (%) | The Wind (%) | Quadrotor UAV (%) | Fixed-Wing Drone (%) | Average Accuracy (%)
1 | 100 | 95.74 | 94.64 | 97.06 | 94.87 | 96.97 | 96.40
2 | 100 | 94.29 | 97.50 | 92.31 | 97.62 | 97.44 | 96.53
3 | 96.33 | 93.10 | 96.55 | 95.45 | 100 | 97.44 | 96.31
4 | 94.23 | 95.74 | 90.91 | 94.12 | 97.22 | 100 | 96.00
5 | 95.45 | 97.92 | 96.02 | 92.59 | 97.62 | 97.83 | 97.28
Table 2. BP neural network recognition time.

 | First Time/s | Second Time/s | Third Time/s | Fourth Time/s
BP neural network training | 40 | 43 | 43 | 42
BP neural network recognition | 6 | 5 | 5 | 5.3
Table 3. Statistical table of misidentifications.

Data Number | Actual Label | Predicted Label
33 | Hummer | The wind
42 | Fixed-wing drone | Small helicopter
61 | Hummer | The wind
100 | Quadrotor UAV | The wind
151 | Boeing aircraft | The wind
154 | Boeing aircraft | Hummer
172 | Hummer | Boeing aircraft
176 | The wind | Hummer
249 | Quadrotor UAV | The wind