Article

A Speech Enhancement Algorithm for Speech Reconstruction Based on Laser Speckle Images

1
Graduate Department, Wuhan Research Institute of Posts and Telecommunications, Wuhan 430074, China
2
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
3
School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 330; https://doi.org/10.3390/s23010330
Submission received: 16 August 2022 / Revised: 2 September 2022 / Accepted: 3 September 2022 / Published: 28 December 2022
(This article belongs to the Special Issue Photoacoustic Imaging and Sensing)

Abstract

In the optical system for reconstructing speech signals from laser speckle images, resonance between the sound source and nearby objects leads to a frequency response problem that seriously degrades the accuracy of the reconstructed speech. In this paper, we propose a speech enhancement algorithm to reduce this frequency response. The results show that, after applying the speech enhancement algorithm, the frequency spectrum correlation coefficient between the reconstructed sinusoidal signal and the original sinusoidal signal improves by up to 82.45%, and that for real speech signals improves by up to 56.40%. This demonstrates that the speech enhancement algorithm is a valuable tool for mitigating the frequency response problem and improving the accuracy of reconstructed speech.

1. Introduction

In the field of information security and the prevention of social crimes, covertly acquiring clear remote speech signals has become an important research topic [1,2]. Over the years, researchers have proposed a variety of methods for measuring vibration signals [3,4,5,6,7,8,9,10,11], many of which are laser-based and applied to speech signal reconstruction. There are two main approaches to remote vibration signal detection: one based on the laser Doppler vibrometer (LDV) [3,4,5,6,7,8] and the other on optical sensing and image processing technologies [9,10,11]. LDV offers non-contact measurement, high spatial and temporal resolution, and real-time processing, but it cannot achieve full-field detection. Optical sensing and image processing technology is mainly divided into two types: natural light image processing [12] and coherent light image processing [13,14,15,16]. The former has a simple system structure and is easy to implement; however, it requires high computational cost and its processing speed is slow. In the 1980s, the authors of [13,14] proposed methods to extract subtle motion from speckle images, and in recent years researchers have also reconstructed speech from speckle images [15,16]. The principle of this method is to irradiate a rough-surfaced object near the sound source with a laser; the reflected light interferes to form a secondary speckle pattern [17]. The sound makes the surrounding objects vibrate slightly, which in turn shifts the speckle pattern slightly, and the remote speech signal is reconstructed by extracting the small movements between speckle images. A system for remote speech reconstruction based on laser speckle patterns has two main parts: the first is an optical system that collects continuous speckle image sequences; the second is a reconstruction algorithm that detects movement in the speckle images and reconstructs the remote speech signal.
Regarding optical system construction, ref. [18] proposed a set of simplified optical equipment that reduces cost and realizes full-field non-contact detection. This optical device makes the technology more suitable for remote monitoring, and current laser and imaging technology can collect high-quality speckle images at low cost.
Regarding speech reconstruction algorithm, researchers have proposed a variety of methods, including digital image correlation (DIC) [19,20,21,22,23,24], optical flow method [16,25,26], and intensity method [27,28]. Ref. [29] proposed a geometric method to explain the motion of speckle. Ncorr [20] is a digital correlation algorithm specially designed by researchers for two-dimensional images. In recent years, with the rapid development of machine learning and artificial intelligence technology, many neural-network-based methods have been proposed, such as convolutional neural network (CNN) [30,31,32] and convolutional long short-term memory (LSTM) [33].
At present, the detection method based on laser speckle images is the most suitable for this task, but a frequency response problem arises in actual experiments and applications. We found that the speech signal reconstructed by this optical device [18] exhibits inhomogeneous enhancement or attenuation of speech components at different frequencies, and that the frequency response differs across vibration objects. Every object has a natural frequency: when a sound wave at this natural frequency reaches the object, its vibration amplitude grows the most. Because different objects have different natural frequencies, their frequency responses also differ. This phenomenon greatly affects the accuracy of the reconstructed sound signal.
At present, many reconstruction algorithms ignore the frequency response, and few algorithms have been proposed to solve this problem. In this paper, a speech enhancement algorithm for different vibration objects is proposed to weaken the frequency response and improve the accuracy of speech reconstruction.
This paper has two main contributions.
  • We identify the frequency response problem in long-distance speech reconstruction based on laser speckle images.
  • We propose a speech enhancement algorithm designed to reduce the influence of the frequency response, which greatly improves the accuracy of reconstructed speech signals.
The rest of this paper is organized as follows: Section 2 introduces the methodology, including the DIC method and the speech enhancement algorithm. Section 3 presents the experimental setup. Section 4 introduces the experimental datasets and evaluation metrics. Section 5 presents the results, and Section 6 concludes the paper.

2. Methodology

2.1. Digital Image Correlation Method

There is room for improvement in many speech reconstruction algorithms; for example, the intensity method runs fast but has low accuracy. The performance of six algorithms is compared in [34], and the results show that the cross-correlation method is one of the best choices. In order to accurately detect the frequency response in this optical device, this paper uses the DIC method as the benchmark.
Figure 1 shows the conversion between object vibration and speckle pattern displacement. In this figure, the transversal plane is regarded as the xoy plane, and the axial direction as the z-axis. The motion of the object can be decomposed into three components: transverse, axial, and tilt. Ref. [18] proved that when the speckle image captured by the camera is strongly defocused, only the tilt motion has a noticeable impact on the displacement of the speckle image; the influence of the other two motions on the shape and displacement of the speckle image can be ignored.
Refs. [35,36] derived the speckle formation theory in detail, and ref. [18] considered moving the imaging plane from Z1 to Z2. A(xo, yo) is the amplitude distribution of the speckle at Z2:
$$A(x_o, y_o) = \iint \exp\left[i\phi(x, y)\right] \exp\left[i\left(\beta_x x + \beta_y y\right)\right] \exp\left[\frac{2\pi i}{\lambda Z_2}\left(x x_o + y y_o\right)\right] \mathrm{d}x\, \mathrm{d}y$$

$$\beta_x = \frac{4\pi \tan \alpha_x}{\lambda}$$

$$\beta_y = \frac{4\pi \tan \alpha_y}{\lambda}$$
where λ is the optical wavelength, ϕ is the random phase generated by the rough surface, and αx, αy are the tilting angles about the x-axis and y-axis, respectively.
The inverse of the magnification of the imaging system is denoted by M:

$$M = \frac{Z_3 - F}{F} \approx \frac{Z_3}{F}$$
where F is the focal length of the imaging lens.
The speckle pattern displacement d is related to the tilting angle α of the object:

$$d = \frac{Z_2\, \alpha}{M}$$
According to the previous description, under appropriate conditions there is only speckle movement between two consecutive speckle images, with no shape change, and the displacement lies in the xoy plane. The relationship between two consecutive speckle images It and It+1 is:

$$I_{t+1}(x, y) = I_t(x + \Delta x,\; y + \Delta y)$$

where Δx, Δy are the relative displacements in the x-axis and y-axis directions.
The calculation steps of the DIC method are shown in Figure 2: take two consecutive speckle images It and It+1, and calculate the correlation coefficient between them; at the maximum correlation, the corresponding displacement is the relative displacement between the two images.
corr2 is a function that computes the correlation coefficient between two matrices. The correlation coefficient r can be expressed as:

$$r(A, B) = \mathrm{corr2}(A, B) = \frac{\sum_m \sum_n \left(A_{mn} - \bar{A}\right)\left(B_{mn} - \bar{B}\right)}{\sqrt{\left(\sum_m \sum_n \left(A_{mn} - \bar{A}\right)^2\right)\left(\sum_m \sum_n \left(B_{mn} - \bar{B}\right)^2\right)}}$$

where A and B are two-dimensional matrices of the same size, and Ā and B̄ are the means of A and B, respectively.
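As an illustration, the displacement search described above can be sketched in a few lines of NumPy. This is a minimal integer-pixel version with a hypothetical subset size and search range, not the authors' implementation:

```python
import numpy as np

def corr2(a, b):
    """Correlation coefficient between two same-size 2-D matrices (as in MATLAB's corr2)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def dic_displacement(img_t, img_t1, max_shift=5):
    """Integer-pixel displacement between two consecutive speckle frames:
    slide a central subset of img_t over img_t1 and keep the (dx, dy)
    shift that maximizes corr2."""
    h, w = img_t.shape
    m = max_shift
    subset = img_t[m:h - m, m:w - m]          # reference subset from frame t
    best_r, best_shift = -2.0, (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = img_t1[m + dy:h - m + dy, m + dx:w - m + dx]
            r = corr2(subset, cand)
            if r > best_r:
                best_r, best_shift = r, (dx, dy)
    return best_shift
```

Shifting a random speckle field by a known amount and feeding both frames to `dic_displacement` recovers that shift, which is the sanity check one would run before trusting the method on real data.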

2.2. Speech Enhancement Algorithm

We use a sinusoidal signal whose frequency varies from 80 to 1600 Hz with constant amplitude as the audio at the sound source (Figure 3 shows its waveform and frequency spectrum) and collect the corresponding speckle image dataset. We then use the algorithm in Section 2.1 to reconstruct the sinusoidal signal when the vibration object is a carton (Figure 4 shows the waveform and frequency spectrum of the reconstructed signal). Comparing Figure 3 with Figure 4, the amplitude of the reconstructed signal differs across frequencies, which seriously affects the accuracy of the reconstructed speech signal. Figure 5 shows the reconstructed signal when the vibration object is a paper cup; comparing Figure 4 with Figure 5, the frequency responses of the two vibration objects also differ.
Based on this phenomenon, we measured the amplitude changes of several common vibration objects at different frequencies and, according to the degree of change, enhanced the reconstructed speech signals at different frequencies to different degrees. This reduces the influence of the frequency response. The design flow of the algorithm is shown in Figure 6. To explain the steps of the speech enhancement algorithm more clearly, Figure 7 shows the result of each step when the vibration object is a carton.
  • Sinusoidal signal: Table 1 shows the common frequency range of speech. The sinusoidal signal's frequency varies from 80 to 1600 Hz, and its amplitude remains constant at 1.
  • Capture laser speckle images: Use the high-speed camera to collect the corresponding laser speckle images.
  • DIC method: Speech signals are reconstructed from speckle images using the DIC method described in Section 2.1. The discrete frequency domain sequence of the reconstructed sinusoidal signal is s(f).
  • Connect local highest points: Connect the local maxima of the amplitude spectrum of the reconstructed speech to obtain its envelope signal, named E(f).
  • Enhancement signal: Equation (8) gives the formula for the signal h(f). Note that E(f) is very small at some frequencies, which makes h(f) very large; when h(f) is greater than 1000, it is set to 1.

    $$h(f) = \frac{\max E(f)}{E(f)}$$
  • Real speech domain: Multiply the discrete frequency domain sequence of the reconstructed real speech, r1(f), by the signal h(f). The enhanced speech discrete frequency domain sequence is R(f). According to the frequency range of speech, we process frequencies from 80 to 1400 Hz.

    $$R(f) = r_1(f) \times h(f)$$
After the above steps, the discrete frequency domain sequence R(f) of the real speech signal is obtained, and the enhanced speech waveform can be recovered using the inverse discrete Fourier transform (Equation (10)):

$$r(n) = \frac{1}{N} \sum_{k=1}^{N} R(k)\, e^{2\pi i (n-1)(k-1)/N}$$

where N is the total number of samples of the sequence R, R is the discrete frequency domain sequence of the enhanced speech, and r is the discrete time domain sequence of the enhanced speech.
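The enhancement pipeline above can be sketched numerically. This is a simplified illustration; the local-maxima envelope routine, the small floor value, and the function names are our assumptions, not the authors' code:

```python
import numpy as np

def envelope_from_peaks(spectrum):
    """E(f): connect the local maxima of an amplitude spectrum by linear interpolation."""
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] >= spectrum[i - 1] and spectrum[i] >= spectrum[i + 1]]
    if not peaks:
        return spectrum.copy()
    return np.interp(np.arange(len(spectrum)), peaks, spectrum[peaks])

def enhance(reconstructed, sweep_spectrum):
    """Flatten the frequency response of a reconstructed speech signal.
    sweep_spectrum: |DFT| of the reconstructed constant-amplitude sinusoidal sweep,
    measured on the same vibration object (same length as `reconstructed`)."""
    E = envelope_from_peaks(sweep_spectrum)
    h = np.max(E) / np.maximum(E, 1e-12)   # h(f) = max E(f) / E(f)
    h[h > 1000] = 1.0                      # clip ill-conditioned gains, as in the text
    R = np.fft.fft(reconstructed) * h      # R(f) = r1(f) x h(f)
    return np.real(np.fft.ifft(R))         # inverse DFT back to the time domain
```

When the measured sweep spectrum is flat (no frequency response), h(f) is 1 everywhere and the speech passes through unchanged, which is the expected degenerate behavior.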

3. Experimental Setup

Based on previous research results, the formation and acquisition process of the laser speckle image is set up as shown in Figure 8. A speckle is a three-dimensional ellipsoid with its long axis along the light propagation direction. Figure 9 shows the experimental platform, Figure 10 and Figure 11 are simulation diagrams of the equipment for the two lasers, and Figure 12 is a photograph of the equipment. The equipment mainly includes:
  • The high-speed camera MVCAM AI-030U815M, with a maximum frame rate of 3200 frames per second (fps);
  • He-Ne laser (detailed parameters are shown in Table 2);
  • Fiber laser (detailed parameters are shown in Table 3);
  • Machine vision experiment frame, with fine-tuning camera clip and universal clip;
  • One personal computer (PC) with universal serial bus 3.0 (USB3.0) interface.
In this experiment, the high-speed camera is used to collect speckle images. The frame rate of the camera is closely related to the exposure interval. The acquisition of speckle images by the high-speed charge coupled device (CCD) camera can be regarded as uniform sampling of a continuous speckle video. The sampling frequency fs of the high-speed camera and the highest speech frequency fm must satisfy the Nyquist theorem (Equation (11)):

$$f_s \geq 2 f_m$$

Equation (11) shows that the frame rate of the high-speed camera should be at least twice the highest frequency of the speech. The data in Table 1 give the speech frequency ranges of males and females, with fm = 1200 Hz. Since all frequency ranges in the speech should be recoverable to meet practical requirements, the frame rate of the high-speed camera must be at least 2400 fps. The camera used in this experiment runs at 3200 fps, which meets the basic requirements of speech reconstruction.
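This frame-rate check is simple arithmetic, as the short sketch below illustrates:

```python
def min_frame_rate(f_max_hz):
    """Nyquist: the camera frame rate must be at least twice the highest speech frequency."""
    return 2 * f_max_hz

# With f_m = 1200 Hz from Table 1, the camera must run at >= 2400 fps;
# the 3200 fps camera used in this experiment therefore satisfies the requirement.
```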
In conjunction with Figure 1, we describe the physical parameters of the speckle images. The first is the resolution of the speckle pattern in the Z2 plane:

$$\delta x = \frac{\lambda Z_2}{D} \cdot \frac{1}{M} = \frac{\lambda F}{D} \cdot \frac{Z_2}{Z_3}$$

where D is the diameter of the laser beam, Z2 and Z3 are the distances shown in Figure 1, F is the focal length of the lens, and λ is the optical wavelength.
The optical system imposes a requirement on the focal length F. Let Δs be the pixel size of the detector, and assume that every speckle in this plane is observed by at least K pixels. The requirement on F is:

$$F = \frac{K\, \Delta s\, Z_3\, D}{Z_2\, \lambda}$$
The distance Z2 must satisfy:

$$Z_2 > \frac{D^2}{4\lambda}$$
Finally, the number of speckles in every dimension of the spot is N:

$$N = \frac{\phi}{M\, \delta x} = \frac{\phi D}{\lambda Z_2} = \frac{F \cdot D}{F_{\#}\, \lambda\, Z_2}$$
where ϕ is the diameter of the aperture of the lens, F# is the F-number of the lens, and Mδx represents the speckle size obtained in the Z2 plane.
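The design formulas above can be evaluated together for a candidate setup. The sketch below is illustrative only; the variable names and example values are our own assumptions, not parameters from the paper:

```python
def speckle_parameters(lam, D, F, Z2, Z3, pixel, K, aperture):
    """Evaluate the speckle design formulas (all lengths in meters).
    lam: optical wavelength; D: laser beam diameter; F: lens focal length;
    Z2, Z3: the distances from Figure 1; pixel: detector pixel size (delta-s);
    K: pixels per speckle; aperture: lens aperture diameter (phi)."""
    delta_x = (lam * F / D) * (Z2 / Z3)            # speckle resolution in the Z2 plane
    F_required = K * pixel * Z3 * D / (Z2 * lam)   # focal length requirement
    Z2_min = D ** 2 / (4 * lam)                    # far-field condition on Z2
    N = aperture * D / (lam * Z2)                  # speckles per dimension of the spot
    return delta_x, F_required, Z2_min, N
```

A quick evaluation with round numbers confirms the formulas are dimensionally consistent: enlarging Z2 enlarges the speckles (larger delta_x) while reducing their number N, matching the trade-off the equations encode.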

4. Experiment Datasets and Evaluation Metrics

4.1. Data Collection

4.1.1. Sinusoidal Datasets

According to the speech frequency range in Table 1, we set the frequency of the sinusoidal signal to change gradually from 80 to 1600 Hz, with an amplitude of 1. The beam generated by the He-Ne laser is directed at five vibration objects near the sound source: a carton, A4 paper, a plastic cup, a paper cup, and a leaf. The CCD camera collects the generated speckle image sequences; the number of speckle images is shown in Table 4.

4.1.2. Two Laser Datasets

In order to verify the versatility of the algorithm under different laser systems, we use the He-Ne laser and the fiber laser to irradiate the carton near the sound source, and the number of speckle images collected is shown in Table 5.

4.1.3. Five Vibration Object Datasets

To verify the performance of the algorithm under different vibration objects, we use real speech as the sound source signal, project the laser onto different vibration objects, and collect the corresponding laser speckle image sequences respectively. The number of speckle images collected by five vibration objects is shown in Table 6.

4.2. Evaluation Metrics

This paper needs to measure the correlation between the original audio signal and the speech signal reconstructed from speckle images. Since the start times of the two signals cannot be accurately aligned, the waveform correlation coefficient in the time domain is meaningless. To evaluate the accuracy of reconstructed speech, we therefore use the frequency spectrum correlation coefficient as the evaluation metric; specifically, this is the correlation coefficient of the amplitude spectra.
Scaling speech amplitude affects the loudness of the speech, not its timbre; therefore, the frequency spectrum correlation coefficient can quantify the accuracy of speech reconstruction.
The value of the correlation coefficient lies between −1 and 1. When it is 1, the two signals are equal or linearly proportional; when it is −1, one signal is linearly proportional to the negative of the other:

$$\rho(x, y) = \frac{1}{N-1} \sum_{i=1}^{N} \left(\frac{x_i - \mu_x}{\sigma_x}\right) \left(\frac{y_i - \mu_y}{\sigma_y}\right)$$

where x and y are the amplitude signals of discrete frequency spectra with the same number of sampling points, N is the total number of samples, μx and μy are the means of x and y, σx and σy are their standard deviations, and ρ is the correlation coefficient.
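A minimal sketch of this metric on DFT amplitude spectra (our own illustration; it assumes the two signals have already been trimmed to the same length):

```python
import numpy as np

def spectrum_corr(x, y):
    """Pearson correlation between the amplitude spectra of two equal-length signals."""
    ax = np.abs(np.fft.rfft(x))   # amplitude spectrum of the original signal
    ay = np.abs(np.fft.rfft(y))   # amplitude spectrum of the reconstructed signal
    n = len(ax)
    zx = (ax - ax.mean()) / ax.std(ddof=1)
    zy = (ay - ay.mean()) / ay.std(ddof=1)
    return float(np.sum(zx * zy) / (n - 1))
```

Because a circular time shift changes only the phase of the DFT, and amplitude scaling cancels in the standardization, the metric is insensitive to exactly the alignment and loudness differences discussed above.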

5. Results

5.1. Performance on Sinusoidal Datasets

We recorded the change in the frequency spectrum correlation coefficient between the reconstructed speech and the original speech after applying the speech enhancement algorithm. We processed the sinusoidal datasets with the algorithm to verify whether it can increase the frequency spectrum correlation coefficient between the reconstructed sinusoidal signal and the original sinusoidal audio. Table 7 shows the sinusoidal signal reconstruction accuracy in detail, and Figure 13 shows the line graph of the frequency spectrum correlation coefficient of the reconstructed sinusoidal signal for different vibration objects. The frequency response has a great impact on the reconstruction of the sinusoidal signal, keeping the spectral correlation coefficient low. The sinusoidal dataset only verifies whether our proposed speech enhancement algorithm can reduce the frequency response; it cannot verify whether the algorithm also benefits real speech. The following experiments therefore verify the enhancement effect of the algorithm on real speech.

5.2. Performance on Two Laser Datasets

Using the same speech as the audio at the sound source, two lasers are used to irradiate the same vibration object (a carton), and the CCD camera collects the laser speckle image sequences. Table 8 compares the frequency spectrum correlation coefficients with and without speech enhancement. Figure 14 shows the reconstructed and original speech waveforms using the He-Ne laser, and Figure 15 shows them using the fiber laser. The waveform comparison also clearly shows that the speech waveform is significantly improved after enhancement.

5.3. Performance on Five Vibration Object Datasets

Table 9 shows the difference in the frequency spectrum correlation coefficient between the reconstructed speech signal and the original speech using DIC alone and DIC + speech enhancement. The results show that the speech enhancement algorithm improves the accuracy of speech signals reconstructed from all the different vibration objects. This further proves that our algorithm adapts well to different environments and has practical application value.
The results in Table 9 also show that for some datasets the accuracy improves only slightly after applying the speech enhancement algorithm. We attribute this to the fact that, when collecting these datasets, there was little interference from the surrounding environment, so the reconstruction accuracy was already high; this is also supported by the fact that the frequency spectrum correlation coefficient before enhancement is higher than for the other datasets. Figure 16 shows the line graph of the frequency spectrum correlation coefficient of the reconstructed speech signals for different vibration objects.

5.4. Discussions

Our speech enhancement algorithm is a post-processing step applied after the speech reconstruction algorithm, so whether the overall system can run in real time depends on the speed of the reconstruction algorithm. Below we discuss some key factors that affect the performance of the speech enhancement algorithm:
(1) Loudspeaker performance: To provide a benchmark for the frequency response, we use a sinusoidal signal whose frequency varies from 80 to 1600 Hz with a constant amplitude of 1 as the audio at the sound source. However, the sinusoidal signal played by the loudspeaker contains errors, and these errors are smaller when the loudspeaker performance is better.
(2) Environmental noise and platform vibration: In the experiments, we find that environmental noise and vibration of the experimental platform seriously interfere with the sinusoidal signal reconstruction and thus affect the frequency spectrum correlation coefficient between the original and reconstructed sinusoidal signals. The reconstructed sinusoidal signal contains environmental noise and platform vibration, whereas the original signal is clean and free of impurities, so the correlation coefficient between the two is low. Therefore, a quiet environment and a stable experimental platform are conducive to verifying the experimental results.

6. Conclusions

The main innovation of this paper is a speech enhancement algorithm that addresses the frequency response problem of speech signal reconstruction from laser speckle images. The experimental results show that our speech enhancement algorithm can effectively improve the accuracy of reconstructed speech, and the comparison of reconstruction accuracy across different vibration object datasets verifies the practicability of the algorithm. We also found that when ambient noise and the vibration interference of the experimental platform are small, the reconstructed speech accuracy is high. Our algorithm represents a major advance in reconstructing speech from laser speckle images.
Although our speech enhancement algorithm clearly reduces the frequency response problem, it also introduces some noise while increasing the accuracy of the reconstructed speech. To obtain higher-definition speech signals, the algorithm needs further improvement; in the future, we will focus on reconstructing speech with a small frequency response and high definition.

Author Contributions

Conceptualization, H.Z.; Data curation, X.H.; Formal analysis, X.H.; Funding acquisition, D.Z. and H.Z.; Investigation, X.H.; Methodology, X.H.; Project administration, H.Z.; Resources, H.Z.; Software, X.H.; Supervision, D.Z., X.W., L.Y. and H.Z.; Validation, L.Y. and H.Z.; Visualization, X.H.; Writing—original draft, X.H.; Writing—review & editing, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Clavel, C.; Ehrette, T.; Richard, G. Events Detection for an Audio-Based Surveillance System. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6 July 2005; pp. 1306–1309. [Google Scholar]
  2. Zieger, C.; Brutti, A.; Svaizer, P. Acoustic Based Surveillance System for Intrusion Detection. In Proceedings of the 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, Genova, Italy, 2–4 September 2009; pp. 314–319. [Google Scholar]
  3. Castellini, P.; Martarelli, M.; Tomasini, E.P. Laser doppler vibrometry: Development of advanced solutions answering to technology’s needs. Mech. Syst. Signal Process. 2006, 20, 1265–1285. [Google Scholar] [CrossRef]
  4. Li, R.; Wang, T.; Zhu, Z.; Xiao, W. Vibration characteristics of various surfaces using an LDV for long-range voice acquisition. IEEE Sens. J. 2010, 11, 1415–1422. [Google Scholar] [CrossRef]
  5. Li, R.; Madampoulos, N.; Zhu, Z.; Xie, L. Performance comparison of an all-fiber-based laser Doppler vibrometer for remote acoustical signal detection using short and long coherence length lasers. Appl. Opt. 2012, 51, 5011–5018. [Google Scholar] [CrossRef] [PubMed]
  6. Rothberg, S.J.; Allen, M.S.; Castellini, P.; Maio, D.D.; Dirckx, J.; Ewins, D.J.; Halkon, B.J.; Muyshondt, P.; Paone, N.; Ryan, T. An international review of laser doppler vibrometry: Making light work of vibration measurement. Opt. Lasers Eng. 2017, 99, 11–22. [Google Scholar] [CrossRef] [Green Version]
  7. Wu, S.S.; Lv, T.; Han, X.Y.; Yan, C.H.; Zhang, H.Y. Remote audio signals detection using a partial-fiber laser Doppler vibrometer. Appl. Acoust. 2018, 130, 216–221. [Google Scholar] [CrossRef]
  8. Garg, P.; Nasimi, R.; Ozdagli, A.; Zhang, S.; Mascarenas, D.; Taha, M.R.; Moreu, F. Measuring transverse displacements using unmanned aerial systems laser doppler vibrometer (UAS-LDV): Development and field validation. Sensors 2020, 20, 6051. [Google Scholar] [CrossRef]
  9. Matoba, O.; Inokuchi, H.; Nitta, K.; Awatsuji, Y. Optical voice recorder by off-axis digital holography. Opt. Lett. 2014, 39, 6549–6552. [Google Scholar] [CrossRef]
  10. Ishikawa, K.; Tanigawa, R.; Yatabe, K.; Oikawa, Y.; Onuma, T.; Niwa, H. Simultaneous imaging of flow and sound using high-speed parallel phase-shifting interferometry. Opt. Lett. 2018, 43, 991–994. [Google Scholar] [CrossRef]
  11. Bianchi, S. Vibration detection by observation of speckle patterns. Appl. Opt. 2014, 53, 931–936. [Google Scholar] [CrossRef]
  12. Davis, A.; Rubinstein, M.; Wadhwa, N.; Mysore, G.J.; Durand, F.; Freeman, W.T. The visual microphone: Passive recovery of sound from video. ACM Trans. Graph. 2014, 33, 1–10. [Google Scholar] [CrossRef]
  13. Peters, W.H.; Ranson, W.F. Digital imaging techniques in experiment stress analysis. Opt. Eng. 1982, 21, 427–431. [Google Scholar] [CrossRef]
  14. Yamaguchi, I. Advances in the laser speckle strain gauge. Opt. Eng. 1988, 27, 214–218. [Google Scholar] [CrossRef]
  15. Zhu, G.; Yao, X.R.; Qiu, P.; Mahmood, W.; Yu, W.K.; Sun, Z.B.; Zhai, G.J.; Zhao, Q. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method. Opt. Eng. 2018, 57, 026117. [Google Scholar] [CrossRef]
  16. Wu, N.; Haruyama, S. The 20k samples-per-second real time detection of acoustic vibration based on displacement estimation of one-dimensional laser speckle images. Sensors 2021, 21, 2938. [Google Scholar] [CrossRef] [PubMed]
  17. Goodman, J.W. Speckle Phenomena in Optics: Theory and Application; Roberts and Company: Placerville, CA, USA, 2007. [Google Scholar]
  18. Zalevsky, Z.; Beiderman, Y.; Margalit, I.; Gingold, S.; Teicher, M.; Mico, V.; Garcia, J. Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern. Opt. Express 2009, 17, 21566–21580. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Hua, T.; Xie, H.; Wang, S.; Hu, Z.; Chen, P.; Zhang, Q. Evaluation of the quality of a speckle pattern in the digital image correlation method by mean subset fluctuation. Opt. Laser Technol. 2011, 43, 9–13. [Google Scholar] [CrossRef]
  20. Blaber, J.; Adair, B.; Antoniou, A. Ncorr: Open-source 2d digital image correlation matlab software. Exp. Mech. 2015, 55, 1105–1122. [Google Scholar]
  21. Li, L.; Gubarev, F.A.; Klenovskii, M.S.; Bloshkina, A.I. Vibration measurement by means of digital speckle correlation. In Proceedings of the 2016 International Siberian Conference on Control and Communications (SIBCON), Moscow, Russia, 12–14 May 2016; Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2016; pp. 1–5. [Google Scholar]
  22. Hu, W.; Miao, H. Sub-pixel displacement algorithm in temporal sequence digital image correlation based on correlation coefficient weighted fitting. Opt. Lasers Eng. 2018, 110, 410–414. [Google Scholar] [CrossRef]
  23. Duadi, D.; Ozana, N.; Shabairou, N.; Wolf, M.; Zalevsky, Z.; Primov-Fever, A. Non-contact optical sensing of vocal fold vibrations by secondary speckle patterns. Opt. Express 2020, 28, 20040–20050. [Google Scholar] [CrossRef]
  24. Liushnevskaya, Y.D.; Gubarev, F.A.; Li, L.; Nosarev, A.V.; Gusakova, V.S. Measurement of whole blood coagulation time by laser speckle pattern correlation. Biomed. Eng. 2020, 54, 262–266. [Google Scholar] [CrossRef]
Figure 1. Theoretical description of the system: the conversion between object vibration and speckle pattern displacement.
Figure 2. Digital image correlation method.
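Figure 2 illustrates the digital image correlation (DIC) method used to extract movement between speckle frames. As an illustrative sketch only (not the authors' exact implementation), the whole-pixel displacement between two frames can be estimated from the peak of their circular cross-correlation, computed in the frequency domain:

```python
import numpy as np

def dic_shift(ref, cur):
    """Estimate the whole-pixel shift between two speckle images via
    FFT cross-correlation (the matching step underlying DIC)."""
    # Circular cross-correlation in the frequency domain.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)).real
    # The correlation peak sits at the displacement (modulo image size).
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap to signed shifts in [-N/2, N/2).
    wrap = lambda d, n: d - n if d > n // 2 else d
    return wrap(dy, ref.shape[0]), wrap(dx, ref.shape[1])

# Synthetic speckle pattern shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, (3, -2), axis=(0, 1))
print(dic_shift(ref, cur))  # (3, -2)
```

In practice, DIC implementations refine this whole-pixel estimate to subpixel precision (e.g. by interpolating around the correlation peak), since speckle displacements caused by sound are typically a fraction of a pixel.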
Figure 3. Original sinusoidal signal. (a) Time domain diagram. (b) Frequency domain diagram.
Figure 4. Reconstructed sinusoidal signal when a carton is the vibration object. (a) Time domain diagram. (b) Frequency domain diagram.
Figure 5. Reconstructed sinusoidal signal when a paper cup is the vibration object. (a) Time domain diagram. (b) Frequency domain diagram.
Figure 6. Speech enhancement algorithm.
Figure 7. The process of speech enhancement when the vibration object is a carton.
Figure 8. Schematic diagram of speckle image formation and collection. The beam from laser S illuminates object O, which has a rough surface and lies near the sound source (P is a point on the surface of O); the reflected light interferes to form speckle. C is the camera sensor (its size is exaggerated).
Figure 9. Experimental setup diagram. The beam from laser A is projected onto object M on platform N. The changing speckle pattern is captured by camera B, and the images are transmitted to computer D through interface C to extract the movement between speckle images.
Figure 10. Simulation diagram of the He-Ne laser system.
Figure 11. Simulation diagram of the fiber laser system.
Figure 12. Photograph of the experimental equipment.
Figure 13. Accuracy comparison of sinusoidal signal reconstruction with and without enhancement.
Figure 14. Comparison of speech signals reconstructed using the He-Ne laser. (a) Original speech signal. (b) Speech signal reconstructed by DIC. (c) Speech signal reconstructed by DIC + speech enhancement.
Figure 15. Comparison of speech signals reconstructed using the fiber laser. (a) Original speech signal. (b) Speech signal reconstructed by DIC. (c) Speech signal reconstructed by DIC + speech enhancement.
Figure 16. Accuracy comparison of speech signal reconstruction with and without enhancement.
Table 1. Frequency range of common speech (Hz).

Sound Area | Male | Female
Bass | 82–392 | 82–392
Midrange | 123–493 | 123–493
Treble | 164–698 | 220–1100
Fundamental frequency range | 64–523 | 160–1200
Table 2. Basic parameters of the He-Ne laser.

Parameter | Quantity | Unit
Wavelength | 632.8 | nm
Operating current | 4~6 | mA
Rated voltage | 220 ± 22 | V
Rated frequency | 50 | Hz
Rated input power | <20 | W
Table 3. Basic parameters of the fiber laser.

Parameter | Quantity | Unit
Center wavelength | 635 | nm
Continuous output power | 60 | mW
Operating voltage | 2.55 | V
Threshold current | 70 | mA
Operating current | 190 | mA
Table 4. The number of speckle images acquired for the sinusoidal datasets.

Vibration Object | Number of Speckle Images | Acquisition Time (s)
Carton | 32,344 | 10.1
A4 paper | 35,952 | 11.2
Plastic cup | 31,941 | 10.0
Paper cup | 32,970 | 10.3
Leaf | 33,427 | 10.4
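Although the paper does not state it explicitly, dividing each dataset's image count by its acquisition time in Table 4 implies the camera's effective capture rate, which is roughly constant across datasets at about 3200 frames per second. A quick check:

```python
# Implied capture rate per dataset in Table 4 (images / seconds).
# The ratio is roughly constant, suggesting a ~3200 fps camera.
datasets = {
    "Carton":      (32_344, 10.1),
    "A4 paper":    (35_952, 11.2),
    "Plastic cup": (31_941, 10.0),
    "Paper cup":   (32_970, 10.3),
    "Leaf":        (33_427, 10.4),
}
for obj, (n_images, seconds) in datasets.items():
    print(f"{obj}: {n_images / seconds:.0f} fps")
```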
Table 5. The number of speckle images acquired for the two laser datasets.

Laser | Number of Speckle Images | Acquisition Time (s)
He-Ne laser | 16,213 | 5.1
Fiber laser | 18,975 | 5.9
Table 6. The number of speckle images acquired for the five vibration object datasets.

Vibration Object | Number of Speckle Images | Acquisition Time (s)
Carton | 18,140 | 5.7
A4 paper | 16,785 | 5.2
Plastic cup | 19,443 | 6.1
Paper cup | 19,247 | 6.0
Leaf | 18,647 | 5.8
Table 7. Comparison of sinusoidal signal reconstruction accuracy.

Vibration Object | DIC | DIC + Speech Enhancement | Improvement
Carton | 0.2968 | 0.4548 | 53.23%
A4 paper | 0.3348 | 0.4561 | 36.23%
Plastic cup | 0.2593 | 0.4731 | 82.45%
Paper cup | 0.2844 | 0.4409 | 55.03%
Leaf | 0.2460 | 0.3963 | 61.10%
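The improvement column follows from the relative gain in the frequency spectrum correlation coefficient, (enhanced − baseline) / baseline; for example, for the plastic cup, (0.4731 − 0.2593)/0.2593 ≈ 82.45%. A minimal check reproducing the Table 7 values:

```python
# Relative improvement of the spectrum correlation coefficient after
# enhancement, reproducing the "Improvement" column of Table 7.
# Values: (DIC, DIC + speech enhancement) per vibration object.
results = {
    "Carton":      (0.2968, 0.4548),
    "A4 paper":    (0.3348, 0.4561),
    "Plastic cup": (0.2593, 0.4731),
    "Paper cup":   (0.2844, 0.4409),
    "Leaf":        (0.2460, 0.3963),
}

for obj, (dic, enhanced) in results.items():
    improvement = (enhanced - dic) / dic * 100
    print(f"{obj}: {improvement:.2f}%")
```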
Table 8. Comparison of the speech reconstruction accuracies of the two laser datasets.

Laser | DIC | DIC + Speech Enhancement | Improvement
He-Ne laser | 0.3624 | 0.5668 | 56.40%
Fiber laser | 0.4155 | 0.5849 | 40.77%
Table 9. Comparison of the speech reconstruction accuracies of the five vibration object datasets.

Vibration Object | DIC | DIC + Speech Enhancement | Improvement
Carton | 0.4107 | 0.6234 | 51.79%
A4 paper | 0.6280 | 0.6478 | 3.15%
Plastic cup | 0.4244 | 0.4786 | 12.77%
Paper cup | 0.3883 | 0.5862 | 50.97%
Leaf | 0.4416 | 0.4735 | 7.22%
Hao, X.; Zhu, D.; Wang, X.; Yang, L.; Zeng, H. A Speech Enhancement Algorithm for Speech Reconstruction Based on Laser Speckle Images. Sensors 2023, 23, 330. https://doi.org/10.3390/s23010330