Article

Wavelet-Domain Information-Hiding Technology with High-Quality Audio Signals on MEMS Sensors

1 School of Computer Science, Yangtze University, Jingzhou 434025, China
2 Department of Applied Mathematics, Tunghai University, Taichung City 407224, Taiwan
3 Department of Mathematics, University of Michigan, Flint, MI 48502, USA
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(17), 6548; https://doi.org/10.3390/s22176548
Submission received: 20 July 2022 / Revised: 21 August 2022 / Accepted: 27 August 2022 / Published: 30 August 2022
(This article belongs to the Special Issue Digital Signal Processing for Modern Technology)

Abstract

Due to the rapid development of sensor technology and the popularity of the Internet, the amount of digital information transmitted has skyrocketed, and its acquisition and dissemination have become easier. This study investigates audio security issues, combined with data compression, for private data transmission over the Internet or on MEMS (micro-electro-mechanical systems) audio sensor digital microphones. Imperceptibility, embedding capacity, and robustness are the three main requirements for audio information-hiding techniques. To satisfy all three, this study proposes a high-quality audio information-hiding technique in the wavelet domain. Because the wavelet domain provides a useful and robust platform for audio information hiding, this study embeds information in multiple coefficients of the discrete wavelet transform (DWT). To achieve imperceptible concealment, we combine the signal-to-noise ratio (SNR) with quantization embedding of these coefficients in a mathematical model, into which amplitude-thresholding compression is also incorporated. Finally, the matrix-type Lagrange principle plays an essential role in solving the model, reducing the network transmission load while protecting personal copyright and private information. Based on the experimental results, the embedded audio nearly maintains the original quality thanks to SNR optimization. Moreover, the proposed method has good robustness against common attacks.

1. Introduction

Digital information transmission is used ever more frequently in Internet applications, artificial intelligence, and data sensing [1,2,3,4,5,6,7,8,9,10]. In many cases, digital information is stolen, copied, or even turned into profit without permission from its legal owner. In general, an audio information-hiding technique should possess three properties: the hidden information must be imperceptible in the embedded audio, the signal-to-noise ratio (SNR) should be 20 dB or more, and the embedding capacity should be at least 20 bps (bits per second) [11,12]. Moreover, the hidden information should resist most common attacks, including re-sampling, MP3 compression, filtering, amplitude modification, time scaling, and so on [12,13,14,15].
Audio information-hiding techniques are classified by their operating domain into time-domain and transform-domain techniques. The discrete wavelet transform (DWT) is one practical transform domain for hiding audio information [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Several earlier procedures embedded watermarks into DWT low-frequency coefficients using quantization-based techniques to obtain adaptive performance [15,20,24]. Chen et al. [15,24] proposed an optimization-based quantization approach with fixed-weighting DWT coefficients that attains high-quality modified audio and high robustness against many common attacks. Li et al. [27] proposed an audio watermarking technique that quantizes the norm ratio of fixed-scaling DWT coefficients, but without considering signal compression in the implementation; moreover, the quality of the modified audio worsens as the weighting varies, and the hidden information is inadequately robust to time-scaling attacks.
This study proposes an optimization model that integrates optimization-based signal steganography [24] with threshold-based compression in the wavelet domain. Firstly, we stored the secret data as binary digits. We adopted the signal-to-noise ratio (SNR) and the amplitude-quantization rules in the wavelet domain as the performance index and constraints, respectively. At the same time, to reduce the amount of embedded audio data, we incorporated threshold-based compression into the constraints. This yields an optimization model that preserves audio quality through both the information-hiding and compression processes. Secondly, the optimization model was solved using the matrix-type Lagrange principle and graphic illustration. Accordingly, we performed information hiding and data compression on each audio signal for private information transmission over the Internet or on MEMS (micro-electro-mechanical systems) audio sensor digital microphones. At the receiving end, the hidden information is extracted without the original audio, and the compressed audio signal is recovered using a cubic spline. To demonstrate the performance of the proposal, we measured appropriate values of the threshold ε and the embedding strength Q in our experiments. The proposed algorithm reduces the network transmission load while preserving the original audio signals and protecting personal privacy.
The rest of this study is organized as follows: Section 2 presents the proposed method and introduces the embedding technique, the optimization model, and the compression. Section 3 illustrates the optimization model and presents the recovery method and the extraction technique. Section 4 discusses the experimental results, and remarks and conclusions are given in Section 5.

2. Proposed Method

This section introduces the embedding technique, the extraction technique, and the compression of the proposed method. Figure 1 shows the block diagram of the proposed algorithm; further detailed introduction will appear in Section 2.1 and Section 2.2.

2.1. Embedding Technique

To embed the private information into the lowest-frequency DWT coefficients, we implemented the DWT using a single prototype function $\psi(x)$ regulated by a scaling parameter and a shift parameter [28,29]. The discrete normalized scaling and wavelet basis functions are defined as

$\varphi_{i,n}(t) = 2^{i/2}\, h_i(2^i t - n)$ (1)

$\psi_{i,n}(t) = 2^{i/2}\, g_i(2^i t - n)$ (2)

where $i$ and $n$ are the dilation and translation parameters, and $h_i$ and $g_i$ denote the low-pass and high-pass filters, respectively. Orthogonal wavelet basis functions make the coefficient expansion simple to compute and allow an audio signal $S(t) \in L^2(\mathbb{R})$ to be expressed as a series expansion of orthogonal scaling functions and wavelets. Throughout this study, the host digital audio signal $S(n)$, $n \in \mathbb{N}$, denotes the sample of the original audio signal at the $n$th sample time; each audio signal was cut into segments, and the DWT was performed on each segment. As a result, the signal-to-noise ratio (SNR)

$\mathrm{SNR} = 10 \log_{10} \left( \| S(n) \|_2^2 \,/\, \| \tilde{S}(n) - S(n) \|_2^2 \right)$

can be rewritten as

$\mathrm{SNR} = 10 \log_{10} \left( \| X_n \|_2^2 \,/\, \| \tilde{X}_n - X_n \|_2^2 \right)$ (3)

where $\tilde{S}(n)$ is the modified digital audio signal, and the vector $\tilde{X}_n = [\,|\tilde{x}_1|\;|\tilde{x}_2|\;\cdots\;|\tilde{x}_n|\,]^T$ consists of the $n$ unknown absolute values of the DWT coefficients with respect to the original DWT coefficient vector $X_n = [\,|x_1|\;|x_2|\;\cdots\;|x_n|\,]^T$ in each segment.
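As a quick numerical check, the SNR defined above can be computed directly from an original and a modified coefficient (or sample) vector. The sketch below uses only the Python standard library; the function name is illustrative, not from the paper:

```python
import math

def snr_db(original, modified):
    """SNR = 10*log10(||X||^2 / ||X_tilde - X||^2) in dB."""
    signal_power = sum(x * x for x in original)
    noise_power = sum((y - x) ** 2 for x, y in zip(original, modified))
    return 10.0 * math.log10(signal_power / noise_power)
```

A small perturbation of each coefficient yields a high SNR, while larger distortions drive it down.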
For convenience, the secret information is stored as a binary sequence. To embed the binary bit “$1_B$” or “$0_B$”, as shown in Figure 1, we perform the DWT and then determine the $n$ unknown DWT coefficient values $\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n$. Accordingly, an optimization-based model for embedding each binary bit is proposed as follows.
We determine the vector $\tilde{X}_n$ such that the SNR defined above is maximized. Since the logarithm is strictly increasing, maximizing the SNR is equivalent to minimizing the ratio $\|\tilde{X}_n - X_n\|_2^2 / \|X_n\|_2^2$, which we therefore adopt as the performance index. The binary bit “$1_B$” or “$0_B$” is then embedded by the optimization model described below:
  • If the bit “$1_B$” is embedded into $X_n$, then $\sum_{i=1}^{n} |x_i|$ is quantized by

    $\text{minimize } \dfrac{(\tilde{X}_n - X_n)^T (\tilde{X}_n - X_n)}{X_n^T X_n}$
    $\text{subject to (a) } A\tilde{X}_n = \left\lfloor \dfrac{\sum_{i=1}^{n} |x_i|}{Q} \right\rfloor Q + \dfrac{3}{4} Q$ (4)
    (b) the compression constraint;

  • If the bit “$0_B$” is embedded into $X_n$, then $\sum_{i=1}^{n} |x_i|$ is quantized by

    $\text{minimize } \dfrac{(\tilde{X}_n - X_n)^T (\tilde{X}_n - X_n)}{X_n^T X_n}$
    $\text{subject to (a) } A\tilde{X}_n = \left\lfloor \dfrac{\sum_{i=1}^{n} |x_i|}{Q} \right\rfloor Q + \dfrac{1}{4} Q$ (5)
    (b) the compression constraint;
where $\lfloor \cdot \rfloor$ is the floor function, and $Q$ is the quantization size or embedding strength, which is adopted as the secret key $K$; the compression constraint is described in Equations (6)–(8) in Section 2.2.
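The quantization targets in constraints (4a) and (5a) depend only on the group sum of absolute coefficients and on Q. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def embed_target(abs_coeffs, bit, Q):
    """Right-hand side of the embedding constraint:
    floor(S/Q)*Q + 3Q/4 for bit 1, floor(S/Q)*Q + Q/4 for bit 0,
    where S is the sum of the absolute DWT coefficients in the group."""
    s = sum(abs_coeffs)
    base = math.floor(s / Q) * Q
    return base + (3.0 * Q / 4.0 if bit == 1 else Q / 4.0)
```

The two targets sit three quarters and one quarter of the way into a quantization cell of width Q, so the extractor can later separate them with a Q/2 threshold.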

2.2. Compression Constraint

Solving the optimization models (4) and (5) and applying the inverse DWT (IDWT) yields the watermarked audio signal $\bar{s}$ with optimal SNR. To reduce the amount of data transmitted over the Internet, we compressed the embedded audio signal $\bar{s}$ using the threshold compression method formulated as follows:
$\hat{s}_0 = \bar{s}_0, \quad \hat{s}_N = \bar{s}_N$ (6)

$\hat{s}_i = \begin{cases} \phi & \text{if } |\bar{s}_{i-1} - \bar{s}_{i+1}| < \varepsilon \\ \bar{s}_i & \text{otherwise} \end{cases}, \quad i = 1, \dots, N-1$ (7)
where ε represents the threshold.
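Equations (6) and (7) keep both endpoints and drop an interior sample whenever its two original neighbors differ by less than the threshold ε. A minimal sketch, using None to stand in for the empty symbol φ (names are illustrative):

```python
def threshold_compress(s, eps):
    """Keep both endpoints; mark interior sample i as dropped (None)
    when |s[i-1] - s[i+1]| < eps, testing the original neighbors."""
    out = list(s)
    for i in range(1, len(s) - 1):
        if abs(s[i - 1] - s[i + 1]) < eps:
            out[i] = None
    return out
```

The dropped positions are later refilled by the cubic-spline recovery step.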
To recover the signal $\{\bar{s}_i\}_{i=0}^{N}$ from the compressed signal $\{\hat{s}_i\}_{i=0}^{N}$, we used cubic functions of the form $f_i(t) = \alpha_i + \beta_i (t - t_i) + \gamma_i (t - t_i)^2 + \eta_i (t - t_i)^3$. We determined the collection of $N$ piecewise cubic spline functions $\{f_i(t) \mid i = 1, \dots, N\}$ describing the entire set of data, where each $f_i(t)$ must satisfy

$f_i(t_i) = \hat{s}_i = f_{i-1}(t_i), \quad f_i'(t_i) = f_{i-1}'(t_i), \quad f_i''(t_i) = f_{i-1}''(t_i), \quad f_1''(t_0) = f_N''(t_N) = 0$ (8)
To ensure the recovery quality, we adjusted the compression threshold ε while considering Q to better fit the optimal SNR.
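The recovery step fits a natural cubic spline through the retained samples. The sketch below is a generic textbook implementation (not code from the paper), assuming the usual natural boundary conditions (second derivative zero at both ends): it solves the standard tridiagonal system for the second derivatives and evaluates the piecewise cubic.

```python
def natural_spline_coeffs(t, y):
    """Coefficients (alpha, beta, gamma, eta) of the natural cubic spline
    f_i(x) = alpha + beta*(x-t[i]) + gamma*(x-t[i])**2 + eta*(x-t[i])**3
    on each interval [t[i], t[i+1]]."""
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M_i (M_0 = M_n = 0).
    sub, diag, sup, rhs = [0.0] * (n + 1), [1.0] * (n + 1), [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        sub[i], diag[i], sup[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    for i in range(1, n + 1):          # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):      # back substitution
        M[i] = (rhs[i] - sup[i] * M[i + 1]) / diag[i]
    coeffs = []
    for i in range(n):
        eta = (M[i + 1] - M[i]) / (6.0 * h[i])
        beta = (y[i + 1] - y[i]) / h[i] - h[i] * (2.0 * M[i] + M[i + 1]) / 6.0
        coeffs.append((y[i], beta, M[i] / 2.0, eta))
    return coeffs

def spline_eval(t, coeffs, x):
    """Evaluate the spline at x (clamped to the last interval beyond t[-1])."""
    i = len(coeffs) - 1
    for j in range(len(coeffs)):
        if x < t[j + 1]:
            i = j
            break
    a, b, g, e = coeffs[i]
    dx = x - t[i]
    return a + b * dx + g * dx ** 2 + e * dx ** 3
```

A natural spline through samples lying on a straight line reproduces the line exactly, which gives a simple sanity check of the solver.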

3. Proposed Optimization Solution in Embedding and Extraction Method

In this section, we solve the optimization problem described in models (4) and (5) in two steps. Since the optimization problems (4) and (5) are similar, we first solve (4) and then apply the optimal solution to (5) using the same method.

3.1. First Step in Finding the Optimal Solution

Applying Theorems A1 and A2 introduced in Appendix A, a Lagrange multiplier $\lambda$ is utilized to combine the objective and constraint (a) of model (4) into a function $F$ without any constraints,

$F(\tilde{X}_n, \lambda) = \dfrac{(\tilde{X}_n - X_n)^T (\tilde{X}_n - X_n)}{X_n^T X_n} + \lambda \left( A\tilde{X}_n - u_1 \right)$, (9)

where $u_1 = \left\lfloor \sum_{i=1}^{n} |x_i| / Q \right\rfloor Q + \frac{3}{4} Q$. Since $X_n^T X_n$ is a positive constant in each segment, it can be absorbed into $\lambda$ without changing the minimizer; the necessary conditions for minimizing $F(\tilde{X}_n, \lambda)$ are then

$\dfrac{\partial F}{\partial \tilde{X}_n} = 2(\tilde{X}_n - X_n) + A^T \lambda = 0$ (10a)

$\dfrac{\partial F}{\partial \lambda} = A\tilde{X}_n - u_1 = 0$ (10b)
Multiplying (10a) by $A$ gives

$2(A\tilde{X}_n - A X_n) + A A^T \lambda = 0$ (11)
Since A X ˜ n = u 1 , Equation (11) can be rewritten as
$u_1 - A X_n + \tfrac{1}{2} A A^T \lambda = 0$ (12)
Hence, the optimal solution of λ is
$\lambda^* = 2 (A A^T)^{-1} \left[ A X_n - u_1 \right]$ (13)
Moreover, by substituting (13) into (10a), the optimal DWT coefficients are
$\tilde{X}_n^* = X_n - \tfrac{1}{2} A^T \lambda^* = X_n - A^T (A A^T)^{-1} \left[ A X_n - u_1 \right]$ (14)
where the superscript * denotes the optimal result with respect to the corresponding variable.
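When the group sum is represented by a constraint row $A$ of all ones (an assumption on our part, so that $A\tilde{X}_n$ is the sum of the $n$ absolute coefficient values), $A A^T = n$ and the closed-form optimum above reduces to shifting every coefficient by the same amount. A minimal sketch:

```python
def optimal_coeffs(abs_x, u1):
    """Closed-form minimizer of ||X_tilde - X||^2 subject to sum(X_tilde) = u1,
    assuming A is a 1 x n row of ones:
    X_tilde* = X - A^T (A A^T)^{-1} (A X - u1), i.e. a uniform shift."""
    n = len(abs_x)
    shift = (sum(abs_x) - u1) / n
    return [x - shift for x in abs_x]
```

The constraint then holds exactly while each coefficient moves as little as possible, which is what keeps the SNR high.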

3.2. Audio Recovery and Information Extraction

To extract the hidden confidential data, we first recover the signal $\{\bar{s}_i\}_{i=0}^{N}$ from the compressed signal $\{\hat{s}_i\}_{i=0}^{N}$ using cubic spline functions $f_i(t) = \alpha_i + \beta_i (t - t_i) + \gamma_i (t - t_i)^2 + \eta_i (t - t_i)^3$, where the collection $\{f_i(t) \mid i = 1, \dots, N\}$ describes the entire set of data and each $f_i(t)$ satisfies the continuity and boundary conditions given in Section 2.2.
Next, we extract the hidden information from the DWT coefficients $\{\bar{c}_i\}_{i=0}^{N}$ of the recovered audio signal $\{\bar{s}_i\}_{i=0}^{N}$ according to the following steps:
Split the test audio into segments and perform the DWT on each segment. If $\tilde{X}_n^* = \{|\tilde{x}_1^*|, |\tilde{x}_2^*|, \dots, |\tilde{x}_n^*|\}$ denotes $n$ consecutive lowest-frequency DWT coefficients, the binary sequence is extracted from $\tilde{X}_n^*$ by the following proposed extraction technique:
  • If

$\sum_{i=1}^{n} |\tilde{x}_i^*| - \left\lfloor \dfrac{\sum_{i=1}^{n} |\tilde{x}_i^*|}{Q} \right\rfloor Q \ge \dfrac{Q}{2}$,

then the extracted bit is 1.
  • If

$\sum_{i=1}^{n} |\tilde{x}_i^*| - \left\lfloor \dfrac{\sum_{i=1}^{n} |\tilde{x}_i^*|}{Q} \right\rfloor Q < \dfrac{Q}{2}$,

then the extracted bit is 0.
Finally, the hidden information is recovered from the binary sequence. In addition, to monitor the accuracy of the extracted private data, the bit error rate (BER) is measured to check whether an attack occurred. The BER is usually expressed as a percentage and is formulated as $\mathrm{BER} = (B_{error} / B_{total}) \times 100\%$, where $B_{error}$ and $B_{total}$ denote the number of erroneous binary bits and the total number of binary bits during a tested period.
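The extraction rule and the BER can be sketched as follows: a group sum placed at the 3Q/4 offset during embedding has remainder at least Q/2 and decodes to 1, while the Q/4 offset decodes to 0 (function names are illustrative):

```python
import math

def extract_bit(abs_coeffs, Q):
    """Return 1 if the group sum's remainder modulo Q is at least Q/2, else 0."""
    s = sum(abs_coeffs)
    remainder = s - math.floor(s / Q) * Q
    return 1 if remainder >= Q / 2.0 else 0

def ber(sent_bits, received_bits):
    """Bit error rate as a percentage over a tested period."""
    errors = sum(1 for a, b in zip(sent_bits, received_bits) if a != b)
    return 100.0 * errors / len(sent_bits)
```

Because the decision threshold sits exactly between the two embedding offsets, moderate distortions of the group sum leave the extracted bit unchanged.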

3.3. Application Scenarios of Our Proposal

The model and techniques proposed in this study combine information hiding and data compression of audio signals. During network transmission, if the amount of data (including the hidden private information) is large and the network speed is slow, the compression ratio can be increased to save transmission time; conversely, if the amount of data is small or the network speed is fast, one can perform information hiding without data compression to improve the accuracy of the data transmission. This is the biggest difference between the proposed method and other methods.

4. Experimental Results

This section presents experimental results from testing the proposed algorithm. Without loss of generality, we investigated various forms of audio signals, such as love songs, symphonies, and dance and folkloric music, and averaged the results over ten songs per audio type. These mono signals were sampled at 44.1 kHz with a bit depth of 16 bits; each clip was 11.6 s long, i.e., 512,000 samples. In the embedding procedure, each audio signal of 512,000 samples was first cut into four segments of equal length, and an 8-level discrete wavelet transform was performed on each segment. Each segment thus contains 128,000/2^8 = 500 lowest-frequency coefficients, i.e., 512,000/2^8 = 2000 in total per audio signal. The values of $Q$ are 13,000 and 26,000 for $n$ = 2 and 4, respectively. For comparison, we also implemented the two methods listed in references [24,27]; their experimental results are shown alongside those of our algorithm.

4.1. Embedding Capacity and Averaged SNR

As listed in Table 1, the embedding capacities for $n$ = 2 and 4 are 1000 and 500 bits, which satisfy the IFPI requirement of at least 20 bps (200 bits/10 s) of embedding capacity; however, if the group size is greater than 16, this requirement is violated. Since we aim to present an optimization model for multiple DWT coefficients in this study, the resulting SNRs of the proposed method are clearly higher than those obtained with the methods in [24,27].

4.2. Robustness Measurement

We used five types of common attacks (re-sampling, low-pass filtering, amplitude scaling, time scaling, and MP3 compression) to evaluate the robustness of the proposed algorithm. The performance is measured by the averaged BER and its standard deviation (SD). A detailed discussion follows.
(1) Re-sampling: In the re-sampling process, the sampling rate of an audio signal is changed in three stages: (i) down-sampling, (ii) interpolation, and (iii) up-sampling. We down-sampled the embedded audio from 44.1 kHz to 22.05 kHz and then up-sampled it back to 44.1 kHz with a linear interpolation filter. A similar approach changed the sampling rate from 44.1 kHz to 11.025 kHz and 8 kHz before restoring the original 44.1 kHz. Table 2 shows the BER of testing re-sampling on audio signals. When the re-sampling rate is 8 kHz, the proposed embedding method has a lower BER than the implementations in [24,27]; at 22.05 kHz and 11.025 kHz, the proposed method shows comparable robustness.
(2) Low-pass filtering: Table 3 presents the BER for low-pass filters with cutoff frequencies of 3 kHz and 5 kHz. The BER results show that the models in [24,27] have slightly higher robustness. Since references [24,27] also adopt quantization-based embedding, the BER of the proposed method is extremely similar to theirs under low-pass filtering.
(3) MPEG Audio Layer-3 (MP3) compression: Table 4 shows the BER for MP3 compression at different bit rates on the embedded audio data. The BER values reflect that the proposed model has robustness similar to that of references [24,27].
(4) Amplitude scaling: Since an amplitude-scaling attack usually results in saturation, we selected four distinct amplitude-scaling factors: 0.5, 0.8, 1.1, and 1.2. The experimental results in Table 5 confirm that the proposed algorithm is much more robust than the methods in references [24,27].
(5) Time scaling: Table 6 lists the BER for time-scaling attacks in the ±2% and ±5% ranges. The BER results show that our method has robustness comparable to that of references [24,27].
Based on the experimental outcomes and aforementioned discussions, the proposed method generally achieves high SNR and is almost zero-error against the amplitude-scaling attacks. However, it shows slightly lower robustness against low-pass filtering attacks and poor robustness against time-scaling attacks.

4.3. Compression Measurement

The purpose of compression is to have maximal compression ratio (CR) under maximal SNR, where CR is defined by
$\mathrm{CR} = \dfrac{\text{data size before compression}}{\text{data size after compression}}$
The threshold ε and the embedding strength Q directly affect the compression ratio (CR) and the SNR, respectively. As shown in Figure 2, the larger the threshold and the embedding strength, the larger the compression ratio CR (i.e., the better the compression effect), but the worse the SNR before and after compression.
Data in Table 7 show that the CR and SNR obtained without embedded information (denoted by N) vary with the threshold value. This is consistent with the fact that SNR worsens as CR increases. The relationship between the two when signals are embedded is also shown in Table 7. From the experimental outcomes, we observed two noteworthy findings. First, for the same threshold value, a higher embedding strength makes the overall audio values vary less; although the SNR appears worse, it indicates a more powerful embedding strength and a better compression effect. Second, when the threshold values change, a threshold value greater than the embedding strength produces a higher CR value. That is to say, the compression effect is enhanced and the relative decompression effect worsens, but the total effectiveness remains almost unchanged.
Moreover, to better understand the relationship between CR and SNR, we used the graph in Figure 3 to find appropriate values of the threshold ε and the embedding strength Q. We varied the threshold ε to obtain the relationship between CR and SNR using different markers, keeping the green and blue markers fixed to include all the Q values obtained in Table 7. In this way, the trade-off between CR and SNR remained optimized.

5. Conclusions

This study proposes a method that integrates the information-hiding process and data compression for five types of commonly seen audio signals. Under the proposed model, simulation results demonstrate that each piece of hidden audio attains high SNR and strong robustness: the SNRs of most embedded audios exceed 35 dB, and some even exceed 40 dB, while most BERs are as low as 5% or less. In addition, we obtained the relationship between CR and SNR with embedded private information and observed two critical outcomes. First, with a fixed threshold value, a high embedding strength makes the differences between the overall audio values smaller; such a setting shows better embedding strength and enhanced compression effects but yields worse SNR values. Second, across distinct threshold values, setting the threshold value ε higher than the embedding strength Q raises the CR value. That means the compression effect becomes better and the relative decompression effect worsens, but the total effectiveness remains almost unchanged.

Author Contributions

Conceptualization: S.-T.C.; methodology: S.-T.C.; software: S.-T.C.; validation: S.-Y.T. and M.Z.; Writing—review and editing: S.-T.C. and S.-Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Theorem A1.
Let $A$ be a matrix of size $n \times n$. If $\bar{X}$ and $X$ are $n \times 1$ column vectors, then the following statements hold [30,31]:

$\dfrac{\partial (A \bar{X})}{\partial \bar{X}} = A$,

$\dfrac{\partial \left[ (\bar{X} - X)^T (\bar{X} - X) \right]}{\partial \bar{X}} = 2 (\bar{X} - X)$.
Theorem A2.
Suppose that $g$ is a continuously differentiable function of $X$ on a subset of the domain of a function $f$. If $X_0$ minimizes (or maximizes) $f(X)$ subject to the constraint $g(X) = 0$, then $\nabla f(X_0)$ and $\nabla g(X_0)$ are parallel, and one is a constant multiple of the other. That is, if $\nabla g(X_0) \neq 0$, then there exists a non-zero scalar $\lambda$ such that

$\nabla f(X_0) = \lambda \nabla g(X_0)$.
Based on Theorem A2, if an augmented function is defined as

$H(X, \lambda) = f(X) + \lambda g(X)$,

then an optimal solution of the optimization problem is obtained by computing the extremum of the unconstrained function $H(X, \lambda)$. The necessary conditions for the existence of an extremum of $H$ are [30,31]

$\dfrac{\partial H}{\partial \lambda} = 0, \quad \dfrac{\partial H}{\partial X} = 0$

References

  1. Chang, C.-L.; Chang, C.-Y.; Tang, Z.-Y.; Chen, S.-T. High-Efficiency Automatic Recharging Mechanism for Cleaning Robot Using Multi-Sensor. Sensors 2018, 18, 11. [Google Scholar] [CrossRef] [PubMed]
  2. Chang, C.-L.; Chen, C.-J.; Lee, H.-T.; Chang, C.-Y.; Chen, S.-T. Bounding the Sensing Data Collection Time with Ring-based Routing for Industrial Wireless Sensor Networks. J. Internet Technol. 2020, 21, 673–680. [Google Scholar]
  3. Zuo, Z.; Liu, L.; Zhang, L.; Fang, Y. Indoor Positioning Based on Bluetooth Low-Energy Beacons Adopting Graph Optimization. Sensors 2018, 18, 3736. [Google Scholar] [CrossRef]
  4. Chang, C.-L.; Chen, S.-T.; Chang, C.-Y.; Jhou, Y.-C. The Application of Machine Learning in Air Hockey Interactive Control System. Sensors 2020, 18, 7233. [Google Scholar] [CrossRef]
  5. Lin, S.-J.; Chen, S.-T. Enhance the perception of easy-to-fall and apply the Internet of Things to fall prediction and protection. J. Healthc. Commun. 2020, 5, 52. [Google Scholar]
  6. Zhang, X.; Zhang, S.; Huai, S. Low-Power Indoor Positioning Algorithm Based on iBeacon Network. Complexity 2021, 2021, 8475339. [Google Scholar] [CrossRef]
  7. Zhou, C.; Yuan, J.; Liu, H.; Qiu, J. Bluetooth indoor positioning based on RSSI and Kalman filter. Wirel. Pers. Commun. 2017, 96, 4115–4130. [Google Scholar] [CrossRef]
  8. Song, W.; Lee, H.M.; Lee, S.H.; Choi, M.H.; Hong, M. Implementation of android application for indoor positioning system with estimote BLE beacons. J. Internet Technol. 2018, 19, 871–878. [Google Scholar]
  9. Cui, L.; Yang, S.; Chen, F.; Ming, Z.; Lu, N.; Qin, J. A survey on application of machine learning for Internet of Things. Int. J. Mach. Learn. Cyber. 2018, 9, 1399–1417. [Google Scholar] [CrossRef]
  10. Baldini, G.; Dimc, F.; Kamnik, R.; Steri, G.; Giuliani, R.; Gentile, C. Identification of mobile phones using the built-in magnetometers stimulated by motion patterns. Sensors 2017, 17, 783. [Google Scholar] [CrossRef]
  11. IFPI (International Federation of the Phonographic Industry). Available online: http://www.ifpi.org (accessed on 10 January 2021).
  12. Katzenbeisser, S.; Petitcolas, F.A.P. (Eds.) Information Hiding Techniques for Steganography and Digital Watermarking; Artech House, Inc.: Norwood, MA, USA, 2000. [Google Scholar]
  13. Al-Haj, A.; Mohammad, A.A.; Bata, L. DWT-based audio watermarking. Int. Arab. J. Inf. Technol. 2011, 8, 326–333. [Google Scholar]
  14. Xiang, S. Robust audio watermarking against the D/A and A/D conversions. EURASIP J. Adv. Signal Processing 2011, 3, 29. [Google Scholar] [CrossRef]
  15. Chen, S.-T.; Wu, G.-D.; Huang, H.-N. Wavelet-Domain Audio Watermarking Scheme Using Optimization-Based Quantization. IET Signal Processing 2010, 4, 720–727. [Google Scholar] [CrossRef]
  16. Noriega, R.M.; Nakano, M.; Kurkoski, B.; Yamaguchi, K. High Payload Audio Watermarking: Toward Channel Characterization of MP3 Compression. J. Inf. Hiding Multimed. Signal Process. 2011, 2, 91–107. [Google Scholar]
  17. Mishra, J.; Patil, M.V.; Chitode, J.S. An Effective Audio Watermarking using DWT-SVD. Int. J. Comput. Appl. 2013, 70, 6–11. [Google Scholar] [CrossRef]
  18. Zhao, M.; Pan, J.-S.; Chen, S.-T. Entropy-Based Audio Watermarking via the Point of View on the Compact Particle Swarm Optimization. J. Internet Technol. 2015, 16, 485–495. [Google Scholar]
  19. Darabkh, A.K. Imperceptible and Robust DWT-SVD-Based Digital Audio Watermarking Algorithm. J. Softw. Eng. Appl. 2014, 7, 859–871. [Google Scholar] [CrossRef]
  20. Chen, S.-T.; Guo, Y.-J.; Huang, H.-N.; Kung, W.-M.; Tseng, K.-K.; Tu, S.-Y. Hiding Patients Confidential Data in the ECG Signal via a Transform-Domain Quantization Scheme. J. Med. Syst. 2014, 38, 54. [Google Scholar] [CrossRef]
  21. Zear, A.; Singh, A.K.; Kumar, P. A proposed secure multiple watermarking technique based on DWT, DCT and SVD for application in medicine. Multimed. Tools Appl. 2016, 77, 4863–4882. [Google Scholar] [CrossRef]
  22. Wu, Q.; Wu, M. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain. Appl. Sci. 2018, 8, 723. [Google Scholar] [CrossRef]
  23. Karajeh, H.; Khatib, T.; Rajab, L.; Maqableh, M. A robust digital audio watermarking scheme based on DWT and Schur decomposition. Multimed. Tools Appl. 2019, 78, 18395–18418. [Google Scholar] [CrossRef]
  24. Chen, S.-T.; Huang, H.-N. Optimization-Based Audio Watermarking with Integrated Quantization Embedding. Multimed. Tools Appl. 2016, 75, 4735–4751. [Google Scholar] [CrossRef]
  25. Shankar, T.; Yamuna, G. Optimization Based Audio Watermarking using Discrete Wavelet Transform and Singular Value Decomposition. Int. J. Electron. Electr. Comput. Syst. 2017, 6, 375–379. [Google Scholar]
  26. Dhar, P.K.; Shimamura, T. Blind Audio Watermarking in Transform Domain Based on Singular Value Decomposition and Exponential-Log Operations. Radio Eng. 2017, 26, 552–561. [Google Scholar] [CrossRef]
  27. Li, J.-F.; Wang, H.-X.; Wu, T.; Sun, X.-M.; Qian, Q. Norm ratio-based audio watermarking scheme in DWT domain. Multimed. Tools Appl. 2018, 77, 14481–14497. [Google Scholar] [CrossRef]
  28. Mallat, S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intel. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  29. Burrus, C.S.; Gopinath, R.A.; Gao, H. Introduction to Wavelet Theory and Its Application; Prentice-Hall: Hoboken, NJ, USA, 1998. [Google Scholar]
  30. Lewis, F.L. Optimal Control; John Wiley and Sons: New York, NY, USA, 1986. [Google Scholar]
  31. Bartle, R.G. The Elements of Real Analysis, 2nd ed.; Wiley: New York, NY, USA, 1976. [Google Scholar]
Figure 1. The block diagram of the proposed algorithm.
Figure 2. Comparison among the original audio, compressed audio, and decompressed audio in 1 and 100 audio samples with a threshold value of 500 and with/without embedding private information. (a) Original audio. (b) Compressed audio with threshold value of 500. (c) Recovering the compressed audio in (b). (d) Compressed audio with threshold value of 500 and embedding private information of embedding strength Q = 1000. (e) Recovering the compressed audio in (d).
Figure 3. Changing the threshold ε to obtain the relationship between CR and SNR using different markers by keeping green and blue fixed to include all the Q values in Table 7.
Table 1. Embedding capacity and averaged SNR (dB) for groups of n consecutive coefficients in DWT level 8.

Method         | n | Capacity (bits/11.6 s) | Dance | Love Song | Folklore | Symphony
Reference [24] | 2 | 1000                   | 35.8  | 33.4      | 27.9     | 26.3
Reference [24] | 4 | 500                    | 37.7  | 33.5      | 28.6     | 26.2
Reference [27] | 2 | 1000                   | 24.3  | 25.4      | 23.2     | 22.3
Reference [27] | 4 | 500                    | 24.1  | 26.0      | 23.6     | 22.9
Proposed       | 2 | 1000                   | 38.3  | 35.6      | 28.7     | 27.5
Proposed       | 4 | 500                    | 37.1  | 41.3      | 34.5     | 33.2
Table 2. BER (%) of testing re-sampling; values are mean (SD) per re-sampling rate.

Method         | n | Audio type | 22.05 kHz    | 11.025 kHz   | 8 kHz
Reference [24] | 2 | Dance      | 8.32 (0.40)  | 13.31 (0.43) | 14.01 (0.41)
Reference [24] | 2 | Folklore   | 0.74 (0.23)  | 4.22 (0.28)  | 4.36 (0.26)
Reference [24] | 2 | Love Song  | 5.74 (0.16)  | 3.08 (0.15)  | 2.39 (0.16)
Reference [24] | 2 | Symphony   | 0.78 (0.16)  | 4.52 (0.26)  | 4.76 (0.28)
Reference [24] | 4 | Dance      | 2.36 (0.25)  | 8.01 (0.38)  | 8.01 (0.36)
Reference [24] | 4 | Folklore   | 0.20 (0.18)  | 1.24 (0.21)  | 1.26 (0.19)
Reference [24] | 4 | Love Song  | 0.72 (0.10)  | 1.05 (0.12)  | 1.06 (0.12)
Reference [24] | 4 | Symphony   | 0.32 (0.13)  | 1.29 (0.21)  | 1.29 (0.21)
Reference [27] | 2 | Dance      | 9.14 (0.41)  | 15.26 (0.43) | 15.31 (0.42)
Reference [27] | 2 | Folklore   | 0.74 (0.19)  | 4.22 (0.24)  | 4.36 (0.27)
Reference [27] | 2 | Love Song  | 5.74 (0.17)  | 3.08 (0.14)  | 2.39 (0.13)
Reference [27] | 2 | Symphony   | 0.78 (0.14)  | 4.52 (0.27)  | 4.76 (0.25)
Reference [27] | 4 | Dance      | 2.17 (0.21)  | 8.03 (0.39)  | 8.04 (0.37)
Reference [27] | 4 | Folklore   | 0.23 (0.15)  | 1.21 (0.20)  | 1.31 (0.19)
Reference [27] | 4 | Love Song  | 0.62 (0.11)  | 1.21 (0.12)  | 1.02 (0.11)
Reference [27] | 4 | Symphony   | 0.35 (0.13)  | 1.27 (0.22)  | 1.28 (0.21)
Proposed       | 2 | Dance      | 8.25 (0.39)  | 14.42 (0.40) | 0.87 (0.16)
Proposed       | 2 | Folklore   | 0.82 (0.18)  | 4.65 (0.22)  | 4.35 (0.21)
Proposed       | 2 | Love Song  | 4.87 (0.15)  | 3.29 (0.16)  | 0.68 (0.09)
Proposed       | 2 | Symphony   | 0.66 (0.14)  | 1.28 (0.21)  | 1.28 (0.21)
Proposed       | 4 | Dance      | 2.10 (0.23)  | 8.16 (0.38)  | 0.26 (0.14)
Proposed       | 4 | Folklore   | 0.23 (0.16)  | 1.42 (0.19)  | 1.25 (0.15)
Proposed       | 4 | Love Song  | 1.34 (0.13)  | 1.45 (0.11)  | 0.57 (0.08)
Proposed       | 4 | Symphony   | 0.53 (0.12)  | 1.24 (0.20)  | 1.23 (0.19)
Table 3. BER (%) of testing low-pass filtering; values are mean (SD) per cutoff frequency.

Method         | n | Audio type | 3 kHz        | 5 kHz
Reference [24] | 2 | Love Song  | 24.18 (0.28) | 25.82 (0.22)
Reference [24] | 2 | Symphony   | 27.58 (0.29) | 8.68 (0.20)
Reference [24] | 2 | Dance      | 33.62 (0.35) | 21.52 (0.29)
Reference [24] | 2 | Folklore   | 33.62 (0.34) | 15.72 (0.21)
Reference [24] | 4 | Love Song  | 23.82 (0.27) | 23.48 (0.19)
Reference [24] | 4 | Symphony   | 27.55 (0.28) | 8.41 (0.20)
Reference [24] | 4 | Dance      | 33.25 (0.36) | 21.28 (0.27)
Reference [24] | 4 | Folklore   | 33.02 (0.34) | 13.84 (0.19)
Reference [27] | 2 | Love Song  | 26.18 (0.29) | 25.82 (0.21)
Reference [27] | 2 | Symphony   | 27.53 (0.28) | 8.68 (0.19)
Reference [27] | 2 | Dance      | 33.62 (0.35) | 21.52 (0.27)
Reference [27] | 2 | Folklore   | 33.62 (0.33) | 15.72 (0.19)
Reference [27] | 4 | Love Song  | 25.82 (0.29) | 24.81 (0.21)
Reference [27] | 4 | Symphony   | 27.54 (0.25) | 8.41 (0.21)
Reference [27] | 4 | Dance      | 33.02 (0.34) | 20.87 (0.28)
Reference [27] | 4 | Folklore   | 33.02 (0.33) | 11.84 (0.17)
Proposed       | 2 | Love Song  | 22.84 (0.27) | 23.63 (0.19)
Proposed       | 2 | Symphony   | 27.85 (0.27) | 8.38 (0.18)
Proposed       | 2 | Dance      | 32.28 (0.35) | 20.03 (0.26)
Proposed       | 2 | Folklore   | 31.82 (0.29) | 13.32 (0.15)
Proposed       | 4 | Love Song  | 21.42 (0.25) | 23.63 (0.18)
Proposed       | 4 | Symphony   | 27.54 (0.24) | 8.25 (0.19)
Proposed       | 4 | Dance      | 30.39 (0.33) | 20.02 (0.25)
Proposed       | 4 | Folklore   | 32.50 (0.30) | 13.15 (0.16)
Table 4. BER (%) of testing MP3 compression (mean and SD; LS = Love Song, Sym = Symphony, Dan = Dance, Folk = Folklore; column headers give the MP3 bit rate in kbps).

| Method | n | Stat. | LS 128 | LS 112 | LS 96 | LS 80 | Sym 128 | Sym 112 | Sym 96 | Sym 80 | Dan 128 | Dan 112 | Dan 96 | Dan 80 | Folk 128 | Folk 112 | Folk 96 | Folk 80 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reference [24] | n = 2 | mean | 0.16 | 1.38 | 2.09 | 2.72 | 0.35 | 1.45 | 2.44 | 3.17 | 0.74 | 2.11 | 2.12 | 3.02 | 0.36 | 1.48 | 2.42 | 3.12 |
| | | SD | 0.11 | 0.11 | 0.13 | 0.15 | 0.12 | 0.13 | 0.14 | 0.17 | 0.15 | 0.18 | 0.18 | 0.21 | 0.12 | 0.15 | 0.14 | 0.16 |
| | n = 4 | mean | 0.09 | 0.11 | 1.41 | 2.53 | 0.14 | 0.15 | 2.29 | 3.84 | 0.11 | 0.15 | 1.02 | 3.00 | 0.15 | 0.15 | 2.40 | 3.93 |
| | | SD | 0.10 | 0.09 | 0.12 | 0.15 | 0.10 | 0.10 | 0.13 | 0.16 | 0.12 | 0.13 | 0.17 | 0.23 | 0.11 | 0.10 | 0.13 | 0.17 |
| Reference [27] | n = 2 | mean | 0.15 | 1.32 | 2.13 | 2.73 | 0.27 | 1.45 | 2.44 | 3.17 | 0.75 | 2.13 | 2.16 | 3.02 | 0.36 | 1.46 | 2.43 | 3.14 |
| | | SD | 0.11 | 0.13 | 0.13 | 0.14 | 0.13 | 0.14 | 0.15 | 0.15 | 0.14 | 0.18 | 0.19 | 0.20 | 0.13 | 0.13 | 0.14 | 0.15 |
| | n = 4 | mean | 0.09 | 0.11 | 1.42 | 2.53 | 0.14 | 0.17 | 2.32 | 3.74 | 0.11 | 0.16 | 1.02 | 3.00 | 0.15 | 0.13 | 2.40 | 3.02 |
| | | SD | 0.10 | 0.10 | 0.11 | 0.15 | 0.12 | 0.09 | 0.13 | 0.15 | 0.13 | 0.15 | 0.15 | 0.20 | 0.12 | 0.08 | 0.13 | 0.15 |
| Proposed method | n = 2 | mean | 0.75 | 2.67 | 2.91 | 3.31 | 0.18 | 0.15 | 2.29 | 3.92 | 0.83 | 2.46 | 2.54 | 2.62 | 0.45 | 2.13 | 2.64 | 3.25 |
| | | SD | 0.13 | 0.14 | 0.14 | 0.15 | 0.14 | 0.08 | 0.12 | 0.15 | 0.16 | 0.18 | 0.19 | 0.19 | 0.14 | 0.12 | 0.12 | 0.16 |
| | n = 4 | mean | 0.69 | 2.23 | 2.24 | 2.28 | 0.17 | 0.12 | 1.93 | 2.09 | 0.15 | 0.13 | 2.48 | 2.49 | 0.39 | 1.94 | 1.95 | 1.94 |
| | | SD | 0.12 | 0.14 | 0.13 | 0.13 | 0.13 | 0.06 | 0.09 | 0.12 | 0.15 | 0.15 | 0.16 | 0.16 | 0.13 | 0.13 | 0.09 | 0.10 |
Table 5. BER (%) of testing amplitude scaling (LS = Love Song, Sym = Symphony, Dan = Dance, Folk = Folklore; column headers give the amplitude modification factor).

| Method | n | LS 0.5 | LS 0.8 | LS 1.1 | LS 1.2 | Sym 0.5 | Sym 0.8 | Sym 1.1 | Sym 1.2 | Dan 0.5 | Dan 0.8 | Dan 1.1 | Dan 1.2 | Folk 0.5 | Folk 0.8 | Folk 1.1 | Folk 1.2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reference [24] | n = 2 | 47.25 | 45.55 | 41.40 | 43.85 | 48.00 | 38.72 | 23.63 | 24.54 | 43.12 | 41.40 | 40.15 | 40.84 | 45.90 | 43.52 | 42.54 | 42.86 |
| | n = 4 | 43.82 | 40.63 | 40.84 | 41.25 | 45.22 | 32.04 | 23.15 | 23.56 | 42.33 | 41.02 | 39.56 | 40.16 | 42.52 | 41.86 | 41.35 | 41.24 |
| Reference [27] | n = 2 | 40.02 | 32.15 | 31.18 | 33.65 | 38.06 | 31.22 | 28.13 | 28.55 | 38.92 | 31.41 | 32.10 | 34.24 | 39.82 | 33.12 | 32.74 | 32.62 |
| | n = 4 | 38.22 | 30.63 | 30.84 | 31.25 | 35.22 | 32.04 | 23.15 | 23.56 | 40.02 | 31.11 | 30.51 | 30.46 | 32.42 | 26.81 | 24.75 | 24.26 |
| Proposed method | n = 2 | 2.03 | 1.15 | 1.08 | 1.13 | 1.65 | 0.97 | 1.43 | 1.45 | 2.85 | 1.76 | 1.85 | 2.06 | 1.67 | 1.31 | 0.93 | 1.32 |
| | n = 4 | 0.97 | 0.86 | 0.84 | 0.92 | 1.14 | 0.88 | 0.92 | 0.98 | 2.04 | 1.56 | 0.98 | 1.93 | 1.05 | 0.86 | 0.83 | 0.85 |
Table 6. BER (%) of testing time scaling (mean and SD; LS = Love Song, Sym = Symphony, Dan = Dance, Folk = Folklore; column headers give the time-scaling factor in %).

| Method | n | Stat. | LS −5 | LS −2 | LS 2 | LS 5 | Sym −5 | Sym −2 | Sym 2 | Sym 5 | Dan −5 | Dan −2 | Dan 2 | Dan 5 | Folk −5 | Folk −2 | Folk 2 | Folk 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reference [24] | n = 2 | mean | 47.11 | 42.91 | 43.67 | 46.32 | 42.74 | 37.82 | 46.42 | 46.19 | 45.18 | 40.21 | 46.58 | 47.98 | 43.18 | 39.12 | 46.35 | 47.43 |
| | | SD | 0.10 | 0.09 | 0.08 | 0.09 | 0.11 | 0.11 | 0.09 | 0.10 | 0.12 | 0.12 | 0.11 | 0.13 | 0.12 | 0.12 | 0.10 | 0.12 |
| | n = 4 | mean | 47.04 | 40.23 | 45.11 | 46.58 | 43.11 | 36.64 | 46.24 | 46.86 | 44.37 | 39.91 | 44.92 | 47.98 | 43.03 | 38.62 | 46.53 | 47.54 |
| | | SD | 0.08 | 0.07 | 0.07 | 0.08 | 0.09 | 0.10 | 0.09 | 0.10 | 0.12 | 0.11 | 0.11 | 0.12 | 0.13 | 0.12 | 0.09 | 0.10 |
| Reference [27] | n = 2 | mean | 48.24 | 45.03 | 41.13 | 42.62 | 42.24 | 40.73 | 43.62 | 45.21 | 46.29 | 42.07 | 44.98 | 45.18 | 44.15 | 40.22 | 45.39 | 47.37 |
| | | SD | 0.09 | 0.08 | 0.07 | 0.08 | 0.09 | 0.10 | 0.08 | 0.09 | 0.13 | 0.13 | 0.12 | 0.14 | 0.13 | 0.12 | 0.11 | 0.11 |
| | n = 4 | mean | 46.12 | 41.25 | 44.01 | 44.52 | 42.01 | 38.34 | 45.27 | 45.89 | 45.27 | 40.91 | 45.02 | 46.13 | 42.53 | 39.24 | 45.63 | 45.58 |
| | | SD | 0.08 | 0.08 | 0.08 | 0.07 | 0.10 | 0.09 | 0.09 | 0.09 | 0.12 | 0.13 | 0.11 | 0.14 | 0.13 | 0.11 | 0.10 | 0.09 |
| Proposed method | n = 2 | mean | 47.23 | 42.05 | 43.53 | 45.15 | 42.32 | 37.64 | 45.18 | 46.21 | 45.35 | 40.42 | 46.24 | 46.47 | 43.18 | 38.93 | 46.41 | 47.13 |
| | | SD | 0.07 | 0.07 | 0.06 | 0.07 | 0.08 | 0.09 | 0.09 | 0.10 | 0.11 | 0.10 | 0.10 | 0.13 | 0.12 | 0.11 | 0.08 | 0.09 |
| | n = 4 | mean | 46.43 | 40.08 | 44.37 | 46.54 | 43.06 | 36.83 | 46.32 | 46.25 | 44.14 | 39.65 | 44.78 | 47.95 | 42.25 | 38.26 | 46.42 | 46.37 |
| | | SD | 0.07 | 0.05 | 0.05 | 0.06 | 0.08 | 0.09 | 0.07 | 0.08 | 0.11 | 0.11 | 0.10 | 0.12 | 0.10 | 0.10 | 0.08 | 0.08 |
Table 7. Relationship between CR and SNR with and without embedding private information (N).

| Threshold ε | Q | CR | SNR before Decompression (dB) |
|---|---|---|---|
| 0.1 | 1 | 1.0016 | 36.2503 |
| 0.1 | 100 | 1.0173 | 38.9726 |
| 0.1 | 500 | 1.0905 | 31.7549 |
| 0.1 | 1000 | 1.1900 | 28.0046 |
| 0.1 | 2048 | 1.4187 | 23.0792 |
| 0.1 | 4096 | 1.8970 | 17.4999 |
| 10 | 1 | 1.0028 | 35.2693 |
| 10 | 100 | 1.0173 | 38.9726 |
| 10 | 500 | 1.0905 | 31.7549 |
| 10 | 1000 | 1.1900 | 28.0046 |
| 10 | 2048 | 1.4187 | 23.0792 |
| 10 | 4096 | 1.8970 | 17.4999 |
| 100 | 1 | 1.0314 | 34.1682 |
| 100 | 100 | 1.0173 | 38.9726 |
| 100 | 500 | 1.0905 | 31.7549 |
| 100 | 1000 | 1.1900 | 28.0046 |
| 100 | 2048 | 1.4187 | 23.0792 |
| 100 | 4096 | 1.8970 | 17.4999 |
| 500 | 1 | 1.1953 | 25.4036 |
| 500 | 100 | 1.1415 | 29.6514 |
| 500 | 500 | 1.0905 | 31.7549 |
| 500 | 1000 | 1.1900 | 28.0046 |
| 500 | 2048 | 1.4187 | 23.0792 |
| 500 | 4096 | 1.8970 | 17.4999 |
| 1000 | 1 | 1.4308 | 22.1784 |
| 1000 | 100 | 1.4115 | 25.8566 |
| 1000 | 500 | 1.3092 | 27.0426 |
| 1000 | 1000 | 1.1900 | 28.0046 |
| 1000 | 2048 | 1.4187 | 23.0792 |
| 1000 | 4096 | 1.8970 | 17.4999 |
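Table 7 pairs each amplitude threshold ε and quantization value Q with a compression ratio (CR). The exact CR bookkeeping of the paper's amplitude-thresholding scheme is not reproduced here; the sketch below uses a simple proxy (total coefficients divided by coefficients surviving the threshold) to illustrate the qualitative trade-off in which raising ε buys compression at the cost of SNR. The Laplacian-distributed coefficients are an illustrative stand-in for real DWT subband data:

```python
import numpy as np

def threshold_compression_ratio(coeffs, eps):
    """Zero out coefficients with |c| < eps and report a simple CR proxy:
    total number of coefficients / number of retained (nonzero) coefficients."""
    c = np.asarray(coeffs, dtype=float)
    kept = np.count_nonzero(np.abs(c) >= eps)
    return c.size / kept

rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=1.0, size=10_000)  # stand-in for DWT coefficients
for eps in (0.1, 0.5, 1.0):
    print(eps, round(threshold_compression_ratio(coeffs, eps), 3))
```

As ε grows, more small coefficients are discarded, so the proxy CR rises monotonically, mirroring the ε = 0.1 → 1000 trend in the Q = 1 rows of Table 7.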
Zhao, M.; Chen, S.-T.; Tu, S.-Y. Wavelet-Domain Information-Hiding Technology with High-Quality Audio Signals on MEMS Sensors. Sensors 2022, 22, 6548. https://doi.org/10.3390/s22176548
