Article

Using an Optimization Algorithm to Detect Hidden Waveforms of Signals

1 Department of Medical Informatics, Chung Shan Medical University, Taichung 40201, Taiwan
2 Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
3 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(2), 588; https://doi.org/10.3390/s21020588
Submission received: 14 December 2020 / Revised: 8 January 2021 / Accepted: 12 January 2021 / Published: 15 January 2021

Abstract

Source signals often contain various hidden waveforms, which can provide precious additional information. Therefore, detecting and capturing these waveforms is very important. For signal decomposition (SD), the discrete Fourier transform (DFT) and empirical mode decomposition (EMD) are two main tools; both can easily decompose any source signal into different components. DFT is based on cosine functions, whereas EMD is based on a collection of intrinsic mode functions (IMFs). With the help of cosine functions and IMFs, respectively, DFT and EMD can extract additional information from sensed signals. However, owing to its considerably limited frequency resolution, EMD easily causes frequency mixing. Although DFT has a higher frequency resolution than EMD, its resolution is also finite. To effectively detect and capture hidden waveforms, we use an optimization algorithm, differential evolution (DE), to perform the decomposition. The technique is called SD by DE (SDDE). In contrast to DFT and EMD, SDDE has an infinite frequency resolution, and hence it has the opportunity to decompose a signal exactly. Our proposed SDDE approach is the first tool to apply an optimization algorithm directly to signal decomposition such that the main components of source signals can be determined. For source signals formed from four combinations of three periodic waves, our experimental results in the absence of noise show that the proposed SDDE approach can exactly or almost exactly determine the corresponding separate components. Even in the presence of white noise, the proposed SDDE approach is still able to determine the main components. In contrast, DFT usually generates spurious main components, while EMD cannot decompose well and is easily affected by white noise. Given this superior experimental performance, our proposed SDDE approach can be widely used in the future to explore various signals for more valuable information.

1. Introduction

Signal decomposition (SD) is a very useful tool for analyzing the components of source signals in order to determine interesting and meaningful hidden signal patterns. When analyzing a signal, we usually approximate the signal or function by “atoms,” which may be sinusoids, wavelets, or Gabor functions [1]. In order to find these qualified “atoms” to represent the function, an effective algorithm must be adopted. In general, we choose some “atoms” from a larger collection of “atoms” called a “dictionary” [1]. “Atoms” are signal components—especially, meaningful signal components—and the “dictionary” is a collection of signal components.
In general, source signals contain some precious information in hidden waveforms, and therefore SD plays a very important role in signal analysis. Unlike traditional analysis tools, such as Fourier transform (FT) and series [2], as well as wavelet transform (WT) [1,3], empirical mode decomposition (EMD) [4,5,6] has attracted a great deal of attention over the past two decades as it is a data-driven decomposition tool without preset basis functions.
Huang et al. [4] first reviewed non-stationary data processing methods such as FT and WT, then discussed their potential problems, and finally proposed EMD and the Hilbert spectrum. In the work by Huang and Shen [6], many related theories and applications were provided. Various fields have also shown the practical value of EMD, including medicine [7,8], hydrology [9], mechanics [10,11], civil engineering [12] and theoretical analysis [13].
EMD can directly, adaptively and quickly decompose a source signal into a collection of intrinsic mode functions (IMFs) [4,6]. Each IMF must satisfy two conditions: (1) over the whole data set, the number of extrema and the number of zero crossings must be equal or differ by at most one; and (2) at any point, the mean of the envelope defined by the local maxima and the envelope defined by the local minima is zero. Each IMF can therefore be viewed as a special basis function; IMFs are derived directly from the data rather than being predetermined, as the basis functions of FT and WT are.
IMFs are derived through the following sifting process. First, the extrema of the signal are identified; second, all the local maxima are connected by a cubic spline to form the upper envelope, and all the local minima are connected by a cubic spline to form the lower envelope; finally, the mean of the two envelopes is calculated, and the difference between the data and this mean becomes the first candidate component. This procedure is repeated until the two IMF conditions are satisfied.
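A single sifting step of this kind can be sketched as follows. This is a minimal illustration assuming NumPy and SciPy's CubicSpline, not the authors' implementation; boundary treatment and the stopping criterion are omitted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting step: subtract the mean of the upper and lower envelopes."""
    # Identify interior local maxima and minima.
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    # Connect the extrema with cubic splines to form the two envelopes.
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    # The difference between the data and the envelope mean is the candidate component.
    return x - (upper + lower) / 2.0
```

In a full EMD implementation, this step is iterated until the two IMF conditions hold, the resulting IMF is subtracted from the signal, and the process repeats on the remainder.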
IMFs are generated in turn, from the highest-frequency component down to the lowest-frequency component and finally the residue. Indeed, these data-derived IMFs sometimes contain meaningful and valuable signal patterns, but some components are often hard to explain. Therefore, auxiliary tools, such as time–frequency or time–frequency–energy distributions, are generally needed for clarification.
Furthermore, EMD-related methods are prone to mode mixing [14,15,16], end effects [14,15] and detrend uncertainty [15]. Therefore, a noise-assisted data analysis tool called ensemble empirical mode decomposition (EEMD) [14] was proposed to reduce mode mixing and end effects. EEMD differs from EMD in one respect: it sifts an ensemble of white-noise-added signals and then treats the ensemble mean as the final true result. The added finite-amplitude white noise forces the ensemble to exhaust all possible solutions in the sifting process.
Further, compact empirical mode decomposition (CMED) [15] was proposed to reduce mode mixing, end effects and detrend uncertainty. CMED consists mainly of two parts: one uses highest-frequency sampling to generate pseudo-extrema for effectively identifying the upper and lower envelopes; the other uses a set of 2N algebraic equations (for N data points) to determine the maximum (minimum) envelope at each decomposition step.
Zhang et al. [16], with the help of an improved genetic algorithm (GA), proposed an improved ensemble empirical mode decomposition method (GAEEMD) to solve mode mixing. In the improved GA, Zhang et al. used a difference selection operator instead of a traditional selection operator (roulette or tournament selection) and selected the amplitude of the added white noise and the number of trials as the parameters of their fitness function, defined as the reciprocal of an orthogonal index concerning the decomposed IMFs. More precisely, GAEEMD applies the improved GA to the IMFs obtained from the sifting process of EEMD, rather than directly to the source signal; in other words, it applies the improved GA to the source signal only indirectly.
For a wide range of applications, EMD has been further extended for use in two-variant (bivariate) [17], three-variant (trivariate) [18] and multiple-variant (multivariate) [19] signals. These methods can extract two-dimensional to multiple-dimensional common oscillatory modes and facilitate the fusion of information from two or multiple sources.
Different from EMD-like methods, Singh et al. [20] also proposed an adaptive decomposition method, which was based on Fourier theory, called the Fourier decomposition method (FDM); it can decompose any data into a small number of Fourier intrinsic band functions (FIBFs) and thus can be viewed as a generalized Fourier expansion with variable amplitudes and frequencies.
Since EMD and FDM act essentially as dyadic filter banks [21,22,23], waves lying within the same dyadic filter band cannot be decomposed into separate waves, even simple periodic ones. To solve this problem, we propose a direct decomposition tool that decomposes source signals into separate components.
Similar to GAEEMD, our proposed method applies a nature-inspired optimization algorithm to search for the optimal parameters; in contrast to GAEEMD, we adopt the standard differential evolution (DE) algorithm, which is real-coded, rather than a traditional GA, which is binary-coded. In addition, we directly select the amplitudes, frequencies and phases of the source signal as the parameters of the fitness function, which is the mean squared error (MSE) between the source signal and its searched signal.
At this pioneering stage, we selected four combinations of three periodic waves as source signals, involving sinusoidal, square and triangular waves: a combination of three sinusoidal waves (continuous and smooth), a combination of three square waves (non-continuous and non-smooth), a combination of three triangular waves (continuous but non-smooth) and a composite of the three above-mentioned waves (non-continuous and non-smooth).
In the absence of noise, experimental results show that our proposed SD by DE (SDDE) approach can perfectly or almost perfectly decompose these four source signals into their corresponding separate waves, where the corresponding amplitudes, frequencies and phases are also obtained exactly or almost exactly. In contrast, EMD, limited by its considerably restricted frequency resolution, tends to view these simple periodic signal combinations as special basis functions and therefore cannot decompose them further into separate components. Similarly, limited by a finite frequency resolution, the discrete Fourier transform (DFT) usually generates spurious main components.
Even in the presence of white noise, our proposed SDDE approach can still determine the main signal components; however, EMD is easily affected by white noise, and it therefore cannot decompose contaminated signals into separate components. Likewise, DFT easily generates more spurious main components.
The rest of the paper is organized as follows. Section 2 outlines some related tools for SD and DE. Section 3 presents the problem formulation of the paper. Section 4 gives experimental results and a detailed discussion. Finally, Section 5 concludes with some observations and possible developments for the future.

2. Related Tools

In this section, the basics of SD and the standard procedure of DE are reviewed; the SD tools covered include the Fourier series and transform, the wavelet transform and EMD.
In the case of SD, a signal x(t) or function f(t) can often be analyzed, described or processed by a linear decomposition [3]:

$$f(t) = \sum_{l} a_l \psi_l(t),\qquad (1)$$

where l is an integer index for a finite or an infinite sum; the a_l are real-valued expansion coefficients; and ψ_l(t) is a set of real-valued functions of t called the expansion set. If the expansion in Equation (1) is unique, the set is called a basis for the class of functions that can be expressed in this way. If the basis is orthogonal, meaning:
$$\langle \psi_k(t), \psi_l(t) \rangle = \int \psi_k(t)\,\psi_l(t)\,dt = 0, \quad k \neq l,\qquad (2)$$
then the coefficients can be calculated by the inner product:
$$a_k = \langle f(t), \psi_k(t) \rangle = \int f(t)\,\psi_k(t)\,dt.\qquad (3)$$
For the Fourier series [2], the orthogonal basis functions ψ_k(t) are sin(kω₀t) and cos(kω₀t), with frequencies kω₀. These are especially suitable for periodic, time-invariant or stationary signals. As the period approaches infinity, the Fourier series tends to the Fourier transform [2].
For the wavelet expansion, a two-parameter system is constructed so that Equation (1) becomes:

$$f(t) = \sum_{k} \sum_{j} a_{j,k}\,\psi_{j,k}(t),\qquad (4)$$

where both j and k are integer indices and the ψ_{j,k}(t) are the wavelet expansion functions, which usually form an orthogonal basis.
The set of expansion coefficients a_{j,k} is called the discrete wavelet transform (DWT) of f(t), and Equation (4) is its inverse transform.
Wavelet transforms are especially suitable for transient, nonstationary or time-varying signals; for applications with nonlinear and non-stationary signals, empirical mode decomposition [4,5,6] is more suitable. This method is data-driven, with no basis functions presumed.
Through the sifting process, a signal x(t) or function f(t) can be expanded or decomposed as follows:

$$f(t) = \sum_{i=1}^{n} c_i + r_n,\qquad (5)$$

where c_i represents the intrinsic mode functions and r_n is the residue.
Overall, c_1 should contain the highest-frequency component of the signal; c_n should contain the lowest-frequency component; and r_n should be either a trend or a constant.
In the case of DE, a d-dimensional optimization problem is considered. In our experiments, we used a population of n solution vectors x_i, i = 1, 2, …, n, to search for the optimal solution. Each solution has components x_{j,i}, j = 1, 2, …, d (the dimension of a solution is d), i = 1, 2, …, n. For generation t, the solution vector x_i is denoted by x_i^t. Initially, the x_i^0 are generated uniformly between the lower bound of the domain, x_min, and the upper bound, x_max.
Typically, a DE algorithm [24,25,26,27,28,29] contains three main steps: mutation, crossover and selection. In the mutation step, three random, mutually distinct indices r_1, r_2 and r_3, ranging from 1 to n, are first selected, and their corresponding solution vectors x_{r_1}, x_{r_2} and x_{r_3} are obtained. A mutant vector [26], or donor vector [27], is then generated:

$$v_i^{t+1} = x_{r_1}^{t} + F\,\left(x_{r_2}^{t} - x_{r_3}^{t}\right),\qquad (6)$$

where F, the differential weight, is a positive real number that normally lies between 0 and 1. These three random indices must also differ from the target index, i.e., r_1 ≠ r_2 ≠ r_3 ≠ i.
In the crossover step, a crossover vector, or trial vector [26], is obtained via the binomial scheme as follows:

$$u_{j,i}^{t+1} = \begin{cases} v_{j,i}^{t+1} & \text{if } r_{j,i} \le C_r \text{ or } j = I_{jr},\\ x_{j,i}^{t} & \text{otherwise}, \end{cases}\qquad (7)$$

where r_{j,i} is a uniform random number in [0, 1], C_r is the crossover rate and I_{jr} is a random index, i.e., an integer ranging from 1 to d, which guarantees that at least one component is inherited from the donor vector.
In the selection step, a greedy scheme is adopted. For our minimization problem, this is described mathematically as follows:

$$x_i^{t+1} = \begin{cases} u_i^{t+1} & \text{if } f(u_i^{t+1}) \le f(x_i^{t}),\\ x_i^{t} & \text{otherwise}. \end{cases}\qquad (8)$$
For the purposes of this study, the fitness or objective function f(x) is the MSE between the source signal and the searched signal, i.e., a combination of searched waves.
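For illustration, the three steps of Equations (6)–(8) can be combined into a minimal DE/rand/1/bin minimizer. This is a sketch assuming NumPy; the function and parameter names are ours, not the authors' implementation.

```python
import numpy as np

def de_minimize(f, lo, hi, n=90, F=0.5, Cr=0.1, iters=1000, seed=0):
    """Minimal DE/rand/1/bin minimizer over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(n, d))  # uniform initial population
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(n):
            # Mutation, Eq. (6): three distinct indices, all different from i.
            r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])  # donor vector
            # Binomial crossover, Eq. (7): at least one gene comes from the donor.
            jr = rng.integers(d)
            mask = rng.random(d) <= Cr
            mask[jr] = True
            u = np.clip(np.where(mask, v, pop[i]), lo, hi)  # clamp to the domain
            # Greedy selection, Eq. (8).
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]
```

The low crossover rate (Cr = 0.1) matches the setting used later in the experiments; it changes only a few coordinates per trial vector, which suits nearly separable objectives.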

3. Problem Formulation

Although EEMD can deal with some of the mode-mixing problems of EMD, it cannot decompose source signals consisting of combinations of single or complex waveforms into separate components. The main reason is that EMD and EEMD act essentially as dyadic filter banks [21,22,23]. Therefore, when these single or complex waveforms lie within the same dyadic filter band, they cannot be decomposed well into separate components; even for source signals consisting of combinations of simple periodic waves, EMD-like data-driven methods cannot decompose them well, especially non-continuous waves.
Therefore, the main purpose of this paper is to propose an effective technique for decomposing this kind of source signal into separate components. A feasible approach is to use nature-inspired optimization algorithms directly rather than indirectly. In this paper, DE is adopted because it is a real-coded algorithm, which is much more efficient than a binary-coded algorithm; binary-coded algorithms need additional computational time to transform binary code into real code and vice versa. An appropriate objective function plays a very important role in optimization algorithms; in this work, we choose the MSE between a source signal and its corresponding searched signal. The source signals and their corresponding objective functions are presented in the following subsections.

3.1. Source Signals

As a pioneering work, this paper considers only four simple yet representative source signals: three combinations of single waves and one composite combination. The three single waves are sinusoidal, square and triangular waves; the remaining signal is a composite of the previous three single waves. Each source signal is composed of three components, and the objective function is defined accordingly for each individual source signal.

3.1.1. Sinusoidal Waves

As the components of the Fourier transform [2], sinusoidal waves are the most representative signals. Thus, the first source signal is a synthesis of three sinusoidal waves:

$$s(t) = \sum_{k=1}^{3} a_k \sin(2\pi f_k t + p_k),\qquad (9)$$

where the a_k values, k = 1, 2, 3, are amplitudes; the f_k values are normalized frequencies (simply called frequencies in the following) lying between 0 and 1; the p_k values are phases lying between 0 and 2π; and t is an integer from 0 to L − 1, where L is the length of the signal series.

3.1.2. Square Waves

The second source signal is a synthesis of three square waves:

$$s(t) = \sum_{k=1}^{3} a_k\,\mathrm{square}(2\pi f_k t + p_k),\qquad (10)$$

where the definitions of the a_k, f_k and p_k values for k = 1, 2, 3, as well as t, are the same as those of Equation (9).

3.1.3. Triangular Waves

The third source signal is a synthesis of three triangular waves:

$$s(t) = \sum_{k=1}^{3} a_k\,\mathrm{triangle}(2\pi f_k t + p_k).\qquad (11)$$

Likewise, the definitions of all symbols are the same as those of Equation (9).

3.1.4. Composite Waves

The fourth source signal is a synthesis of the previous three waves:

$$s(t) = a_1 \sin(2\pi f_1 t + p_1) + a_2\,\mathrm{square}(2\pi f_2 t + p_2) + a_3\,\mathrm{triangle}(2\pi f_3 t + p_3).\qquad (12)$$

The definitions of all symbols are also the same as those of Equation (9).
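As an illustration, the composite source signal of Equation (12) can be synthesized with SciPy's waveform generators. This is a sketch under our own conventions: scipy.signal.square gives a unit square wave, and scipy.signal.sawtooth with width=0.5 stands in for the paper's triangle function, whose phase convention may differ slightly.

```python
import numpy as np
from scipy import signal

def composite_source(t, a, f, p):
    """Composite source signal: sinusoidal + square + triangular wave."""
    s1 = a[0] * np.sin(2 * np.pi * f[0] * t + p[0])
    s2 = a[1] * signal.square(2 * np.pi * f[1] * t + p[1])               # unit square wave
    s3 = a[2] * signal.sawtooth(2 * np.pi * f[2] * t + p[2], width=0.5)  # triangular wave
    return s1 + s2 + s3
```

Since each generator has unit amplitude, the magnitude of the composite signal is bounded by a_1 + a_2 + a_3.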

3.2. Objective Function

The formulas of Equations (9)–(12) indicate that every wave involves three parameters (the amplitude, frequency and phase), and thus each source signal contains nine unknown parameters; that is, its dimension is nine, i.e., d = 9. Therefore, we set x_i, i = 1, 2, …, 9, as the searched parameters and z(t) as the searched signal:

$$z(t) = x_1 \sin(2\pi x_2 t + x_3) + x_4 \sin(2\pi x_5 t + x_6) + x_7 \sin(2\pi x_8 t + x_9).\qquad (13)$$
It is reasonable and effective to define the objective function as the MSE between the source signal and the searched signal:

$$f(x) = \frac{1}{L} \sum_{t=0}^{L-1} \left[ s(t) - z(t) \right]^2,\qquad (14)$$

where s(t) is taken from Equation (9).
For other source signals, their objective functions are defined accordingly.
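For the first source signal, the searched signal of Equation (13) and the MSE objective of Equation (14) can be sketched as follows. This is a minimal illustration assuming NumPy; the function name is ours.

```python
import numpy as np

def mse_objective(x, s, t):
    """MSE between the source signal s(t) and the searched signal z(t)."""
    # Searched signal: three sinusoids with searched amplitudes, frequencies, phases.
    z = (x[0] * np.sin(2 * np.pi * x[1] * t + x[2])
         + x[3] * np.sin(2 * np.pi * x[4] * t + x[5])
         + x[6] * np.sin(2 * np.pi * x[7] * t + x[8]))
    return np.mean((s - z) ** 2)
```

At the true parameter vector the objective vanishes, which is what allows SDDE to recover the exact components in the noise-free case.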

4. Experimental Results and Discussion

In this section, experimental parameters for DE are set, including the differential weight, the crossover rate, the dimension and the population size. Afterwards, our experiments are presented and then the corresponding experimental results are discussed.

4.1. Experimental Settings

Since each source signal involved nine parameters, the DE dimension was nine, i.e., d = 9. The amplitude of the first component was set to 1, its frequency to 0.01 and its phase to 0; the amplitude of the second component was set to 2, its frequency to 0.02 and its phase to 0.125π; the amplitude of the third component was set to 3, its frequency to 0.03 and its phase to 0.25π. For clarity, the first source signal is represented as follows:

$$s(t) = \sin(2\pi \cdot 0.01\,t + 0) + 2 \sin(2\pi \cdot 0.02\,t + 0.125\pi) + 3 \sin(2\pi \cdot 0.03\,t + 0.25\pi).\qquad (15)$$
The population size n of DE was set to 90, i.e., n = 90; each experiment was run 20 times, and the maximum number of iterations (the termination criterion) was set to 15,000 for each run. In addition, the differential weight was set to 0.5 and the crossover rate to 0.1.
The domains of the amplitude, frequency and phase parameters are straightforward to set. A safe choice for the amplitude domain is the difference between the maximum and minimum values of the source signal, which is sufficient to cover all potential amplitudes. With the help of the DFT, a tighter choice is two times the maximum of all frequency-domain amplitudes, which covers all potential amplitudes within a smaller range.
For normalized frequencies, the frequency domain [0, 0.5] and the phase domain [0, 2π] together cover all possible frequencies and phases. Therefore, the domains of the amplitude, frequency and phase were set to [0, 10], [0, 0.5] and [0, 2π], respectively. All initial positions of the population were randomly and uniformly distributed in the search space.
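The two amplitude bounds mentioned above can be illustrated numerically for the first source signal. This sketch assumes NumPy and a record length of 500 samples, under which the frequencies 0.01, 0.02 and 0.03 all complete whole periods, so the DFT amplitudes are exact.

```python
import numpy as np

t = np.arange(500)  # 500 samples: all three frequencies fall exactly on DFT bins
s = (np.sin(2 * np.pi * 0.01 * t)
     + 2 * np.sin(2 * np.pi * 0.02 * t + 0.125 * np.pi)
     + 3 * np.sin(2 * np.pi * 0.03 * t + 0.25 * np.pi))
mag = np.abs(np.fft.rfft(s)) / (len(s) / 2)  # single-sided amplitude spectrum
bound_minmax = s.max() - s.min()             # looser bound: peak-to-peak range
bound_dft = 2 * mag.max()                    # tighter bound: twice the largest DFT amplitude
```

Here the DFT-based bound evaluates to 6 (twice the largest component amplitude, 3), which is noticeably tighter than the peak-to-peak range of the composite signal.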
During the search process, all members of the population were restricted to the search space; when any coordinate of a member left the domain, it was replaced by the corresponding upper or lower limit. This operation was implemented by clamping the position between x_min and x_max, where x_min is the lower limit and x_max the upper limit.

4.2. Experimental Results

In the following subsections, four source signals are considered, consisting of three combinations of single waves and one combination of three waves. These three single waves are sinusoidal, square and triangular waves; the composite wave is composed of the previous three single waves.

4.2.1. Sinusoidal Waves

The first source signal, as in Equation (15), consisted of a combination of three sinusoidal waves, which were common continuous and smooth functions. Figure 1 shows the results through SDDE.
The first graph (in the first panel) of Figure 1a is the first source signal; the second graph (in the second panel) is the first original sinusoidal wave, (a_1, f_1, p_1) = (1, 0.01, 0); the third graph (in the third panel) is the second original sinusoidal wave, (a_2, f_2, p_2) = (2, 0.02, 0.125π); and the fourth graph (in the fourth panel) is the third original sinusoidal wave, (a_3, f_3, p_3) = (3, 0.03, 0.25π).
To illustrate the feasibility and potential of SDDE, we show only the plot with the best performance, i.e., the minimum error among the 20 runs. Even at this preliminary stage, Table 1 shows that two runs achieved the true minimum error; therefore, the searched parameters of Run 7 were used to generate Figure 1b.
Obviously, Figure 1b shows the same results as Figure 1a; that is, SDDE finds the three exact components. It is worth noting that even when the exact components are determined, their order of appearance in the searched signal usually differs from that in the source signal.
Besides the two runs with zero error, Table 1 shows that the MSEs of 12 runs were close to 0, while the remaining six runs performed poorly.
On the surface, it seems easy to decompose this smooth signal into separate components. For comparison, EMD was adopted to perform the decomposition task on the same source signal; the decomposition results are shown in Figure 2. The first graph of Figure 2 is the source signal; the second is the first IMF, denoted IMF1; the third to fifth graphs are IMF2 to IMF4; and the final graph is the residue.
Compared to our proposed SDDE approach, Figure 2 shows that the waveforms of graphs 1 and 2 (the source signal and the highest-frequency component) are almost the same except for their amplitudes and boundaries. From another perspective, EMD automatically views the source signal as a special high-frequency component (IMF1), and its special low-frequency component (IMF2) is similar to a sinusoidal function. These results also reflect the fact that EMD acts essentially as a dyadic filter bank [21,22,23]; some compound waves, even sinusoidal ones, cannot be further decomposed into separate waves when their components lie in the same dyadic filter band.
In this case, the special composite high-frequency component consists of three sinusoidal waves. Because of the inherent characteristics of EMD, it cannot further decompose the special composite high-frequency component into separate components.

4.2.2. Square Waves

The second source signal consisted of a combination of three square waves, which were non-continuous and also non-smooth functions. Their amplitudes, frequencies and phases were the same as those of the sinusoidal waves. The formula of the second source signal is almost the same as Equation (15), except that the sinusoidal function is replaced by its corresponding square function.
Figure 3a shows the second source signal and its separate waves; Figure 3b shows the searched components through SDDE and their combination. Likewise, these graphs were produced based on the first set of parameters with the best performance, i.e., the parameters of Run 1.
Generally, non-continuous functions are difficult to decompose; however, this task is easy for our proposed SDDE. In fact, Table 1 shows that all 20 runs achieved the true minimum error; therefore, the three exact components can be determined through SDDE.
Figure 3b shows almost the same waves as Figure 3a except for the order of appearance, which changes because of the randomness of DE.
For comparison, EMD was used to decompose the second source signal; the decomposition results, consisting of eight panels or graphs, are shown in Figure 4. For visual consideration, Figure 4 is composed of two plots (Figure 4a,b); the four graphs of Figure 4a are the source signal and IMF1 to IMF3, from top to bottom; the four graphs of Figure 4b are IMF4 to IMF6 and the residue from top to bottom.
As expected, EMD cannot decompose the source signal into the three square waves. Since square waves are non-continuous and non-smooth functions, EMD produces an extremely complicated highest-frequency component (IMF1) in response to the non-continuity and non-smoothness. Apart from the potential end effects of EMD, IMF2 and IMF3 can be viewed as two special periodic components; IMF4 is a relatively low-frequency component similar to a sinusoidal function.
Although EMD can decompose some hidden signal patterns in the case of a combination of periodic waves, these signal patterns are generally compound waves within the same dyadic filter bank [21,22,23], and they are therefore difficult to recognize and explain. For better understanding, it is necessary to further decompose them into separate signal patterns. Obviously, our proposed SDDE approach is much better than EMD in this case.

4.2.3. Triangular Waves

The third source signal consisted of a combination of three triangular waves, which were continuous but non-smooth functions. Likewise, their amplitudes, frequencies and phases were the same as those of the sinusoidal waves. The formula of the third source signal is almost the same as Equation (15) except that the sinusoidal function is replaced by its corresponding triangular function.
Figure 5a shows the third source signal and its separate waves; Figure 5b shows the searched components through SDDE and their combination. These graphs were also produced based on the first set of parameters with the best performance, i.e., the parameters of Run 9.
Table 1 shows that Run 9 exhibited the best performance. Except for tolerable computational error, we were almost able to determine three exact components through SDDE, and hence its result was chosen to generate Figure 5b. Not surprisingly, Figure 5b shows almost the same waves as Figure 5a except for the order of appearance.
Compared to the previous two source signals, Table 1 also shows that the third source signal is more difficult to search or decompose into its exact signal components. Even though only one of 20 runs exhibited close to zero error, SDDE shows potential for perfectly decomposing source signals.
Figure 6 shows the decomposition results of the third source signal by EMD. Similar to the first source signal, EMD essentially views the source signal as an intrinsic mode function, and thus it can only decompose the source signal into two main components: one is a scaled version of the source signal (IMF1), and the other is almost a sinusoidal function (IMF2). The results also reflect the fact that EMD acts essentially as a dyadic filter bank [21,22,23]. Therefore, some compound waves, even triangular waves, cannot further be decomposed into separate waves when these components lie in the same dyadic filter bank.

4.2.4. Composite Waves

The fourth source signal consisted of a combination of three different waves, which were also non-continuous and non-smooth functions. The first wave was the sinusoidal wave; the second was the square wave; and the third was the triangular wave. The amplitude, phase and frequency were the same as those of the first source signal. For clarity, the fourth source signal is represented as follows:
$$s(t) = \sin(2\pi \cdot 0.01\,t + 0) + 2\,\mathrm{square}(2\pi \cdot 0.02\,t + 0.125\pi) + 3\,\mathrm{triangle}(2\pi \cdot 0.03\,t + 0.25\pi).\qquad (16)$$
Figure 7a shows the fourth source signal and its separate waves; Figure 7b shows the searched components through SDDE and their combination. Likewise, these graphs were produced based on the first set of parameters with the best performance, i.e., the parameters of Run 3.
Table 1 shows that Run 3 could achieve the true minimum error; thus, SDDE was able to decompose the source signal into three exact components. Obviously, Figure 7b shows the same waves as Figure 7a.
Similar to the second source signal, EMD cannot decompose the source signal into three separate waves. Its decomposition results are shown in Figure 8. For visual consideration, Figure 8 is composed of two plots (Figure 8a,b): the four graphs of Figure 8a are the source signal and IMF1 to IMF3 from top to bottom, and the four graphs of Figure 8b are IMF4 to IMF6 and the residue from top to bottom.
As with the second source signal, the fourth source signal contains a non-continuous square wave and a non-smooth triangular wave. Therefore, EMD also produces an extremely complicated highest-frequency component (IMF1) in response to non-continuity and non-smoothness. Likewise, IMF2 is also viewed as one special periodic component; except for the potential end effects, IMF3 and IMF4 are two relatively low-frequency components similar to a sinusoidal function; the frequency of IMF3 is about twice that of IMF4.
In the case of the fourth source signal, our proposed SDDE approach can directly decompose the source signal into the three different waves, whereas EMD can only provide some hidden characteristics, which need further analysis and explanation.

4.2.5. Signal Decomposition in the Presence of Noise

To show that our proposed SDDE approach can work in practical applications, in the following two experiments we added 10 dB white noise to each source signal; the signal-to-noise ratio (SNR) [30] is defined by:

$$\mathrm{SNR} = 10 \log_{10}\left(P_s / P_n\right),\qquad (17)$$

where

$$P_s = \frac{1}{N} \sum_{t=0}^{N-1} s^2(t)\qquad (18)$$

and the variance of the white noise is:

$$P_n = \sigma_n^2.\qquad (19)$$

After computing the signal power P_s, the standard deviation of the white noise can be computed by the following formula:

$$\sigma_n = \sqrt{P_s / 10^{\mathrm{SNR}/10}}.\qquad (20)$$
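Equations (17)–(20) can be sketched as follows; this is a minimal illustration assuming NumPy and zero-mean Gaussian white noise, with function names of our own choosing.

```python
import numpy as np

def noise_std_for_snr(s, snr_db):
    """Standard deviation of white noise that yields the target SNR in dB."""
    p_s = np.mean(s ** 2)                      # signal power P_s, Eq. (18)
    return np.sqrt(p_s / 10 ** (snr_db / 10))  # sigma_n, Eq. (20)

def add_white_noise(s, snr_db, seed=0):
    """Contaminate s(t) with zero-mean Gaussian white noise at the target SNR."""
    rng = np.random.default_rng(seed)
    return s + rng.normal(0.0, noise_std_for_snr(s, snr_db), size=s.shape)
```

For a unit-amplitude sine (power about 0.5), a 10 dB SNR corresponds to a noise standard deviation of roughly 0.22.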
In the first experiment, we knew the exact type of the hidden signal components but did not know how many components existed. Since the dimension of the true signal components was nine (three components, each with three parameters), we assumed that there were six signal components, making the total dimension 18. Likewise, the population size n was set to 90.
Table 2 lists the four experimental results, where P_r is the power of the real noise. In the presence of noise, zero error is impossible; instead, the error of each run reflects the performance of SDDE: the closer the value is to P_r, the better the performance. These results show that SDDE could still determine the hidden signal components in the presence of noise in all four cases.
To visualize the data in a limited space, we only show the decomposed results of the first source signal, a combination of three sinusoidal waves, in Figure 9. The first six graphs of Figure 9 show six possible sinusoidal waves. The first graph of Figure 9a is equivalent to the second original sinusoidal wave; the second graph is equivalent to the first original sinusoidal wave. The first graph of Figure 9b is equivalent to the third original sinusoidal wave. The magnitudes of the other three possible sinusoidal waves were all lower than 0.15; these waves were in fact artifacts of the white noise. The third graph of Figure 9b is a combination of the six possible sinusoidal components, which is quite similar to the first source signal; the residue is in the fourth graph.
For comparison, Figure 10 shows the EMD decomposition results of the first noised source signal (10 dB white noise); Figure 10a contains the first noised source signal and IMF1 to IMF4 from top to bottom, and Figure 10b contains IMF5 to IMF8 and the residue from top to bottom. Unlike our proposed SDDE approach, EMD is easily affected by white noise; EMD could not extract any separate sinusoidal wave from the contaminated source signal.
In the second experiment, we only knew that the hidden signal components were composed of three types of waves (sinusoidal, square and triangular) but did not know how many components existed. Since the dimension of the true signal components was nine, we supposed that there were nine signal components, making the dimension a total of 27. In addition, the population size, n, was set to 135.
Table 3 lists three experimental results. Since the error of each run was close to the power of the real noise, P_r, the table shows that SDDE could determine the hidden signal components even in the presence of noise in all three cases.
Likewise, due to limited space, we only visualize the decomposed results of the second source signal, a combination of three square waves, in Figure 11. The first nine graphs of Figure 11 show the possible sinusoidal, square and triangular waves in turn. The second graph of Figure 11a is equivalent to the third original square wave; the third graph is zero; and the amplitudes of the other two graphs are lower than 0.4, which are in fact artifacts of the white noise.
The first graph of Figure 11b is equivalent to the first original square wave; the fourth graph shows the second original square wave; and the amplitudes of the other two graphs are lower than 0.15, which are again artifacts of the white noise. The amplitude of the first graph of Figure 11c is zero; the second graph shows a combination of the nine possible components, which is quite similar to the second source signal; the residue is shown in the third graph; and the fourth graph shows the second noised source signal.
For comparison, Figure 12 shows the EMD decomposition results of the second noised source signal; Figure 12a contains the second noised source signal and IMF1 to IMF4 from top to bottom, and Figure 12b contains IMF5 to IMF8 and the residue from top to bottom. Unlike our proposed SDDE approach, EMD is easily affected by white noise; EMD could not extract any separate square wave from the contaminated source signal.

4.2.6. Signal Decomposition through Discrete Fourier Transform

To understand the differences between our proposed SDDE approach and DFT, Figure 13a shows the amplitude spectrum and phase spectrum of the first source signal through DFT with L = 1000, as well as the plot of the corresponding sinusoidal amplitudes; Figure 13b shows the corresponding plots for L = 1024. In the phase spectra, the magnitudes are normalized by π for easy comparison. Since DFT has a frequency resolution of 1/L, L = 1000 makes our three adopted frequencies (0.01, 0.02 and 0.03) match the resolution, but L = 1024 does not. In Figure 13, the points marked in red represent the three main components closest to the original three frequencies; the points marked in blue represent the other main components, whose amplitudes are larger than 0.1 times the maximum of all amplitudes.
For L = 1000, Figure 13a clearly shows that there are only three main frequency components; they are exactly 0.01, 0.02 and 0.03; their corresponding phases are −0.5π, −0.375π and −0.25π; and their corresponding sinusoidal amplitudes are 1, 2 and 3. Since the fundamental basis function of DFT for reconstruction is the Cosine function, their corresponding phases for the Sine function need to be right-shifted by 0.5π; the shifted phases are 0, 0.125π and 0.25π, which are the same as the phases of the first source signal. Figure 14a shows, from top to bottom, the first source signal, the reconstructed signal with three main components and the corresponding signal error; Figure 14b shows the corresponding plots for the reconstructed signal with all main components.
Since our adopted three frequencies match the resolution, DFT can accurately decompose the first source signal into three main components; therefore, there is no difference between the reconstructed signal with three main components and that with all main components.
For L = 1024, Figure 13b shows that there are more than three main frequency components. As our three main components, we chose the three frequencies 0.0098, 0.0195 and 0.0303, which are the closest to the three frequencies of the first source signal. Their corresponding phases are −0.2282π, 0.1193π and −0.5349π for the Cosine function, or 0.2718π, 0.6193π and −0.0349π for the Sine function, which differ considerably from the corresponding original phases (0, 0.125π and 0.25π); their corresponding sinusoidal amplitudes (0.9493, 1.3639 and 2.6559) also differ somewhat from the corresponding original amplitudes (1, 2 and 3).
Figure 15a shows the first source signal, the reconstructed signal with three main components and the corresponding signal error; Figure 15b shows the corresponding plots for the reconstructed signal with all main components.
Due to the finite frequency resolution, DFT cannot accurately determine the three components. Therefore, the error between the first source signal and the reconstructed signal with three main components is quite significant; even when all main components are considered, the error remains significant. In contrast, our proposed SDDE approach can exactly determine the three main components.
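The resolution effect just described can be checked numerically. This sketch is our own code, not the paper's: it counts the DFT bins whose amplitude exceeds 0.1 times the maximum, the same main-component criterion used for Figure 13.

```python
import numpy as np

# Count the "main" DFT bins (amplitude > 0.1 * maximum) of the first source
# signal for a given record length L.  With L = 1000, the three frequencies
# fall exactly on bins k = 10, 20 and 30; with L = 1024 they do not, and
# spectral leakage spreads the energy over many neighbouring bins.
def main_bins(L):
    t = np.arange(L)
    s = (1.0 * np.sin(2 * np.pi * 0.01 * t)
         + 2.0 * np.sin(2 * np.pi * 0.02 * t + 0.125 * np.pi)
         + 3.0 * np.sin(2 * np.pi * 0.03 * t + 0.25 * np.pi))
    amp = 2.0 * np.abs(np.fft.rfft(s)) / L   # one-sided sinusoidal amplitudes
    return np.flatnonzero(amp > 0.1 * amp.max())

print(main_bins(1000))        # exactly three bins: [10 20 30]
print(len(main_bins(1024)))   # more than three bins because of leakage
```

The `L = 1000` spectrum is clean because each sinusoid completes an integer number of cycles in the record; at `L = 1024` none of them does.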
In the presence of 10 dB white noise, Figure 16a shows the amplitude spectrum and phase spectrum of the first noised source signal through DFT with L = 1000 as well as the corresponding sinusoidal amplitudes plot; Figure 16b shows the corresponding plots for L = 1024.
For L = 1000 and in the presence of 10 dB white noise, Figure 16a shows that there are also only three main frequency components; they are exactly 0.01, 0.02 and 0.03; their corresponding phases are −0.5081π, −0.3734π and −0.2536π for the Cosine function, or −0.0081π, 0.1266π and 0.2464π for the Sine function, which are close to the phases of the first source signal; their corresponding sinusoidal amplitudes are 0.9884, 2.0958 and 3.0135, which are close to the amplitudes of the first source signal.
Figure 17a shows from top to bottom the first noised source signal, the reconstructed signal with three main components, the error between the reconstructed signal and the first source signal, and the error between the reconstructed signal and the first noised source signal. Figure 17b shows the corresponding plots for the reconstructed signal with all main components.
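For the L = 1000 case, this robustness to noise is easy to reproduce in a short sketch (our own code with an arbitrary random seed): the bins at k = 10, 20 and 30 still estimate the three amplitudes closely, because the white noise spreads its power thinly over all bins.

```python
import numpy as np

# Add 10 dB white noise to the first source signal and read the sinusoidal
# amplitudes off the matched-resolution DFT bins (k = 10, 20, 30 for L = 1000).
rng = np.random.default_rng(0)
L = 1000
t = np.arange(L)
s = (1.0 * np.sin(2 * np.pi * 0.01 * t)
     + 2.0 * np.sin(2 * np.pi * 0.02 * t + 0.125 * np.pi)
     + 3.0 * np.sin(2 * np.pi * 0.03 * t + 0.25 * np.pi))
sigma_n = np.sqrt(np.mean(s ** 2) / 10 ** (10 / 10))   # sigma for 10 dB SNR
noised = s + rng.normal(0.0, sigma_n, L)

amp = 2.0 * np.abs(np.fft.rfft(noised)) / L
print(amp[[10, 20, 30]])    # close to the true amplitudes 1, 2 and 3
```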
For L = 1024 and in the presence of 10 dB white noise, Figure 16b shows that there are more than three main frequency components. As our three main components, we chose the three frequencies 0.0098, 0.0195 and 0.0303, which are the closest to the three frequencies of the first source signal. Their corresponding phases are −0.2168π, 0.1303π and −0.5331π for the Cosine function, or 0.2832π, 0.6303π and −0.0331π for the Sine function, which differ considerably from the corresponding original phases (0, 0.125π and 0.25π); their corresponding sinusoidal amplitudes (0.9500, 1.3752 and 2.6818) also differ considerably from the corresponding original amplitudes (1, 2 and 3).
Figure 18a shows from top to bottom the first noised source signal, the reconstructed signal with three main components, the error between the reconstructed signal and the first source signal, and the error between the reconstructed signal and the first noised source signal. Figure 18b shows the corresponding plots for the reconstructed signal with all main components.
Likewise, DFT cannot accurately determine the three components due to its finite frequency resolution. Therefore, the error between the first source signal and the reconstructed signal with three main components is quite significant; even when all main components are considered, the error remains significant. In contrast, our proposed SDDE approach can determine the three main components at a tolerable level.

4.3. Discussion

Through EMD, currently the most widely used data-driven decomposition tool, we can determine some regular patterns for signals with hidden periodic functions. The number of all possible signal components, the IMFs, depends on the complexity of the source signal; it is normally a fraction of the length of the source signal. Hence, EMD has a potential frequency-resolution problem.
For smooth and non-smooth continuous signals with hidden periodic functions, EMD generally produces sinusoidal patterns that are deformed at the boundaries because of end effects. It also produces some special periodic signal patterns with little deformation at the boundaries, which can be viewed as its unique basis functions; some of them are combinations of several standard basis functions, such as sinusoids.
For non-continuous signals with hidden periodic functions, EMD also produces some special periodic signal patterns with little deformation at the boundaries; these fluctuate wildly depending on how many non-continuous components the source signal contains. For example, a combination of three square waves varies more wildly than a composite of three different waves because the former contains more non-continuous components than the latter.
Because of the inherent characteristics of EMD, i.e., the fact that EMD acts essentially as a dyadic filter bank [21,22,23], it cannot further decompose some compound waves, even compound sinusoidal waves, into separate waves when these waves lie in the same band of the dyadic filter bank. In contrast, our proposed SDDE approach can perfectly or almost perfectly decompose any compound waves into separate waves when the types of waves and the number of each type are known. Even in the presence of 10 dB white noise, our proposed SDDE approach still determines the main signal components when one or three types of waves are known but the number of each type is unknown.
On the other hand, based on Sine or Cosine basis functions, DFT can also decompose any signal into a combination of these basis functions. The number of all possible signal components shown by its amplitude spectrum equals the length of the source signal. Hence, DFT also has a potential frequency-resolution problem, though a less severe one than that of EMD.
For the first source signal, a combination of sinusoidal waves, if the frequencies of the original signal components match the resolution, then DFT can accurately determine the three main components even in the presence of 10 dB white noise; but if they do not match the resolution, DFT decomposes the signal into more than three main components, causing a large error between the reconstructed signal and the source signal. As for the other three source signals, combinations of other periodic waves, DFT cannot decompose them well even without noise contamination. In contrast, our proposed SDDE approach performs well even for signals with 10 dB white noise.
EMD and DFT can both quickly decompose any signal into possible signal components: through its IMFs, EMD shows possible composites of the single components of a signal; through its amplitude spectrum, DFT shows the possible single components of a signal. Even in the presence of 10 dB white noise and with a finite frequency resolution, EMD always shows possible patterns of a signal and DFT always shows possible components of a signal. The importance of an IMF is proportional to its magnitude; likewise, the importance of the sinusoidal wave at a certain frequency is proportional to the magnitude of the frequency spectrum at that frequency. The components with relatively large magnitudes are called the main components.
Due to a considerably finite frequency resolution, EMD easily causes frequency mixing, i.e., an IMF consisting of different frequency components. Although DFT has a finer frequency resolution than EMD, its resolution is also finite. In contrast, our proposed SDDE approach has an infinite frequency resolution and hence has the opportunity to decompose exactly.
In addition, neither EMD nor DFT does well on signals of non-smooth waves: for EMD, explaining the decomposed components is a big challenge; for DFT, the basis functions are always sinusoidal waves. In contrast, our proposed SDDE approach can adopt various periodic waves as its basis functions.
Contrary to EMD and DFT, our proposed SDDE approach must first decide what types of signal components are searched for, and therefore determining the types of hidden signal components is a big challenge. Apart from domain knowledge and individual experience, the main components found by EMD and DFT can provide much information about the types of signal components and their corresponding numbers.
At the current stage, our proposed SDDE approach focuses on three common periodic signals, but it can easily be applied to complex signals, such as chirp signals [31], or aperiodic signals, such as aperiodic rectangular pulses, radar signals [32], or amplitude- and frequency-modulated (AM and FM) component signals [6], as long as their signal features can be described by formulas. When source signals are complicated, and even contaminated by noise, it is usually necessary to adopt an advanced optimization algorithm combined with domain knowledge and the information provided by DFT and EMD.
DFT provides all possible single components and therefore performs well on signals with sinusoidal waves; EMD provides possible simple and complex combinations of all single components and therefore performs well on signals with AM and FM waves. Just as EMD provides an auxiliary tool to help DFT obtain and explain some complex signal waveforms, our proposed SDDE approach provides an auxiliary tool to help DFT obtain more accurate information about signals composed of sinusoidal components, and it can also decompose signals composed of various other periodic waves.

5. Conclusions

In this paper, a novel signal decomposition technique, called signal decomposition by differential evolution (SDDE), was proposed to decompose various periodic signals. At this pioneering stage, we only considered four common periodic signals: a combination of three sinusoidal waves, a combination of three square waves, a combination of three triangular waves and a combination of the three different above-mentioned waves. Obviously, the first source signal was continuous; the second was non-continuous and non-smooth; the third was continuous but non-smooth; and the final one was non-continuous and non-smooth. First, their types and their corresponding numbers were known to us. Over 20 runs, our proposed SDDE approach could perfectly or almost perfectly decompose these four periodic source signals into separate waves. In contrast, empirical mode decomposition (EMD), currently the most widely used data-driven decomposition tool, could not decompose these four periodic signals into separate waves. By nature, the outcomes of EMD, the intrinsic mode functions, are combinations of amplitude-modulated and frequency-modulated modes; thus, EMD tends to view some periodic waves, single or composite, as its basis functions, and it also produces special non-periodic waves, generally transient and non-stationary, which are sometimes not easy to explain. In the second condition, one or three types of waves were known to us, but their corresponding numbers were unknown. Our proposed SDDE approach was still able to determine the main components even in the presence of 10 dB white noise.
Limited by a finite frequency resolution and sinusoidal basis functions, discrete Fourier transform (DFT) can decompose well only signals composed of sinusoidal waves whose frequencies all match the frequency resolution. In contrast, our proposed SDDE approach can decompose combinations of various periodic waves well. DFT and EMD are well suited to roughly localizing the main components and thereby provide much information about the source signal. With the help of DFT and EMD, our proposed SDDE approach can perform well in an efficient and effective way and detail how many main components indeed exist.
In the future, much work can be done, including using source signals with more components, collecting various signal components, even complex ones, into a collection of possible known types, and applying SDDE to other interesting source signals. In addition, we can adopt other advanced DE methods or other nature-inspired optimization algorithms, such as real-coded genetic algorithms or particle swarm optimization, to perform signal decomposition under newly developed objective functions. It can be expected that these objective functions may also be used as another set of benchmark functions, simple in theory but difficult in practice, to compare the performance of various optimization algorithms in the near future.

Author Contributions

Conceptualization, methodology, programing, writing—original draft preparation, Y.-C.C.; writing—review and editing, C.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

None.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Strang, G.; Nguyen, T. Wavelets and Filter Banks; Wellesley-Cambridge Press: Massachusetts, MA, USA, 1996; pp. 85–86.
2. Kamen, E.W.; Heck, B.S. Fundamentals of Signals and Systems Using the Web and MATLAB, 2nd ed.; Prentice Hall: New Jersey, NJ, USA, 2000.
3. Burrus, C.S.; Gopinath, R.A.; Guo, H. Introduction to Wavelets and Wavelet Transform—A Primer; Prentice Hall: New Jersey, NJ, USA, 1998.
4. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.-C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. A 1998, 454, 903–995.
5. Rilling, G.; Flandrin, P.; Gonçalvès, P.G. On empirical mode decomposition and its algorithm. In Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing, Grado, Italy, 8–11 June 2003.
6. Huang, N.E.; Shen, S.S.P. Hilbert-Huang Transform and Its Applications; World Scientific: New Jersey, NJ, USA, 2005.
7. Kumar, S.; Panigrahy, D.; Sahu, P.K. Denoising of Electrocardiogram (ECG) signal by using empirical mode decomposition (EMD) with non-local mean (NLM) technique. Biocybern. Biomed. Eng. 2018, 38, 297–312.
8. Ji, N.; Ma, L.; Dong, H.; Zhang, X. EEG signals feature extraction based on DWT and EMD combined with approximate entropy. Brain Sci. 2019, 9, 201.
9. Chu, T.-Y.; Huang, W.-C. Application of empirical mode decomposition method to synthesize flow data: A case study of Hushan Reservoir in Taiwan. Water 2020, 12, 927.
10. Lv, Y.; Yuan, R.; Song, G. Multivariate empirical mode decomposition and its application to fault diagnosis of rolling bearing. Mech. Syst. Signal Process. 2016, 81, 219–234.
11. Wang, L.; Liu, Z.; Miao, Q.; Zhang, X. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings. Mech. Syst. Signal Process. 2018, 106, 24–39.
12. Xiao, F.; Chen, G.S.; Zatar, W.; Hulsey, J.L. Signature extraction from the dynamic responses of a bridge subjected to a moving vehicle using complete ensemble empirical mode decomposition. J. Low Freq. Noise Vibn. Active Control 2019, 1–17.
13. Ge, H.; Chen, G.; Yu, H.; Chen, H.; An, F. Theoretical analysis of empirical mode decomposition. Symmetry 2018, 10, 623.
14. Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal. 2009, 1, 1–41.
15. Chu, P.C.; Fan, C.; Huang, N.E. Compact empirical mode decomposition: An algorithm to reduce mode mixing, end effect, and detrend uncertainty. Adv. Adapt. Data Anal. 2012, 4, 1250017.
16. Zhang, D.; Cai, C.; Chen, S.; Ling, L. An improved genetic algorithm for optimizing ensemble empirical mode decomposition method. Syst. Sci. Control. Eng. 2019, 7, 53–63.
17. Rilling, G.; Flandrin, P.; Gonçalves, P.; Lilly, J.M. Bivariate empirical mode decomposition. IEEE Signal Process. Lett. 2007, 14, 936–939.
18. Rehman, N.; Mandic, D.P. Empirical mode decomposition for trivariate signals. IEEE Trans. Signal Process. 2010, 58, 1059–1068.
19. Rehman, N.; Mandic, D.P. Multivariate empirical mode decomposition. Proc. R. Soc. A 2010, 466, 1291–1302.
20. Singh, P.; Joshi, S.D.; Patney, R.K.; Saha, K. The Fourier decomposition method for nonlinear and non-stationary time series analysis. Proc. R. Soc. A 2017, 473, 20160871.
21. Flandrin, P.; Rilling, G.; Gonçalvès, P.G. Empirical mode decomposition as a filter bank. IEEE Signal Process. Lett. 2004, 11, 112–114.
22. Flandrin, P.; Gonçalvès, P.G. Empirical mode decompositions as data-driven wavelet-like expansions. Int. J. Wavelets Multiresolut. Inf. Process. 2004, 2, 477–496.
23. Wu, Z.; Huang, N.E. A study of the characteristics of white noise using the empirical mode decomposition method. Proc. R. Soc. Lond. A 2004, 460, 1597–1611.
24. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces; Technical Report TR-95-012; International Computer Science Institute: Berkeley, CA, USA, 1995.
25. Price, K. Differential evolution: A fast and simple numerical optimizer. In Proceedings of the Biennial Conference of the North American Fuzzy Information Processing Society, Berkeley, CA, USA, 19–22 June 1996; pp. 524–527.
26. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic strategy for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
27. Yang, X.-S. Nature-Inspired Optimization Algorithms; Elsevier: New York, NY, USA, 2014.
28. Chang, Y.-C. Differential evolution with control parameters selected from the previous performance. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Miyazaki, Japan, 7–10 October 2018; pp. 2285–2288.
29. Chang, Y.-C. Parameter selection of differential evolution by another differential evolution algorithm. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Bari, Italy, 6–9 October 2019; pp. 2510–2515.
30. Lundahl, T.; Ohley, W.J.; Kay, S.M.; Siffert, R. Fractional Brownian motion: A maximum likelihood estimator and its application to image texture. IEEE Trans. Med. Imaging 1986, 5, 152–161.
31. Kaiser, G. A Friendly Guide to Wavelets; Birkhäuser: Boston, MA, USA, 1994.
32. Aubry, A.; Bazzoni, A.; Carotenuto, V.; De Maio, A.; Failla, P. Cumulants-based radar specific emitter identification. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Iguacu Falls, Brazil, 29 November–2 December 2011.
Figure 1. The first source signal: (a) the source signal and its three individual components; (b) the searched components and their combination (top).
Figure 2. Empirical mode decomposition of the first source signal: the source signal, IMF1–IMF4 and residue from top to bottom.
Figure 3. The second source signal: (a) the source signal and its three individual components; (b) the searched components and their combination (top).
Figure 4. Empirical mode decomposition of the second source signal: (a) the source signal, IMF1–IMF3 from top to bottom; (b) IMF4–IMF6 and residue from top to bottom.
Figure 5. The third source signal: (a) the source signal and its three individual components; (b) the searched components and their combination (top).
Figure 6. Empirical mode decomposition of the third source signal: the source signal, IMF1–IMF4 and residue from top to bottom.
Figure 7. The fourth source signal: (a) the source signal and its three individual components; (b) the searched components and their combination (top).
Figure 8. Empirical mode decomposition of the fourth source signal: (a) the source signal, IMF1–IMF3 from top to bottom; (b) IMF4–IMF6 and residue from top to bottom.
Figure 9. The searched components of the first noised source signal (10 dB white noise): (a) four possible sinusoidal signal components; (b) two possible sinusoidal signal components, a combination of six possible sinusoidal components and the residue from top to bottom.
Figure 10. Empirical mode decomposition of the first noised source signal (10 dB white noise): (a) the source signal with noise, IMF1–IMF4 from top to bottom; (b) IMF5–IMF8 and residue from top to bottom.
Figure 11. The searched components of the second noised source signal (10 dB white noise): (a) possible sinusoidal, square, triangular and sinusoidal signal components from top to bottom; (b) possible square, triangular, sinusoidal and square signal components from top to bottom; (c) a possible triangular signal component, a combination of nine possible components, a residue and the second noised source signal from top to bottom.
Figure 12. Empirical mode decomposition of the second source signal with 10 dB white noise: (a) the source signal with noise, IMF1–IMF4 from top to bottom; (b) IMF5–IMF8 and residue from top to bottom.
Figure 13. Amplitude spectrum and phase spectrum of the first source signal through DFT as well as the corresponding sinusoidal amplitude plot (from top to bottom): (a) L = 1000; (b) L = 1024.
Figure 14. The source signal, the reconstructed signal and the corresponding signal error (from top to bottom) for L = 1000: (a) three main components; (b) all main components.
Figure 15. The source signal, the reconstructed signal and the corresponding signal error (from top to bottom) for L = 1024: (a) three main components; (b) all main components.
Figure 16. Amplitude spectrum and phase spectrum of the first source signal contaminated by 10 dB white noise through DFT as well as the corresponding sinusoidal amplitude plot (from top to bottom): (a) L = 1000; (b) L = 1024.
Figure 17. The first noised source signal (10 dB white noise), the reconstructed signal, the error between the reconstructed signal and the first source signal, and the error between the reconstructed signal and the first noised source signal (from top to bottom) for L = 1000: (a) three main components; (b) all main components.
Figure 18. The first noised source signal (10 dB white noise), the reconstructed signal, the error between the reconstructed signal and the first source signal, and the error between the reconstructed signal and the first noised source signal (from top to bottom) for L = 1024: (a) three main components; (b) all main components.
Table 1. Performance of four source signals.
| Runs | Sinusoidal | Square | Triangular | Composite |
|------|------------|--------|------------|-----------|
| 1 | 7.58E−02 | 0.00E+00 | 2.75E−01 | 6.80E−01 |
| 2 | 3.51E−32 | 0.00E+00 | 1.65E−01 | 9.10E−01 |
| 3 | 3.51E−32 | 0.00E+00 | 2.02E−03 | 0.00E+00 |
| 4 | 3.51E−32 | 0.00E+00 | 8.58E−02 | 1.69E−30 |
| 5 | 3.51E−32 | 0.00E+00 | 3.07E−02 | 5.09E−01 |
| 6 | 5.10E−22 | 0.00E+00 | 1.37E−03 | 1.24E+00 |
| 7 | 0.00E+00 | 0.00E+00 | 1.09E−01 | 8.09E−01 |
| 8 | 0.00E+00 | 0.00E+00 | 9.62E−01 | 1.12E−01 |
| 9 | 4.99E−01 | 0.00E+00 | 4.31E−31 | 4.84E−01 |
| 10 | 3.51E−32 | 0.00E+00 | 1.31E−03 | 1.40E+00 |
| 11 | 3.51E−32 | 0.00E+00 | 4.16E−01 | 5.09E−01 |
| 12 | 1.67E−01 | 0.00E+00 | 3.59E−01 | 6.31E−04 |
| 13 | 4.99E−01 | 0.00E+00 | 5.26E−03 | 7.97E−03 |
| 14 | 5.55E−32 | 0.00E+00 | 4.16E−01 | 1.95E−15 |
| 15 | 5.55E−32 | 0.00E+00 | 6.18E−01 | 1.13E+00 |
| 16 | 3.51E−32 | 0.00E+00 | 4.31E−01 | 6.80E−01 |
| 17 | 1.65E−30 | 0.00E+00 | 4.16E−01 | 7.78E−01 |
| 18 | 3.33E−02 | 0.00E+00 | 8.22E−02 | 1.24E+00 |
| 19 | 5.29E−03 | 0.00E+00 | 4.16E−01 | 1.49E+00 |
| 20 | 5.55E−32 | 0.00E+00 | 2.75E−01 | 8.34E−24 |
| Mean | 6.39E−02 | 0.00E+00 | 2.53E−01 | 5.99E−01 |
| Std. | 1.54E−01 | 0.00E+00 | 2.55E−01 | 5.19E−01 |
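The Mean and Std. rows of Table 1 can be spot-checked directly from the run values. A small sketch using the rounded sinusoidal-column entries as printed (so the check is only approximate) and the sample standard deviation (ddof = 1), which reproduces the reported 6.39E−02 and 1.54E−01 to within the rounding of the table:

```python
import numpy as np

# Sinusoidal-column errors of the 20 runs, as printed in Table 1
sinusoidal = [7.58e-02, 3.51e-32, 3.51e-32, 3.51e-32, 3.51e-32,
              5.10e-22, 0.00e+00, 0.00e+00, 4.99e-01, 3.51e-32,
              3.51e-32, 1.67e-01, 4.99e-01, 5.55e-32, 5.55e-32,
              3.51e-32, 1.65e-30, 3.33e-02, 5.29e-03, 5.55e-32]

mean = np.mean(sinusoidal)
std = np.std(sinusoidal, ddof=1)     # sample standard deviation over the 20 runs
```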
Table 2. Performance of four noised source signals (10 dB white noise) with known type and unknown number.
| Runs | Sinusoidal | Square | Triangular | Composite |
|------|------------|--------|------------|-----------|
| 1 | 6.41E−01 | 1.56E+00 | 1.01E+00 | 7.54E−01 |
| 2 | 6.44E−01 | 1.55E+00 | 9.98E−01 | 7.65E−01 |
| 3 | 6.33E−01 | 1.52E+00 | 9.98E−01 | 7.81E−01 |
| 4 | 6.32E−01 | 1.54E+00 | 9.94E−01 | 7.56E−01 |
| 5 | 6.41E−01 | 1.55E+00 | 1.00E+00 | 8.77E−01 |
| 6 | 6.29E−01 | 1.57E+00 | 9.98E−01 | 7.70E−01 |
| 7 | 6.44E−01 | 1.56E+00 | 9.97E−01 | 8.67E−01 |
| 8 | 6.37E−01 | 1.53E+00 | 1.01E+00 | 7.44E−01 |
| 9 | 6.40E−01 | 1.54E+00 | 1.00E+00 | 9.34E−01 |
| 10 | 6.30E−01 | 1.53E+00 | 1.01E+00 | 9.30E−01 |
| 11 | 6.43E−01 | 1.52E+00 | 1.01E+00 | 7.67E−01 |
| 12 | 6.38E−01 | 1.53E+00 | 1.00E+00 | 7.62E−01 |
| 13 | 6.35E−01 | 1.55E+00 | 1.01E+00 | 7.54E−01 |
| 14 | 6.30E−01 | 1.54E+00 | 1.01E+00 | 8.23E−01 |
| 15 | 6.37E−01 | 1.55E+00 | 1.00E+00 | 8.74E−01 |
| 16 | 6.45E−01 | 1.56E+00 | 1.01E+00 | 7.72E−01 |
| 17 | 6.45E−01 | 1.55E+00 | 9.98E−01 | 7.67E−01 |
| 18 | 6.41E−01 | 1.54E+00 | 9.98E−01 | 9.11E−01 |
| 19 | 6.33E−01 | 1.54E+00 | 1.00E+00 | 7.50E−01 |
| 20 | 6.44E−01 | 1.55E+00 | 1.00E+00 | 8.46E−01 |
| Mean | 6.38E−01 | 1.54E+00 | 1.00E+00 | 8.10E−01 |
| Std. | 5.47E−03 | 1.24E−02 | 4.97E−03 | 6.58E−02 |
| Ps | 7.09E+00 | 1.51E+01 | 1.02E+01 | 7.78E+00 |
| Pn | 7.09E−01 | 1.51E+00 | 1.02E+00 | 7.78E−01 |
| Pr | 6.58E−01 | 1.61E+00 | 1.03E+00 | 7.70E−01 |
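The Ps and Pn rows of Table 2 are consistent with the stated 10 dB signal-to-noise ratio, i.e. Pn = Ps / 10^(SNR/10) = Ps / 10; Pr is presumably the measured power of the particular noise realization (that reading of Pr is an inference from the table values, not stated in this excerpt). A minimal sketch of the dB relation:

```python
def noise_power(signal_power, snr_db):
    """Noise power implied by a given signal power and an SNR in decibels."""
    return signal_power / 10 ** (snr_db / 10)

# Ps values of the four source signals in Table 2
pn = [noise_power(ps, 10) for ps in (7.09, 15.1, 10.2, 7.78)]
```

The result reproduces the Pn row (7.09E−01, 1.51E+00, 1.02E+00, 7.78E−01), since 10 dB corresponds to a power factor of exactly 10.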
Table 3. Performance of three noised source signals (10 dB white noise) with three known types and unknown number.
| Runs | Sinusoidal | Square | Triangular |
|------|------------|--------|------------|
| 1 | 7.18E−01 | 1.32E+00 | 9.74E−01 |
| 2 | 7.18E−01 | 1.62E+00 | 1.00E+00 |
| 3 | 7.28E−01 | 1.47E+00 | 1.00E+00 |
| 4 | 7.53E−01 | 1.32E+00 | 1.00E+00 |
| 5 | 7.33E−01 | 1.43E+00 | 1.02E+00 |
| 6 | 7.28E−01 | 1.34E+00 | 1.02E+00 |
| 7 | 7.38E−01 | 1.32E+00 | 9.81E−01 |
| 8 | 7.20E−01 | 1.33E+00 | 9.97E−01 |
| 9 | 7.35E−01 | 1.39E+00 | 9.80E−01 |
| 10 | 7.36E−01 | 1.38E+00 | 9.80E−01 |
| 11 | 7.32E−01 | 1.39E+00 | 9.77E−01 |
| 12 | 7.31E−01 | 1.55E+00 | 9.81E−01 |
| 13 | 7.33E−01 | 1.30E+00 | 1.01E+00 |
| 14 | 6.96E−01 | 1.38E+00 | 9.63E−01 |
| 15 | 7.44E−01 | 1.61E+00 | 9.85E−01 |
| 16 | 7.71E−01 | 1.31E+00 | 1.01E+00 |
| 17 | 7.25E−01 | 1.47E+00 | 9.92E−01 |
| 18 | 7.39E−01 | 1.50E+00 | 9.93E−01 |
| 19 | 7.37E−01 | 1.50E+00 | 9.88E−01 |
| 20 | 7.28E−01 | 1.75E+00 | 1.00E+00 |
| Mean | 7.32E−01 | 1.43E+00 | 9.93E−01 |
| Std. | 1.48E−02 | 1.23E−01 | 1.58E−02 |
| Ps | 7.09E+00 | 1.51E+01 | 1.02E+01 |
| Pn | 7.09E−01 | 1.51E+00 | 1.02E+00 |
| Pr | 7.55E−01 | 1.33E+00 | 1.03E+00 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
