With the purpose of performing a simple, real-time time-lag analysis over a pair of VBR signals, the signals are segmented into sequential windows of size N. For every window pair w, a zero-mean normalization is applied to both signals and a Partial Cross-Correntropy (PCC) estimator is computed with Equation (4) by performing a sweep over the input value m. It is presumed that the window size is sufficiently larger than the expected drift value; therefore, all windowed pairs should reveal the same global maximum as the correct time reference candidate. However, because of the presence of uncorrelated segments in VBR signals, not all of the PCCs reveal the global maximum as the correct time reference; some PCCs reveal the correct time reference only as a local maximum. An example of a PCC that yields an inlier and an outlier time reference candidate is shown in Figure 2.
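Since Equation (4) is not reproduced here, the sketch below assumes the common Gaussian-kernel form of the cross-correntropy time-lag estimator applied to zero-mean windows; the function names (`pcc`, `pcc_sweep`) and the kernel bandwidth `sigma` are illustrative, not from the original text.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel commonly used by correntropy estimators."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def pcc(x, y, m, sigma=1.0):
    """Cross-correntropy estimate between two windows x, y at lag m.

    Only the N - |m| overlapping samples contribute, which is why the
    estimate degrades as |m| approaches the window size N.
    """
    n = len(x)
    if m >= 0:
        overlap = gaussian_kernel(x[m:] - y[:n - m], sigma)
    else:
        overlap = gaussian_kernel(x[:n + m] - y[-m:], sigma)
    return overlap.mean()

def pcc_sweep(x, y, max_lag, sigma=1.0):
    """Sweep the lag m over [-max_lag, max_lag] for one window pair."""
    x = x - x.mean()          # zero-mean normalization of both windows
    y = y - y.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([pcc(x, y, int(m), sigma) for m in lags])
```

For a correlated window pair whose true delay is d samples, the sweep peaks at m = d; uncorrelated pairs produce spurious peaks instead.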
Additionally, uncorrelated window pairs can create untrustworthy peak values when m is near N. This happens mainly because of the shape of the centralized cross-correntropy estimator: since Equation (4) is applied to a finite sample set of size N, the number of analyzed samples, N - |m|, drops as |m| increases. This results in poor estimation quality as m gets closer to N, and the estimation results may become a monotonically increasing function near the boundary. This is also one of the reasons to limit the sweep of the m value to the range from -2N/3 to 2N/3. Therefore, candidates extracted near the boundary, within a predefined range f, are removed from the possible time reference set.
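The candidate extraction and boundary filtering described above can be sketched as follows; the helper name `extract_candidates` and the exact peak-picking rule are assumptions, not part of the original text.

```python
import numpy as np

def extract_candidates(lags, pcc_values, n_candidates, window_size, f):
    """Extract candidate offsets as descending local maxima of a PCC,
    discarding untrustworthy candidates within f samples of the lag
    boundary, where few samples overlap."""
    # local maxima: strictly greater than both neighbors
    interior = np.arange(1, len(pcc_values) - 1)
    is_peak = (pcc_values[interior] > pcc_values[interior - 1]) & \
              (pcc_values[interior] > pcc_values[interior + 1])
    peaks = interior[is_peak]
    # sort peaks by PCC value, highest first, and keep the top N.C.
    peaks = peaks[np.argsort(pcc_values[peaks])[::-1]][:n_candidates]
    candidates = lags[peaks]
    # drop candidates too close to the boundary
    return candidates[np.abs(candidates) < window_size - f]
```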
Figure 3 demonstrates the results after an initial filtering of the PCC candidates. In this example, the correct delay between the VBR signal pair is five samples.
Although in the example shown in Figure 3 the majority of candidates belong to the correct offset value, in many cases the number of outliers and their distribution mask the correct time offset among the candidates. Thus, an efficient algorithm is necessary to deal with the presence of outliers and to perform a correct time offset classification.
3.1. Classification Algorithm
The algorithm proposed to perform a time offset classification is based on sample consensus, and the list of procedures is shown in Algorithm 1.
Algorithm 1 Classification Algorithm

1. Strip each signal of the VBR pair into overlapped segments of size N with the chosen overlap ratio.
2. Compute a PCC for each segmented window pair.
3. Extract a Number of Candidates (N.C.) candidate offsets (descending local maxima) from each PCC.
4. For each candidate offset, evaluate the number of inliers. An inlier is any other candidate offset whose absolute distance lies within a predefined range.
5. Choose the candidate offset with the highest number of inliers as the correct time offset between the signal pair.
6. The number of inliers divided by the total number of candidates gives the confidence level.
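The consensus steps of Algorithm 1 can be sketched as a single function operating on the accumulated candidate offsets; the name `classify_offset` and the first-wins tie-breaking behavior are illustrative.

```python
import numpy as np

def classify_offset(candidates, inlier_range):
    """Sample-consensus classification of the time offset.

    candidates: candidate offsets accumulated from all PCC windows.
    inlier_range: maximum absolute distance for another candidate to
    count as an inlier of a given candidate.
    Returns (offset, confidence_level).
    """
    candidates = np.asarray(candidates)
    best_offset, best_inliers = None, -1
    for c in candidates:
        # count every candidate within the inlier range of c (c included)
        inliers = int(np.sum(np.abs(candidates - c) <= inlier_range))
        if inliers > best_inliers:
            best_offset, best_inliers = int(c), inliers
    # confidence level: fraction of candidates agreeing with the winner
    confidence = best_inliers / len(candidates)
    return best_offset, confidence
```

With candidates [5, 5, 6, 4, 5, 30, -12, 5] and an inlier range of 1, the function selects offset 5 with a confidence level of 0.75.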
The basic idea of the proposed algorithm for dealing with the presence of outliers is to choose the most probable candidate offset within a predefined range. As can be seen in Figure 3, the correct time reference value forms a dense distribution with a very small deviation, while the outliers, even when analyzed individually, belong to a much wider distribution.
Figure 4 demonstrates the signal flow of the proposed algorithm. The first and second lines show a VBR signal pair; each of the other lines illustrates the signals obtained from each step of the algorithm. The VBR signals are chopped into sequential windows of a given size and overlap ratio; in Figure 4, the overlapped windows lie between the vertical red lines. The third line shows the calculated PCC for each window. Then, a Number of Candidates (N.C.) in the form of local peak values is extracted from each PCC, as shown in the fourth line. The local peak values are cumulative, as they help increase the statistical precision and confidence. The last line shows the results of the classification algorithm in the form of a bar graph: each bar represents the count of candidates from the fourth line falling within a predefined range. In the example of Figure 4, fixed values are configured for the window size, the overlap ratio, the inlier range, and the number of candidates.
In the example of Figure 4, the proposed algorithm identified the correct time offset between the VBR signal pair at the second window analyzed. However, not all cases converge as quickly as the example above. Although the proposed algorithm is very simple, there are several parameters that can be configured to maximize its performance. These parameters are described below, and the experiments concerning them are described in the next section.
3.1.1. Window Size N and Overlap Ratio
Considering the real-time requirements of the proposed synchronization model and the usage of the correntropy time-lag analysis function, the windowing strategy is chosen to allow the application of the correntropy time-lag estimator to parallel segment pairs of size N. These segments are extracted from each VBR signal sequentially with a degree of sample overlap.
However, this introduces some drawbacks. The proposed method cannot extract a valid time reference from a pair of VBR signals if the absolute difference between them is greater than two-thirds of the window size. Two factors contribute to this behavior: the first is related to the m parameter of the cross-correntropy time-lag estimator, which is limited to values between -2N/3 and 2N/3; the second concerns the parallel structure of the window analysis, as for both VBR signals the window pairs start and end at the same sample.
Consider the following example, where the window size N is 900 samples: the maximum absolute offset value cannot exceed 600 samples. For the same window size and an overlap ratio of 2/3, a new window starts every 300 samples and extends for 900 samples. Additionally, for N = 900 and an overlap ratio of 0.98, a new window starts every 18 samples.
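The window arithmetic in this example follows from the shift between consecutive windows being N times one minus the overlap ratio; a minimal sketch (the function name is illustrative):

```python
def window_starts(n_samples, window_size, overlap_ratio):
    """Start index of each analysis window of size N over a signal.

    Consecutive windows are shifted by N * (1 - overlap_ratio) samples,
    so e.g. N = 900 with ratio 2/3 gives a shift of 300 samples, and
    N = 900 with ratio 0.98 gives a shift of 18 samples.
    """
    shift = int(window_size * (1.0 - overlap_ratio))
    return list(range(0, n_samples - window_size + 1, shift))
```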
From a probabilistic viewpoint, there is an intrinsic relationship between the window size N and the overlap ratio. To increase the probability of a correct time reference classification, we should select only the windows without any uncorrelated segments; however, there is no simple way to perform this selection.
Our approach is to use sequential windows of size N, each shifted by a fixed number of samples from the previous window. For a small window size, the overlap ratio contributes less to the probability of choosing a window free of uncorrelated segments, as the number of shifted samples decreases when N is small. For bigger window sizes, the uncorrelated segments contribute less to an incorrect analysis if the majority of samples in the window are error-free. In this case, the overlap ratio increases the number of properly analyzed windows, creating more trustworthy values.
For online synchronization systems, the window size and the overlap ratio play an important role in the system response time to achieve synchronization. For each PCC, a full window of samples must first be acquired before a set of possible synchronization candidates can be built. This means that the overlap ratio can decrease the response time spent achieving synchronization for long window sizes.
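One way to quantify this response-time effect, assuming samples arrive at a constant rate (the function name is an illustration, not from the original text):

```python
def time_to_candidates(window_size, overlap_ratio, n_windows):
    """Samples elapsed before the n-th PCC candidate set is available.

    The first window needs a full N samples; each following window only
    needs the N * (1 - overlap_ratio) newly arriving samples, which is
    how a high overlap ratio shortens the time between candidate sets.
    """
    shift = int(window_size * (1.0 - overlap_ratio))
    return window_size + (n_windows - 1) * shift
```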
Regarding computational complexity, this window strategy requires the correntropy estimator's storage for each concurrently overlapped window, so the space requirement grows with both the window size and the overlap ratio.
3.1.2. Inlier Range
In some cases, the majority of candidates extracted from the PCC of the whole signal may reveal the correct timing offset. However, the candidate distribution shows that the offset value lies within a range of values instead of at a single point. Therefore, the calculation of the time reference offset must consider values within a certain predefined range.
The idea behind the inlier range parameter is to separate the inlier candidates from the outliers by their distribution width. For now, this value is chosen empirically, and an experiment regarding this parameter is described in the next section.
3.1.3. Confidence Level and Number of Candidates
The confidence level is an output parameter that measures the certainty in the result of a time-lag analysis. Furthermore, it is a measure of the precision of the classification, and it provides a reference value for deciding whether the classification is reliable. The parameter is intrinsically related to the number of candidates extracted as the descending local maxima of each PCC. Using several descending local maxima to capture candidates decreases the probability that any single candidate is the correct offset value; in general, this decreases the confidence level of the time reference classifications.