Particle-Swarm-Optimization-Enhanced Radial-Basis-Function-Kernel-Based Adaptive Filtering Applied to Maritime Data

The real-life signals captured by different measurement systems (such as those of modern maritime transport, characterized by challenging and varying operating conditions) are often subject to various types of noise and other external influences during the data collection and transmission processes. Filtering algorithms are therefore required to reduce the noise level in measured signals, thus enabling more efficient extraction of useful information. This paper proposes a locally adaptive filtering algorithm based on the radial basis function (RBF) kernel smoother with variable width. The kernel width is calculated using the asymmetrical combined-window relative intersection of confidence intervals (RICI) algorithm, whose parameters are adjusted by applying a particle swarm optimization (PSO) based procedure. The proposed RBF-RICI algorithm's filtering performance is analyzed on several simulated, synthetic noisy signals, showing its efficiency in noise suppression and filtering error reduction. Moreover, compared to the competing filtering algorithms, the proposed algorithm provides better or competitive filtering performance in most of the considered test cases. Finally, the proposed algorithm is applied to noisy measured maritime data, proving to be a possible solution for a successful practical application in data filtering in maritime transport and other sectors.


Introduction
In today's world, due to the advances in digital technologies, vast amounts of data are continuously acquired by different measurement systems, covering all fields of human activity. These data are then used as input to various algorithms and analysis procedures. However, the obtained real-life signals are often corrupted by various types of noise, which occur due to numerous environmental factors and to the data acquisition and transmission processes themselves. Therefore, prior to the further exploitation and analysis of such data, they need to be processed by filtering algorithms to reduce the noise, thus enabling more efficient reconstruction of the original information content [1][2][3][4][5][6][7].
The modern maritime transport sector is an example of a complex system consisting of different measurement subsystems and communication systems used to transmit collected information. Moreover, the future development of autonomous shipping increases the amount of data acquired during ship operation, thus placing additional demands on data quality. Maritime applications imply operation under difficult and rapidly changing environmental conditions, which, accompanied by the reduced availability of reliable communication channels, leads to high noise levels in the acquired data. Therefore, the implementation of existing filtering algorithms and the development of application-specific ones for the reconstruction of useful information from noise-corrupted signals is an important field of scientific research within maritime and transport engineering [8]. The research has been focused on different areas of application, including underwater signal processing [9][10][11][12], radar signal processing [13,14], underwater image processing [15][16][17], optical signal processing [18], and other applications [19][20][21].
The parametric filtering algorithms require knowledge of the underlying signal and noise characteristics. In many practical applications, such information is not available, and nonparametric filtering algorithms are used [22]. The main task in these algorithms is finding the optimal filter width that controls the level of signal smoothing. The nonparametric methods may be broadly classified into the plug-in methods and the quality-of-fit methods (such as, for example, cross-validation) [22]. The plug-in methods are computationally demanding due to the complex formulae for calculating the optimal filter width based on the estimation bias and variance. On the other hand, the data-driven cross-validation method does not require bias estimation. The optimal filter width minimizes the estimation mean squared error (MSE), resulting in an optimal bias-variance trade-off [22].
An easy-to-implement and locally adaptive filtering algorithm is based on the intersection of confidence intervals (ICI) rule [23] combined with the local polynomial approximation (LPA) [24,25]. The LPA-ICI algorithm requires only the estimation of the signal and noise variance. The ICI rule for the optimal filter width selection was upgraded in [26] and named the relative intersection of confidence intervals (RICI) rule, keeping the good properties of the ICI rule while at the same time improving the filtering performance in terms of MSE and reducing the method's sensitivity to suboptimal parameter selection.
As may be expected, the filtering performances of the LPA-ICI and LPA-RICI algorithms are affected by the selected parameter values. Different data-driven techniques have been analyzed for the selection of the ICI rule's parameter [24]. However, the RICI algorithm requires a simultaneous adjustment of two parameters, which has not been investigated in depth in the literature. Therefore, an approach based on a grid search in the parameter space is generally used when applying the RICI algorithm. The time-consuming nature of this procedure creates a demand for a faster solution. The study in [27] proposed a simulation-based method for the selection of appropriate values of the RICI rule's parameters with respect to the obtained MSE. However, the provided empirical formula defines regions of near-optimal parameter values and was obtained by analyzing a limited number of signal classes. Moreover, the formula for the calculation of one parameter's value requires the other parameter's value as an input, thus leaving the problem of its proper selection. Therefore, a more general approach, which guarantees optimal parameter values, is required.
In this paper, we propose an adaptive filtering algorithm based on the radial basis function (RBF) kernel smoother with variable width. The kernel width is calculated using the asymmetrical combined-window RICI algorithm. This data-driven, locally adaptive algorithm requires only the noise variance estimation (and does not require the bias estimation nor any information about the underlying signal and noise). In order to reduce the time required by the procedure based on a grid search in the parameter space, the RICI algorithm is upgraded by the particle swarm optimization (PSO) based procedure. We analyze the proposed RBF-RICI algorithm's filtering performance by applying it to several synthetic, simulated noisy signals. The paper elaborates on the RBF-RICI's efficiency in noise suppression and the filtering error reduction. Moreover, we compare the proposed algorithm to the competing filtering algorithms and show that it provides better or competitive filtering performance in most considered test cases. Additionally, we have performed a comparative analysis of several evolutionary metaheuristic optimization algorithms applied to selecting the RBF-RICI algorithm's parameters, including the genetic algorithm (GA) and three algorithms based on the PSO. Finally, the proposed RBF-RICI algorithm is applied to the noisy, real-life measured maritime data, proving to be a potential solution for a successful practical application in data filtering in the maritime and other similar sectors.
The rest of this paper is structured as follows. The proposed filtering algorithm and optimization procedures are described in Section 2. The obtained results are thoroughly presented and discussed in Section 3. Finally, the conclusions are summarized in Section 4.

Materials and Methods
We consider the noisy observations y(k) of the original signal x(k):

y(k) = x(k) + n(k), (1)

where n(k) is the additive white Gaussian noise. The noisy signal y(k) has to be filtered in order to obtain the estimate x̂(k) of the original signal with the minimum estimation error. In this work, we propose an adaptive filtering algorithm combining the RBF kernel smoother with the asymmetrical combined-window RICI procedure, whose parameters are adjusted utilizing the PSO algorithm.
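As an illustration, the additive noise model of (1) can be simulated at a prescribed SNR. The following Python sketch (using NumPy) corrupts a clean signal with white Gaussian noise; the HeaviSine-like test signal used here is chosen purely for illustration and is not the paper's exact signal definition:

```python
import numpy as np

def add_noise(x, snr_db, rng=None):
    """Corrupt a clean signal x with white Gaussian noise at a target SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(x ** 2)
    # Noise power follows from SNR = 10 log10(P_signal / P_noise)
    noise_power = signal_power / (10 ** (snr_db / 10))
    n = rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
    return x + n

# Example: a HeaviSine-like test signal corrupted at 7 dB SNR
k = np.arange(1024)
x = 4 * np.sin(4 * np.pi * k / 1024) - np.sign(k / 1024 - 0.3)
y = add_noise(x, snr_db=7, rng=np.random.default_rng(0))
```

The empirical SNR of `y` matches the requested level up to the sampling variability of the finite noise realization.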

The RBF-Kernel-Based Adaptive Filtering
The kernel smoother estimates the signal sample value as the weighted average of the neighboring observed sample values. The weights assigned to the specific samples are determined by the kernel type. Nonrectangular kernels assign higher weights to the samples closer to the considered one and smaller weights to those farther away from it [28].
The RBF or Gaussian kernel K(k_0, k, h) is defined as:

K(k_0, k, h) = exp(−(k − k_0)² / (2h²σ²)), (2)

where k_0 is the sample of interest, k is the neighboring signal sample, h is the kernel width, and σ is the RBF kernel scale, which is referred to as the standard deviation when considering the Gaussian probability density function. The Nadaraya-Watson kernel-weighted estimate [29] of the signal value at the considered sample is given by:

x̂(k_0) = Σ_k K(k_0, k, h) y(k) / Σ_k K(k_0, k, h). (3)

The kernel width h is a parameter that controls the estimation accuracy and the smoothness of the estimated signal. The kernels with larger widths include more samples in the estimation procedure, causing decreased estimation variance and, at the same time, increased estimation bias. On the other hand, smaller kernel widths lead to an increase in the estimation variance and, simultaneously, a decrease in the estimation bias due to the reduced number of samples taken into the estimation procedure [24].
Therefore, the selection of the proper kernel width determines the efficiency of the applied filtering algorithm. As opposed to the constant kernel width used for the entire duration of the signal, the varying kernel width enables the adaptation to the local signal features. The kernel width providing the optimal trade-off between estimation bias and variance is here calculated using the adaptive RICI algorithm.
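A minimal sketch of the RBF-kernel Nadaraya-Watson smoother with a (possibly per-sample) width might look as follows; the function names and the brute-force per-sample loop are illustrative only, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(k0, k, h, sigma=1.0):
    # Gaussian (RBF) kernel: weight decays with distance from the sample of interest
    return np.exp(-((k - k0) ** 2) / (2.0 * (h * sigma) ** 2))

def nw_estimate(y, h, sigma=1.0):
    """Nadaraya-Watson estimate with a scalar or per-sample kernel width h."""
    k = np.arange(len(y))
    h = np.broadcast_to(np.asarray(h, dtype=float), k.shape)
    x_hat = np.empty_like(y, dtype=float)
    for k0 in k:
        w = rbf_kernel(k0, k, h[k0], sigma)
        # Weighted average of all observations around the sample of interest
        x_hat[k0] = np.sum(w * y) / np.sum(w)
    return x_hat
```

Passing an array for `h` allows the width to vary per sample, which is exactly where the RICI procedure plugs in to supply the locally adaptive widths.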
The absolute value of the error ε_n(k, h) obtained by the kernel-smoother-based estimation procedure is defined as:

ε_n(k, h) = |x(k) − x̂_n(k, h)|, (4)

where x̂_n(k, h) represents the signal sample value estimated using n samples in its vicinity by applying the kernel with varying width h(k). The mean squared estimation error, MSE(k, h), may be defined, with respect to the estimation bias b_n(k, h) and the estimation variance σ²_n(k, h), as [30,31]:

MSE(k, h) = b²_n(k, h) + σ²_n(k, h). (5)

The crucial task of the adaptive filtering procedure is selecting the kernel width h_o(k) that minimizes MSE(k, h), thus providing the optimal bias-variance trade-off [30,31]:

h_o(k) = arg min_h MSE(k, h). (6)

Note that the original signal values x(k) are, with the confidence p, contained within the confidence intervals δ_n(k, h) [24,31]:

δ_n(k, h) = [δ_l,n(k, h), δ_u,n(k, h)], (7)

where δ_l,n(k, h) and δ_u,n(k, h) are the lower and the upper confidence limits, respectively. The confidence limits are defined using the ICI threshold parameter Γ, representing the critical value of the confidence interval:

δ_l,n(k, h) = x̂_n(k, h) − Γ σ_n(k, h), (8)

δ_u,n(k, h) = x̂_n(k, h) + Γ σ_n(k, h). (9)

The initial stage of the RICI algorithm is identical to the ICI rule [23], which provides a set of N growing kernel widths H = {h_1, h_2, . . . , h_N}, h_1 < h_2 < · · · < h_N, and the corresponding confidence intervals δ_n(k, h) for each signal sample k [24,25,31].
The ICI rule algorithm tracks the intersection of the confidence intervals and provides the largest kernel width h_ICI(k) for which the intersection still exists [24,25,31]:

δ̄_l,n(k, h) ≤ δ̲_u,n(k, h), (10)

where δ̄_l,n(k, h) = max_{1≤i≤n} δ_l,i(k, h_i) is the largest lower and δ̲_u,n(k, h) = min_{1≤i≤n} δ_u,i(k, h_i) is the smallest upper confidence limit [24,31].
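The one-sided ICI width selection described above can be sketched as follows, assuming a simple moving-average estimator (so that σ_n = σ/√n); the function and its defaults are illustrative, not the paper's exact implementation:

```python
import numpy as np

def ici_width(y, k, widths, sigma, gamma=4.4):
    """Largest one-sided window width (to the left of sample k) for which
    the ICI rule's confidence intervals still intersect.
    Assumes a moving-average estimator, so sigma_n = sigma / sqrt(n)."""
    lower_max, upper_min = -np.inf, np.inf
    h_ici = widths[0]
    for h in widths:
        lo = max(k - h + 1, 0)
        x_hat = np.mean(y[lo:k + 1])           # estimate from h left-side samples
        sigma_n = sigma / np.sqrt(k + 1 - lo)  # std of the moving average
        lower = x_hat - gamma * sigma_n
        upper = x_hat + gamma * sigma_n
        lower_max = max(lower_max, lower)      # largest lower limit so far
        upper_min = min(upper_min, upper)      # smallest upper limit so far
        if lower_max > upper_min:              # intervals no longer intersect
            break
        h_ici = h
    return h_ici
```

Near a discontinuity the intervals quickly stop intersecting, so the rule keeps the window short; in smooth regions it grows to the largest available width.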
In this work, we applied the asymmetrical combined-window RICI approach, where the above-described procedure is implemented independently to the left and right side of the considered sample k, resulting in two sets of confidence intervals δ l (k, h) and δ r (k, h) for each signal sample k [27].
The algorithm tracks the intersection of the currently calculated nth confidence interval with the intersection of all previous n − 1 confidence intervals on each side of the considered kth sample independently. This results in h_l,ICI(k) and h_r,ICI(k) as the largest kernel widths satisfying (10) for the left and the right side, respectively [27]. Finally, the candidate for the optimal width of the asymmetrical kernel is obtained by combining h_l,ICI(k) and h_r,ICI(k):

h_ICI(k) = h_l,ICI(k) + h_r,ICI(k). (11)

The ICI algorithm's performance is highly sensitive to the selection of the optimal value of the threshold parameter Γ, as too small values result in signal undersmoothing, and too large values cause signal oversmoothing [24,30].
In order to make the ICI rule stage more robust to suboptimal Γ values, the second stage of the algorithm includes the RICI rule upgrade, which applies the additional criterion for the adaptive kernel width selection to the kernel width candidates obtained by the ICI rule. The RICI rule improves the estimation accuracy of the ICI rule for the same values of the threshold parameter Γ [26,27,32,33].
The RICI criterion λ_n(k, h) tracks the relative amount of confidence interval overlap by calculating the ratio of the width of the intersection of the confidence intervals and the current confidence interval's width [26,27]:

λ_n(k, h) = (δ̲_u,n(k, h) − δ̄_l,n(k, h)) / (δ_u,n(k, h) − δ_l,n(k, h)). (12)

The calculated RICI ratio λ_n(k, h) is compared to the preset RICI threshold value R_c (0 ≤ R_c ≤ 1) [27]:

λ_n(k, h) ≥ R_c. (13)

Similar to the ICI rule stage, the RICI rule is also applied independently to both sides of the considered sample, resulting in h_l,RICI(k) and h_r,RICI(k), which are the largest kernel widths satisfying (10) and (13) for the left and right side, respectively [27]. Finally, the width of the asymmetrical kernel h_RICI(k) obtained by the RICI algorithm is calculated as:

h_RICI(k) = h_l,RICI(k) + h_r,RICI(k). (14)
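The RICI upgrade adds the relative-overlap test to the same interval-tracking loop. Below is a left-sided sketch under the same moving-average assumption as before (σ_n = σ/√n); it is an illustration of the rule, not the paper's exact code:

```python
import numpy as np

def rici_width(y, k, widths, sigma, gamma=4.4, r_c=0.85):
    """Left-sided RICI width: besides the ICI intersection test, require the
    relative overlap lambda_n to stay at or above the threshold R_c."""
    lower_max, upper_min = -np.inf, np.inf
    h_rici = widths[0]
    for h in widths:
        lo = max(k - h + 1, 0)
        x_hat = np.mean(y[lo:k + 1])
        sigma_n = sigma / np.sqrt(k + 1 - lo)
        lower, upper = x_hat - gamma * sigma_n, x_hat + gamma * sigma_n
        lower_max, upper_min = max(lower_max, lower), min(upper_min, upper)
        # Relative overlap: intersection width over current interval width
        lam = (upper_min - lower_max) / (upper - lower)
        if lam < r_c:
            break
        h_rici = h
    return h_rici
```

Because λ_n ≤ 1 by construction and the first interval always overlaps itself completely, the loop accepts at least the smallest width; raising R_c makes the rule stricter, so the selected width can only shrink.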

The Evolutionary Metaheuristic Optimization Algorithms
In real-world engineering applications, optimization problems are often nonlinear, NP-hard (nondeterministic polynomial time-hard), and nondifferentiable. The traditional techniques require a mathematical formulation of the considered problem, which is often not possible. However, evolutionary metaheuristic optimization algorithms have proved to be powerful tools for nonlinear optimization problems. They overcome the limitations of the traditional techniques for nondifferentiable, noncontinuous, and nonconvex objective functions. In this paper, we use and compare several evolutionary metaheuristic optimization algorithms based on the PSO and the GA. The generalized pseudocodes for Algorithms A1-A4 are provided in Appendix A.

Traditional PSO Algorithm
The PSO is a population-based iterative algorithm, where all particles are gathered in one population, called the swarm [34][35][36]. Each particle represents a potential solution of the optimization problem in the solution space. In each iteration, particles adjust their flying trajectories according to personal and global experiences. Let us denote by s the number of particles, or the population size, and by d the dimension of the solution space. The d-dimensional solution space is defined by the problem variables that need to be optimized [34,35]. Particles move in the solution space by updating their velocity vector, which is for the i-th particle (i ∈ [1, s]) defined as v_i = (v_i,1, v_i,2, . . . , v_i,d), and their position vector, x_i = (x_i,1, x_i,2, . . . , x_i,d), according to the following equations:

v_i,d(it + 1) = w v_i,d(it) + c_1 r_1 (p_i,d − x_i,d(it)) + c_2 r_2 (p_g,d − x_i,d(it)), (15)

x_i,d(it + 1) = x_i,d(it) + v_i,d(it + 1), (16)

where r_1 and r_2 are random numbers sampled from the range [0, 1], w is the inertia weight, and c_1 and c_2 denote the acceleration coefficients which control the particle movement toward the personal best p_i,d and the global best p_g,d, respectively [34,35]. Apart from limiting particle positions for the constrained problem, the value of each velocity v_i,d can also be limited to the range [v_min,d, v_max,d] in order to reduce the likelihood of particles moving too fast in smaller solution spaces. A suitable selection of the inertia weight and the acceleration coefficients provides a balance between global and local exploration and exploitation. Further research has claimed that improved PSO performance could be gained if the inertia weight were chosen as a linearly decreasing value rather than a constant over all iterations. The idea is that the PSO search should start with a high inertia weight for strong global exploration and, by linearly decreasing it, emphasize finer local exploration in later iterations [34,35].
The inertia weight w used in our simulations is given as:

w = w_max − (w_max − w_min) · it / MaxIt, (17)

where it and MaxIt stand for the current and the maximum PSO iteration, respectively, and w_max and w_min are the initial and the final inertia weight values [34,35].
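A compact Python sketch of the traditional PSO with the linearly decreasing inertia weight might read as follows; the parameter defaults are common textbook choices and are assumptions here, not necessarily the paper's settings:

```python
import numpy as np

def pso(objective, bounds, s=20, max_it=50, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, seed=0):
    """Minimal PSO with linearly decreasing inertia weight (a generic sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    x = rng.uniform(lo, hi, size=(s, d))          # particle positions
    v = np.zeros((s, d))                          # particle velocities
    p_best = x.copy()                             # personal best positions
    p_cost = np.array([objective(xi) for xi in x])
    g_best = p_best[np.argmin(p_cost)].copy()     # global best position
    for it in range(max_it):
        w = w_max - (w_max - w_min) * it / max_it  # linearly decreasing inertia
        r1, r2 = rng.random((s, d)), rng.random((s, d))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)                # keep particles inside bounds
        cost = np.array([objective(xi) for xi in x])
        improved = cost < p_cost
        p_best[improved], p_cost[improved] = x[improved], cost[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best, p_cost.min()

# Example: minimize a shifted sphere function over [-5, 5]^2
best, cost = pso(lambda z: np.sum((z - 1.5) ** 2), bounds=[(-5, 5), (-5, 5)])
```

The same interface (an objective over a bounded box) is all that is needed to plug in the RBF-RICI parameter selection problem.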

Enhanced Partial Search Particle Swarm Optimization (EPS-PSO) Algorithm
In some cases, the standard PSO might reach a stagnant state, meaning that the global best position cannot be improved over iterations. To avoid the PSO's convergence to a local optimum, a supplementary search direction is obtained by introducing an additional population that takes over the global search assignment. This PSO variation is termed the Enhanced Partial Search Particle Swarm Optimization (EPS-PSO) [35].
At the start of the EPS-PSO algorithm, the entire population is divided equally into two subswarms, namely the traditional subswarm and the cosearch swarm. The main difference between the two swarms is that the cosearch swarm is periodically reinitialized every t_g iterations, where t_g is called the reinitialization period [35]. The reinitialization of the particle positions is performed uniformly in the search space. However, there is an exception: if the current global best position of the cosearch swarm, denoted as p_CO−g,d, outperforms the global best position of the traditional swarm, denoted as p_T−g,d, then the reinitialization of the cosearch swarm is called off. The two subswarms basically work independently. The only cooperation between the subswarms occurs when the cosearch swarm finds better outcomes than the traditional swarm. In that case, the velocity vector of the traditional swarm is updated based on p_CO−g,d instead of p_T−g,d [35].
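The reinitialization decision of the cosearch subswarm can be sketched as a small helper; the name `maybe_reinit_cosearch` and its signature are hypothetical and only illustrate the rule described above:

```python
import numpy as np

def maybe_reinit_cosearch(x_co, it, t_g, co_best_cost, trad_best_cost,
                          lo, hi, rng):
    """EPS-PSO cosearch housekeeping (sketch): every t_g iterations the cosearch
    subswarm is re-drawn uniformly in the search space, unless its global best
    currently outperforms the traditional subswarm's global best (minimization)."""
    if it > 0 and it % t_g == 0 and co_best_cost >= trad_best_cost:
        return rng.uniform(lo, hi, size=x_co.shape)  # uniform reinitialization
    return x_co                                      # reinitialization called off
```

The surrounding PSO loop would call this once per iteration and, whenever the cosearch best beats the traditional best, use it in place of the traditional swarm's global best in the velocity update.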

Multiswarm Particle Swarm Optimization (MSPSO)
The Multiswarm Particle Swarm Optimization (MSPSO) algorithm is another PSO variant that also introduces additional swarms [37]. Therefore, the total population depends not only on the number of particles but also on the number of swarms, denoted as nSwarm. The number of swarms determines the degree of communication between the swarms. Studies have shown that the algorithm performance can be improved by involving more swarms in the process [35,37]. However, introducing more swarms also increases the number of evaluations. For our optimizations, we have divided the same population size as in the previous PSO variants into multiple swarms.

Genetic Algorithm (GA)
The GA is a stochastic algorithm designed to mimic some of the procedures of natural evolution [36]. In the GA, new particles, also called offspring, are formed by combining two particles from the current generation, called parents, using the crossover and mutation operators. A new generation is formed by keeping the best performing particles from among the parents and offspring. The crossover and mutation operators are genetic operators based on the Darwinian principle of survival of the best performing particles in the population. In the GA used in this paper, the numbers of offspring and mutants, n_c and n_m, respectively, are controlled by the parameters p_c and p_m [36].
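A minimal real-coded GA along these lines might be sketched as follows; the arithmetic crossover and Gaussian mutation operators are common generic choices and are assumptions here, not details reported in the paper:

```python
import numpy as np

def ga(objective, bounds, pop_size=30, max_gen=60, p_c=0.8, p_m=0.3, seed=0):
    """Minimal real-coded GA sketch: arithmetic crossover, Gaussian mutation,
    elitist survivor selection. p_c and p_m control the numbers of offspring
    and mutants, mirroring the paper's parameterization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    n_c = int(round(p_c * pop_size / 2)) * 2      # offspring count (kept even)
    n_m = int(round(p_m * pop_size))              # mutant count
    for _ in range(max_gen):
        children = []
        for _ in range(n_c // 2):
            a, b = pop[rng.integers(pop_size, size=2)]
            alpha = rng.random(d)                 # arithmetic crossover
            children += [alpha * a + (1 - alpha) * b,
                         alpha * b + (1 - alpha) * a]
        mutants = pop[rng.integers(pop_size, size=n_m)] \
            + rng.normal(0, 0.1 * (hi - lo), size=(n_m, d))
        merged = np.clip(np.vstack([pop, children, mutants]), lo, hi)
        cost = np.apply_along_axis(objective, 1, merged)
        pop = merged[np.argsort(cost)[:pop_size]]  # keep the best performers
    return pop[0], objective(pop[0])
```

Keeping parents in the merged pool makes the survivor selection elitist, so the best cost is nonincreasing across generations.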

Results and Discussion
In order to investigate the efficiency of the proposed RBF-RICI adaptive filtering algorithm, we have applied it to several synthetic signals, including the Blocks, Bumps, Doppler, HeaviSine, Piece-Regular, and Sing signals. These signals are chosen as they represent good models of typical real-world data found in various engineering and signal processing applications. For example, the piecewise constant Blocks signal is used as a model of the acoustic impedance of a layered medium in geophysics or of the one-dimensional profile along certain images in image processing applications [38]. Moreover, the Bumps signal represents spectra arising in nuclear magnetic resonance (NMR), absorption, and infrared spectroscopy [38], whereas the Piece-Regular signal is similar to wave arrivals in a seismogram. Each considered signal is corrupted by additive white Gaussian noise and studied for three cases corresponding to signal-to-noise ratio (SNR) values of 5, 7, and 10 dB.
The RBF-RICI algorithm is also compared to the zero-order LPA-RICI, the original LPA-ICI, and the Savitzky-Golay [39] filtering algorithms. To facilitate the quantitative analysis of the tested algorithms, the following filtering quality indicators were calculated for signals of length N_k:

Mean squared error (MSE):

MSE = (1/N_k) Σ_{k=1}^{N_k} (x(k) − x̂(k))², (18)

Mean absolute error (MAE):

MAE = (1/N_k) Σ_{k=1}^{N_k} |x(k) − x̂(k)|, (19)

Maximum absolute error (MAXE):

MAXE = max_k |x(k) − x̂(k)|, (20)

Peak signal-to-noise ratio (PSNR):

PSNR = 10 log_10 (max_k |x(k)|² / MSE), (21)

Improvement in the signal-to-noise ratio (ISNR):

ISNR = 10 log_10 (Σ_{k=1}^{N_k} (y(k) − x(k))² / Σ_{k=1}^{N_k} (x̂(k) − x(k))²). (22)

The RBF-RICI algorithm, as well as the other tested algorithms, is applied with the optimal parameters for each considered signal, i.e., the parameters that minimize the estimation MSE. The optimal parameter values of the LPA-ICI algorithm are found in the range of 0 ≤ Γ ≤ 5. The Savitzky-Golay filtering algorithm is applied with a second-order polynomial and the optimal window width h. The optimal parameters of the RBF-RICI and LPA-RICI algorithms are found by running the exhaustive grid search in the parameter space defined by the parameter value ranges 0 ≤ Γ ≤ 5 and 0 ≤ R_c ≤ 1. The search is conducted with a fine parameter value resolution of 10⁻², resulting in a total of 50,000 iterations and algorithm evaluations. To speed up this time-consuming and strenuous process, the PSO algorithm is proposed for the RBF-RICI parameter optimization. Different PSO algorithm realizations, as well as the genetic algorithm, are tested, and their performances are compared. The obtained results are presented and discussed in the rest of this section.
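The five quality indicators can be computed directly from the clean signal, its noisy observation, and the filtered estimate. The sketch below uses the conventional definitions (e.g., PSNR with respect to the signal's peak value), which may differ in detail from the paper's exact formulas:

```python
import numpy as np

def filtering_metrics(x, y, x_hat):
    """Filtering quality indicators for clean signal x, noisy observation y,
    and filtered estimate x_hat."""
    err = x - x_hat
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    maxe = np.max(np.abs(err))
    # PSNR measured against the peak of the clean signal
    psnr = 10 * np.log10(np.max(np.abs(x)) ** 2 / mse)
    # ISNR: how much the filter improved the SNR relative to the noisy input
    isnr = 10 * np.log10(np.sum((y - x) ** 2) / np.sum(err ** 2))
    return {"MSE": mse, "MAE": mae, "MAXE": maxe, "PSNR": psnr, "ISNR": isnr}
```

A positive ISNR means the filter output is closer to the clean signal than the noisy observation was; ISNR = 0 corresponds to a filter that returns the input unchanged.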

Blocks Signal
The original Blocks signal is shown in Figure 1a, while Figure 1b shows its noisy version with the SNR of 7 dB. The results obtained by applying the RBF-RICI filtering algorithm, in terms of the comparison of the original and the filtered signal, are presented in Figure 1c, while Figure 1d shows the filtering error. As can be seen, the RBF-RICI algorithm reduces the noise and reconstructs the main features of the original Blocks signal well, with slightly degraded performance near the instantaneous signal value changes. The filtering results obtained by the RBF-RICI, the LPA-RICI, the LPA-ICI, and the Savitzky-Golay filtering algorithm applied to the noisy Blocks signal at SNRs of 5, 7, and 10 dB are given in Tables 1-3, respectively. The quantitative comparison is provided by calculating the filtering quality indicators MSE, MAE, MAXE, PSNR, and ISNR for the tested algorithms with the optimal parameters, i.e., the parameters that minimize the filtering MSE. The best filtering quality indicators in each table, i.e., the lowest values of MSE, MAE, and MAXE, and the highest values of PSNR and ISNR, are marked in bold. The results presented in Tables 1-3 suggest that the RBF-RICI algorithm applied to the noisy Blocks signal provides satisfactory filtering performance, reducing the filtering errors and improving the SNR of the signal. With the decrease of the SNR, the RBF-RICI algorithm's filtering quality is also somewhat reduced but remains competitive.
The filtering quality improvement of the RBF-RICI algorithm over the other tested algorithms for the Blocks signal is calculated, and the percentage values are given in Tables 4-6. The results presented in Tables 4-6 suggest that, in most cases, the RBF-RICI algorithm applied to the noisy Blocks signal provides improved or competitive filtering quality when compared to the other tested algorithms.
The runtimes have also been computed for each tested filtering algorithm. The algorithm runtimes have been obtained on a computer with the Intel Xeon CPU E5-2620 v4 @ 2.10 GHz and 128 GB of RAM. The results have been averaged over 1000 algorithm runs. The runtimes of the filtering algorithms applied to the noisy Blocks signal at SNRs of 5, 7, and 10 dB are shown in Table 7. The presented results suggest that the proposed RBF-RICI algorithm performs competitively to the LPA-RICI and LPA-ICI algorithms in terms of execution speed, with a dependence on the selected parameter values. However, the Savitzky-Golay filtering algorithm provides significantly faster performance than the other tested algorithms.

Bumps Signal
Figure 2a shows the original Bumps signal, whose noise-corrupted version with the SNR of 7 dB is shown in Figure 2b. The comparison of the original and the RBF-RICI filtered Bumps signal is given in Figure 2c, with the filtering error shown in Figure 2d. As shown in Figure 2c, the RBF-RICI algorithm provides very good noise reduction performance. Tables 8-10 provide the filtering quality indicators obtained by the optimized RBF-RICI, LPA-RICI, LPA-ICI, and Savitzky-Golay filtering algorithms applied to the noisy Bumps signal for SNRs of 5, 7, and 10 dB, respectively. The results indicate that the RBF-RICI algorithm provides good filtering performance in terms of noise suppression and filtering error reduction. As expected, the filtering performance slightly declines with the decreasing SNR. Tables 11-13 give the relative comparison of the filtering quality obtained by the RBF-RICI algorithm and the other tested algorithms applied to the Bumps signal at SNRs of 5, 7, and 10 dB, respectively. The presented comparison indicates that the proposed RBF-RICI algorithm provides better performance than all other tested algorithms for each considered filtering quality indicator and SNR value.
The filtering algorithms' runtimes in the case of the Bumps signal filtered at SNR levels of 5, 7, and 10 dB are given in Table 14. The RBF-RICI algorithm shows performance that is significantly slower than that obtained by the Savitzky-Golay filtering and somewhat slower than those provided by the LPA-RICI and LPA-ICI algorithms.

Doppler Signal
The original and noisy (SNR = 7 dB) Doppler signals are shown in Figure 3a,b, respectively. Figure 3c shows the original and the RBF-RICI filtered Doppler signal, while the obtained filtering error is shown in Figure 3d. As shown in Figure 3c, the RBF-RICI algorithm provides somewhat poorer filtering performance in the initial part of the Doppler signal, where higher frequency oscillations are present, and the algorithm's performance significantly improves in the rest of the signal with the lower frequency content (and higher amplitudes).
The results obtained by applying the improved filtering algorithm to the noisy Doppler signal are given in Tables 15-17 for SNRs of 5, 7, and 10 dB, respectively. According to the values of the filtering quality indicators, the RBF-RICI technique provides a good filtering performance by reducing the filtering error and increasing the SNR of the signal. The percentage values describing the relative filtering quality improvement of the RBF-RICI algorithm over the LPA-RICI, the LPA-ICI, and the Savitzky-Golay filtering algorithm applied to the Doppler signal at SNRs of 5, 7, and 10 dB are given in Tables 18-20, respectively. The RBF-RICI algorithm outperforms all other tested algorithms for all filtering quality indicators and SNR values, except for the MAXE at the 7 dB SNR.
The runtimes calculated for the filtering algorithms applied to the Doppler signal at SNRs of 5, 7, and 10 dB are given in Table 21. The RBF-RICI algorithm provides runtimes which are competitive to those obtained by the LPA-RICI and LPA-ICI algorithms, with a dependence on the SNR and parameter values. These algorithms are again outperformed by the Savitzky-Golay filtering in terms of execution speed.

HeaviSine Signal
Figure 4a shows the original noise-free HeaviSine signal, while Figure 4b shows its noise-corrupted version at the 7 dB SNR. The comparison between the original and the RBF-RICI filtered signal is given in Figure 4c, while their difference is illustrated in Figure 4d as the filtering error. The proposed filtering algorithm provides noise reduction and estimates the original signal efficiently, with low values of the filtering error for the whole signal duration. The signal value jump at k = 307 is successfully reconstructed, in contrast to the sudden change in the signal value at k = 737, where the algorithm did not adapt fast enough. Tables 22-24 provide the filtering results for the RBF-RICI, the LPA-RICI, the LPA-ICI, and the Savitzky-Golay filtering algorithm in the case of the HeaviSine signal filtered at SNRs of 5, 7, and 10 dB, respectively. The obtained filtering quality indicators suggest that the RBF-RICI algorithm efficiently removes the noise, keeping the filtering error at low values. The filtering quality is somewhat reduced for the lower SNR values but remains satisfactory. Tables 25-27 give the comparison between the filtering quality indicators obtained by the algorithms applied to the HeaviSine signal at SNRs of 5, 7, and 10 dB, respectively. The RBF-RICI algorithm shows better filtering performance than the LPA-RICI and LPA-ICI algorithms for each SNR case, but it is outperformed by the Savitzky-Golay filtering algorithm.
The filtering algorithms' runtimes for the noisy HeaviSine signal considered at SNR levels of 5, 7, and 10 dB are shown in Table 28. The RBF-RICI algorithm is competitive to the LPA-RICI and LPA-ICI algorithms in terms of execution speed, outperforming both methods at the SNR of 10 dB. The Savitzky-Golay filtering provides the fastest performance.

Piece-Regular Signal
The results presented in Tables 29-31 suggest that the proposed RBF-RICI algorithm applied to the noisy Piece-Regular signal provides excellent filtering performance. The performance deteriorates somewhat with decreasing SNR. As shown in Tables 32-34, the RBF-RICI algorithm outperforms all other tested algorithms applied to the Piece-Regular signal for each considered SNR. Table 35 gives the runtimes of the filtering algorithms applied to the noisy Piece-Regular signal at SNRs of 5, 7, and 10 dB. In this case, the RBF-RICI algorithm provides slower performance when compared to the other tested algorithms.

Sing Signal
The original Sing signal and the signal with added white Gaussian noise (SNR = 7 dB) are shown in Figure 6a,b, respectively. The comparison between the original and the filtered signal is given in Figure 6c, with the filtering error shown in Figure 6d. The Sing signal is characterized by zero values for most of its duration and one sudden peak of high amplitude, making it challenging for the filtering algorithm to adapt in a short period of time. The visual inspection of the presented results suggests that the optimized RBF-RICI algorithm provides excellent filtering performance, with efficient noise suppression and an almost perfect match between the original and the filtered signal for the entire signal duration. Tables 36-38 provide the results obtained by the RBF-RICI, the LPA-RICI, the LPA-ICI, and the Savitzky-Golay filtering algorithm when applied to the filtering of the noisy Sing signal at SNRs of 5, 7, and 10 dB, respectively. The analysis of the calculated filtering quality indicators suggests that the RBF-RICI algorithm efficiently removes the noise, decreasing the filtering error and increasing the SNR of the signal. The performance is slightly reduced at the lower SNR values. The comparison of the filtering quality indicators obtained by the tested algorithms applied to the Sing signal, given in Tables 39-41, suggests that the RBF-RICI algorithm shows better filtering performance than the other tested algorithms for each filtering quality indicator and each considered SNR case. Table 42 provides the filtering algorithms' runtimes in the case of the noisy Sing signal filtered at SNR levels of 5, 7, and 10 dB. The results suggest that the RBF-RICI algorithm is outperformed by the other tested algorithms in terms of execution speed.
In addition to the analysis provided above, Figure 7 shows the filtering quality measure MSE as a function of the RBF-RICI algorithm's parameters Γ and R c , for the algorithm applied to each considered noisy signal at the SNR of 7 dB. The MSE values are calculated during a grid search in the parameter space, within the range 0 ≤ Γ ≤ 5 and 0 ≤ R c ≤ 1. As can be seen, the estimation MSE, whose value determines the filtering performance, depends heavily on the proper selection of the algorithm's parameters. In order to reduce the total number of evaluations needed to find the parameter values that minimize the MSE, a PSO-based parameter optimization procedure is proposed.
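The grid search used to produce the MSE surfaces in Figure 7 can be sketched as follows. Since the actual RBF-RICI filter is not reproduced here, a hypothetical surrogate `estimation_mse` with a known minimum stands in for the MSE of the filtered signal; only the parameter ranges 0 ≤ Γ ≤ 5 and 0 ≤ R c ≤ 1 are taken from the text.

```python
import numpy as np

# Hypothetical stand-in for the RBF-RICI estimation MSE: in the paper the MSE
# is obtained by filtering the noisy signal with parameters (Gamma, R_c) and
# comparing against the clean signal; here a surrogate with a known minimum
# at (Gamma, R_c) = (3.0, 0.5) keeps the sketch self-contained.
def estimation_mse(gamma, r_c):
    return (gamma - 3.0) ** 2 + 4.0 * (r_c - 0.5) ** 2

# Exhaustive grid search over 0 <= Gamma <= 5 and 0 <= R_c <= 1.
gammas = np.linspace(0.0, 5.0, 51)   # step 0.1
r_cs = np.linspace(0.0, 1.0, 21)     # step 0.05
mse = np.array([[estimation_mse(g, r) for r in r_cs] for g in gammas])

# Index of the grid point with the lowest MSE.
i, j = np.unravel_index(np.argmin(mse), mse.shape)
best = (gammas[i], r_cs[j])
print(best)
```

A 51 x 21 grid already costs 1071 MSE evaluations, which is why the text proposes a PSO-based procedure to reduce the evaluation count.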

Effects of the Signal Length
The effects of the signal length on the RBF-RICI algorithm's filtering performance are demonstrated on each considered test signal for the SNR of 7 dB, with the obtained results shown in Table 43. The RBF-RICI parameters are set to the values found to be optimal in the previous analysis for each signal of length 1024 samples. As can be seen in Table 43, the filtering performance generally improves with increasing signal length, i.e., the filtering errors are reduced and the SNR is increased. The only exceptions are the MAXE, which does not always follow this declining trend, and the Sing signal, whose specific high-amplitude, narrow-width peak is not adequately registered at lower sampling rates. As expected, the algorithm's runtime increases for longer signals.

Parameters Optimization
In this section, the tuning of the RBF-RICI algorithm's parameters is formalized as a single-objective optimization problem. The optimization has been performed using the described evolutionary optimization methods, with the key parameters given in Table 44.
The number of particles in the swarm, s, the maximum number of iterations, MaxIt, and the corresponding parameters have been kept equal for all considered optimization algorithms, in order to preserve an equal number of MSE evaluations and to facilitate mutual comparison. That is, the swarm sizes in the EPS-PSO and MSPSO algorithms, which use multiple swarms, have been scaled so that the total number of particles equals s.
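As an illustration of the PSO-based procedure, the following is a minimal sketch of a standard global-best PSO minimizing an objective over the parameter box 0 ≤ Γ ≤ 5, 0 ≤ R c ≤ 1. The objective is a hypothetical surrogate with a known minimum (the paper's actual objective is the estimation MSE of the RBF-RICI-filtered signal), and the swarm size, iteration budget, and coefficients below are illustrative, not the Table 44 settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate objective with a known minimum at (3.0, 0.5); in the paper the
# objective is the estimation MSE of the RBF-RICI-filtered signal.
def objective(x):
    return (x[0] - 3.0) ** 2 + 4.0 * (x[1] - 0.5) ** 2

lo, hi = np.array([0.0, 0.0]), np.array([5.0, 1.0])  # 0<=Gamma<=5, 0<=R_c<=1
s, max_it = 20, 100            # swarm size s and iteration budget MaxIt (illustrative)
w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients (illustrative)

x = rng.uniform(lo, hi, size=(s, 2))            # particle positions
v = np.zeros_like(x)                            # particle velocities
pbest = x.copy()                                # personal best positions
pbest_f = np.array([objective(p) for p in x])   # personal best values
g = pbest[np.argmin(pbest_f)].copy()            # global best position

for _ in range(max_it):
    r1, r2 = rng.random((s, 2)), rng.random((s, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)                  # keep particles inside the box
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print(g)  # converges toward the minimum at (3.0, 0.5)
```

The total number of objective (MSE) evaluations is s x (MaxIt + 1), which is why keeping s and MaxIt equal across the compared algorithms equalizes their evaluation budgets.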
The computational results, including the average (Mean), the best (Best), the worst (Worst), the standard deviation (Std. Dev.), and the median (Median), obtained by the PSO-based and GA-based optimization algorithms applied to the parameter optimization problem for each considered noisy signal at the SNR of 7 dB, are given in Table 45. The convergence results for 50 independent runs are shown in Figure 8. All considered optimization algorithms have performed well for the Blocks signal. However, the PSO-based algorithms have a slight advantage over the GA with respect to the Worst, Std. Dev., and convergence, as shown in Figure 8a. All optimization algorithms have found the global optimum multiple times in 50 independent runs.
Similarly high optimization performance is obtained for the Bumps signal, with the difference that the GA performs as well as the PSO-based algorithms in this case. The exception is the MSPSO algorithm, which provides the poorest performance with respect to the Mean, Worst, and Std. Dev. Additionally, Figure 8b clearly shows the poorer convergence of the MSPSO algorithm. On the other hand, the EPS-PSO shows excellent performance, converging to the global optimum in each independent run. Similar to the previous case, all optimization algorithms have found the global optimum at least once in 50 independent runs. Excellent optimization performance is also obtained for the Doppler signal: all considered optimization algorithms have found the global optimum numerous times in 50 independent runs, and the PSO and the EPS-PSO have converged to the global optimum in each optimization run.
The results obtained by the considered optimization algorithms differ the most for the HeaviSine signal example. Although all optimization algorithms have found the global optimum at least once in 50 independent runs, the obtained numerical results and graphical representation of the convergence suggest that the GA and the MSPSO have performed worse than the PSO and the EPS-PSO algorithms, with the EPS-PSO highlighted again as the best performing algorithm with respect to the obtained results statistics and convergence (Figure 8d).
High optimization performance is likewise obtained for the Piece-Regular signal, similar to the Doppler signal example. The only difference is that the MSPSO has replaced the EPS-PSO as the best performing algorithm, along with the PSO, which has again found the global optimum in each independent run.
The optimization results for the final signal example, the Sing signal, highlight the MSPSO and, for the first time, the GA as the best performing algorithms with respect to the statistics and convergence, shown in Figure 8f. All considered optimization algorithms achieve equal Best and Worst solutions in 50 independent runs.
To sum up, all tested optimization algorithms perform well on our optimization problem, successfully finding the global optimum for all signal examples. The larger single-swarm size in the PSO algorithm boosts the convergence during the initial evaluations. However, the EPS-PSO algorithm, with its greater ability to escape from local optima, stands out as the best performing optimization algorithm for our signal examples.
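For reference, the per-algorithm statistics reported in Table 45 (Mean, Best, Worst, Std. Dev., Median) are computed from the final objective values of the 50 independent runs. The sketch below uses illustrative random placeholder values, not the paper's results.

```python
import numpy as np

# Illustrative final MSE values from 50 hypothetical independent optimization
# runs (random placeholders, not the values reported in Table 45).
rng = np.random.default_rng(1)
runs = rng.normal(loc=0.010, scale=0.001, size=50)

stats = {
    "Mean": runs.mean(),
    "Best": runs.min(),             # minimization: the best run attains the lowest MSE
    "Worst": runs.max(),
    "Std. Dev.": runs.std(ddof=1),  # sample standard deviation across runs
    "Median": np.median(runs),
}
for name, value in stats.items():
    print(f"{name}: {value:.6f}")
```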

Experimental Results for Real-Life Signals
In order to test the proposed RBF-RICI filtering algorithm in real-life conditions, we have applied it to noisy measured maritime signals as an example of a practical application. The measurements were obtained from a buoy located in the Atlantic Ocean, approximately 210 nautical miles west-southwest of Slyne Head off the west coast of Ireland. The data are provided by Met Éireann, Ireland's National Meteorological Service, as an open-access dataset publicly available at https://data.gov.ie/dataset/hourly-data-for-buoy-m6 (accessed on 17 April 2021). The dataset contains hourly measurements from 2006 to the present. Figure 9a shows the noisy measurements of the sea temperature (°C), while Figure 9b,c show the sea temperature signal obtained after applying the RBF-RICI and the Savitzky-Golay filtering algorithms, respectively. The RBF-RICI algorithm's parameters are set to Γ = 3.2 and R c = 0.9. All real-life maritime signals considered in this analysis are filtered using the Savitzky-Golay filter with a second-order polynomial and the window width set to 2% of the signal length. This particular window width is chosen because widths of that order of magnitude proved to be optimal in the analysis performed for several synthetic signals.
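The Savitzky-Golay baseline described above (second-order polynomial, window width 2% of the signal length) can be reproduced with SciPy's `savgol_filter`. The sketch below applies it to a synthetic noisy signal standing in for the buoy measurements, rounding the window width down to the nearest odd integer for compatibility with all SciPy versions.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for a noisy measured signal (e.g., hourly sea temperature);
# the real buoy data are available from the Met Eireann dataset cited above.
rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 4 * np.pi, n)
noisy = np.sin(t) + 0.2 * rng.standard_normal(n)

# Window width set to 2% of the signal length, rounded to an odd integer
# (savgol_filter requires an odd window in its default 'interp' mode).
window = max(3, int(0.02 * n) // 2 * 2 + 1)
filtered = savgol_filter(noisy, window_length=window, polyorder=2)

print(window)  # 41 for n = 2000
```

A fixed window is the key limitation this baseline carries: unlike the RBF-RICI algorithm, whose RICI rule adapts the kernel width locally, the Savitzky-Golay filter applies the same window over the whole signal.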
The measurements of the significant wave height (m) are shown in Figure 10a, while Figure 10b,c show the same data after application of the RBF-RICI and Savitzky-Golay filtering algorithms, respectively. In this case, the RBF-RICI parameters are set to Γ = 3 and R c = 0.1. Figure 11a shows the noisy measurements of the wave direction (°), whereas Figure 11b shows the wave direction signal obtained after applying the RBF-RICI filtering algorithm, whose parameters are tuned to the values Γ = 0.6 and R c = 1. Figure 11c shows the signal obtained after application of the Savitzky-Golay filter.
The measurements of the individual maximum wave height (m) are shown in Figure 12a, whereas Figure 12b shows the same data after application of the RBF-RICI filtering algorithm, whose parameters are set to the values Γ = 4.9 and R c = 0.95. The Savitzky-Golay filtered signal is shown in Figure 12c.
As shown in Figures 9-12, the RBF-RICI filtering algorithm reduces the noise level in the real-life measurements and successfully reconstructs the main morphological features of the underlying signals, performing competitively with the conventionally applied Savitzky-Golay filtering algorithm. The application of the RBF-RICI filtering therefore enables better observation of the useful information and trends in the measurement data, and the signals filtered in this way may be used for further analysis and processing. Moreover, the RBF-RICI algorithm's parameters Γ and R c may be set to the values found to be optimal for simulated signals of similar morphologies, and can be additionally adjusted to achieve different levels of the filtered signal's smoothness. The algorithm's parameters obtained by this data-driven approach may then be successfully used for filtering signals of the same type and similar characteristics.

Conclusions
In this paper, we proposed an adaptive RBF-RICI filtering algorithm whose parameters are adjusted using a PSO-based procedure. The analysis of the RBF-RICI algorithm's filtering performance was performed using several synthetic noisy signals, showing that the algorithm is efficient in noise suppression and filtering error reduction. Moreover, comparing the proposed algorithm with similar filtering algorithms, we found that it provides better or competitive filtering performance in most considered test cases. Finally, we applied the proposed algorithm to noisy measured maritime data, demonstrating its potential for successful practical application. Other possible applications in the maritime sector include signals obtained by different ship measurement and detection sensors and systems, meteorological sea data, navigational data, etc. Moreover, the application of the proposed PSO-enhanced RBF-RICI filtering algorithm is not limited to the maritime transport sector, but may also be generalized to other fields dealing with nonstationary data.