Abstract
This paper describes methods for optimal filtering of random signals that involve large matrices. We developed a procedure that significantly decreases the computational load associated with the numerical implementation of the associated filter and increases its accuracy. The procedure is based on the reduction of a large covariance matrix to a collection of smaller matrices. This is done in such a way that the filter equation with large matrices is equivalently represented by a set of equations with smaller matrices. The filter we developed minimizes the associated error over all admissible matrix coefficients. As a result, the proposed optimal filter has two degrees of freedom that increase its accuracy. They are associated, first, with the optimal determination of the matrix coefficients and, second, with an increase in the number p of components in the filter. The error analysis and results of numerical simulations are provided.
Keywords: large covariance matrices; least squares linear estimate; singular value decomposition; error minimization
MSC: 62M20; 65C60; 15A09
1. Introduction
Preliminaries
Although there is an extensive literature on methods for the filtering of random signals, the problem we consider differs from the known ones. The objective of this paper is to formulate and solve the filtering problem under scenarios that involve a reduction in computational complexity and an increase in filtering accuracy for large covariance matrices.
Let $\Omega$ be a set of outcomes in a probability space $(\Omega, \Sigma, \mu)$ for which $\Sigma$ is a $\sigma$-algebra of measurable subsets of $\Omega$ and $\mu$ is an associated probability measure. Let $\mathbf{x} \in L^2(\Omega, \mathbb{R}^m)$ be a signal of interest and $\mathbf{y} \in L^2(\Omega, \mathbb{R}^n)$ be an observable signal. Here, $L^2(\Omega, \mathbb{R}^m)$ is the space of square-integrable functions defined on $\Omega$ with values in $\mathbb{R}^m$. We write $\mathbf{y} = [\mathbf{y}_1^T, \dots, \mathbf{y}_p^T]^T$, where $\mathbf{y}_j \in L^2(\Omega, \mathbb{R}^{n_j})$, for $j = 1, \dots, p$, with $n_1 + \dots + n_p = n$. W.l.o.g., we assume that all random vectors have zero mean. Let us denote
$$E_{uv} = \mathbb{E}\big[\mathbf{u}\,\mathbf{v}^T\big], \qquad (1)$$
where $E_{uv}$ is the covariance matrix formed from random vectors $\mathbf{u}$ and $\mathbf{v}$. Let the filter under consideration be represented by a bounded linear transformation $\mathcal{F}: L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ such that $[\mathcal{F}(\mathbf{y})](\omega) = M[\mathbf{y}(\omega)]$, for each $\omega \in \Omega$, where $M \in \mathbb{R}^{m \times n}$. Henceforth, we will simply write $\mathcal{F}(\mathbf{y}) = M\mathbf{y}$ in each case.
For a matrix $A$, we denote its Moore–Penrose inverse, also known as the pseudoinverse, by $A^{\dagger}$. Matrices $I$ and $O$ represent the identity and zero matrices, respectively, with appropriate dimensions in each context. The Frobenius norm of matrix $A$ is denoted by $\|A\|_F$.
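As a small illustration of this notation (the example matrix and variable names below are ours, chosen only for demonstration), the following Matlab fragment computes a pseudoinverse, checks the four Moore–Penrose conditions and evaluates a Frobenius norm:

```matlab
% Illustration of the Moore-Penrose pseudoinverse and the Frobenius norm.
A  = [1 2; 2 4; 0 1];             % small example matrix
Ap = pinv(A);                     % Moore-Penrose inverse of A

% The four Moore-Penrose conditions; each residual should be ~0:
c1 = norm(A*Ap*A  - A,  'fro');   % A * A^+ * A   = A
c2 = norm(Ap*A*Ap - Ap, 'fro');   % A^+ * A * A^+ = A^+
c3 = norm((A*Ap)' - A*Ap, 'fro'); % A * A^+ is symmetric
c4 = norm((Ap*A)' - Ap*A, 'fro'); % A^+ * A is symmetric
fprintf('MP residuals: %.1e %.1e %.1e %.1e\n', c1, c2, c3, c4);

nF = norm(A, 'fro');              % Frobenius norm of A
```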
The well-known problem of finding the linear optimal filter is formulated as follows. Given large covariance matrices $E_{xy}$ and $E_{yy}$, find $M$ that solves
$$\min_{M \in \mathbb{R}^{m \times n}} \mathbb{E}\big[\|\mathbf{x} - M\mathbf{y}\|_2^2\big]. \qquad (2)$$
The minimal Frobenius norm solution to (2) is given by (see, for example, [,,,,])
$$M = E_{xy} E_{yy}^{\dagger}. \qquad (3)$$
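For reference, a minimal Matlab sketch of this direct filter is as follows; it assumes the covariance matrices are available as arrays Exy (of size m-by-n) and Eyy (n-by-n), and the synthetic data and variable names are ours:

```matlab
% Direct (baseline) filter: minimal Frobenius norm solution of M*Eyy = Exy.
m = 50; n = 400;
Eyy = randn(n); Eyy = (Eyy*Eyy')/n;  % synthetic symmetric PSD stand-in for E_yy
Exy = randn(m, n);                   % synthetic stand-in for E_xy

M1 = Exy * pinv(Eyy);                % via the pseudoinverse, as in (3)
M2 = lsqminnorm(Eyy', Exy')';        % the same minimal norm solution,
                                     % computed without forming pinv(Eyy)
estimate = @(y) M1 * y;              % filter estimate for an observation y
```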
2. Statement of the Problem
The problem we consider is as follows. Given large $E_{xy}$ and $E_{yy}$, find a computational procedure for solving
$$\min_{M_1, \dots, M_p} \mathbb{E}\Big[\big\|\mathbf{x} - \textstyle\sum_{j=1}^{p} M_j \mathbf{y}_j\big\|_2^2\Big] \qquad (4)$$
that significantly decreases the computational complexity of the evaluation compared to that needed for the filter represented by (3).
The problem in (4) can also be interpreted as a system-identification problem [,,]. In this case, $\mathbf{y}_1, \dots, \mathbf{y}_p$ are the inputs of the system, $\mathcal{F}$ is the input–output map of the system, and $\mathbf{x}$ is the system output. For example, in environmental monitoring, $\mathbf{y}_j$ may represent random data coming from nodes measuring variations in temperature, light or pressure, and $\mathbf{x}$ may represent a random signal obtained after the received data have been merged in order to estimate required data parameters to within a prescribed degree of accuracy. Similar scenarios arise in target localization and tracking [].
As mentioned above, the covariance matrices are assumed to be known. This assumption is a well-known and significant limitation in problems dealing with the optimal filtering of random signals. See [,] in this regard. The covariances can be estimated in various ways, and particular techniques are given in many papers. We cite [,,,,,] as examples. Estimation of covariance matrices is an important problem that is not considered in this paper.
Motivation—Relation to Known Techniques
In (3), the dimensions m and n of $\mathbf{x}$ and $\mathbf{y}$, respectively, can be very large. In a number of important applied problems, such as those in biology, ecology, finance, sociology and medicine (see, e.g., [,]), the dimensions reach many thousands and greater. For example, measurements of gene expression can contain hundreds of samples. Each sample can contain tens of thousands of genes. As a result, in such cases, the associated covariance matrices in (3) are very large. Computation of a large pseudo-inverse matrix results in a quite slow computational procedure and can cause the computer to run out of memory. In particular, the well-known phenomenon of the curse of dimensionality [] states that many problems become exponentially difficult in high dimensions. Therefore, our main motivation is to develop a method that decreases the computational load associated with the optimal filter in the case of high-dimensional random signals.
We also show that the proposed method allows us to increase the associated accuracy compared to that of the direct method.
Differences between the proposed procedure and known related methods are as follows. The recursive pseudo-inverse computation [,,] avoids the direct recomputation of a large matrix pseudoinverse by iteratively processing horizontal matrix blocks of increasing size; the pseudoinverse of the first block of the sequence is assumed to be known. The computational cost remains high in high-dimensional settings. Moreover, this method does not improve the associated accuracy as the proposed filter does when the degree p increases.
Randomized algorithms, such as those based on low-rank approximation via random sampling (e.g., Randomized SVD), provide approximate solutions to regression or filtering problems with fewer operations. Some references are given in [,,]. These algorithms involve nondeterministic error, whereas the method in this paper is exact. The accuracy of the randomized algorithms strongly depends on the number of random samples and can be unstable. The algorithm proposed in this paper systematically improves accuracy by increasing the number of blocks p while maintaining error control.
Techniques based on sparse matrix factorization exploit matrix sparsity to reduce computational complexity [,,]. Their effectiveness depends on the matrix being naturally sparse; otherwise, the advantage is lost. The approach in this paper does not require any matrix sparsity.
Further, in the case of high-dimensional signals, some other known dimensionality-reduction techniques can be used (such as those, for example, in [,,]) as an intermediate step for the filtering. At the same time, those techniques also involve large pseudo-inverse covariance matrices of the same size as that in (3). Additionally, when a dimensionality-reduction procedure is applied, intrinsic associated errors appear. The errors may involve, in particular, an undesirable distortion of the data. Thus, an application of dimensionality-reduction techniques to the problem under consideration does not allow us to avoid computation of large pseudo-inverse covariance matrices. Therefore, we wish to develop techniques that reduce the computation of large pseudoinverses without involving any additional associated error.
The following example illustrates the motivating problem. Simulations in this example and also in Example 2 of Section 4.1.7 were run on a Dell Latitude 7400 Business Laptop (manufactured in 2020) with CPU (central processing unit) parameters as follows: processor type, Intel Core i7 8th Gen, 8665U; processor speed, 1.90 GHz (TurboBoost up to 4.80 GHz) and memory, 16 GB DDR4 SDRAM.
Example 1.
Let $A \in \mathbb{R}^{q \times q}$ be a matrix whose entries are normally distributed with mean 0 and variance 1. In Table 1 and Table 2, we provide the time (in seconds) needed for Matlab R2024b to compute the matrix $A^{\dagger}$ for different values of q and for two different commands that compute the pseudoinverse, pinv and lsqminnorm.
Table 1.
Time needed for Matlab to compute $A^{\dagger}$ for different dimensions q using the command pinv.
Table 2.
Time needed for Matlab to compute $A^{\dagger}$ for different dimensions q using the command lsqminnorm.
We observe that computation of a large pseudo-inverse matrix by the commands pinv and lsqminnorm requires about 63 or 70 s, respectively, while computation of five pseudo-inverse matrices of substantially smaller sizes requires considerably less time. Thus, a procedure that would allow us to reduce the computation of $A^{\dagger}$ to the computation of, say, five pseudo-inverse matrices of smaller sizes would be faster. At the same time, such a procedure would additionally involve associated matrix multiplications that are also time-consuming. This observation is further developed in Section 4.1.2 and Section 4.1.7.
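The kind of timing experiment behind Table 1 and Table 2 can be sketched as follows; the dimension q and the five-block split below are illustrative only and are not the values used in the tables:

```matlab
% Timing sketch: one large pseudoinverse vs. five smaller ones.
q = 4000;                              % illustrative dimension
A = randn(q);
tic; Pbig = pinv(A); t_big = toc;      % one q-by-q pseudoinverse

b = q/5;                               % five blocks of size (q/5)-by-(q/5)
tic;
for k = 1:5
    Pk = pinv(randn(b));               % five much cheaper pseudoinverses
end
t_small = toc;
fprintf('pinv %d: %.2f s; five pinv %d: %.2f s\n', q, t_big, b, t_small);
```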
3. Contribution and Novelty
The main conceptual novelty of the approach developed in this paper is in the technique that allows us to decrease the computational load needed for the numerical realization of the proposed filter and reduce the associated memory usage. The procedure is based on the reduction of a large matrix to a collection of smaller matrices. This is done such that the filter equation with large matrices is equivalently represented by a set of equations with smaller matrices. As a result, as shown in Section 4.1.7, the proposed p-th-degree filter requires computation of p pseudo-inverse matrices of sizes much smaller than the size of the large matrix $E_{yy}$. In this regard, see Section 4.1.2 for more details. The associated Matlab code is given in Section 4.1.7.
In Section 2, Section 3 and Section 4, we also show that the accuracy of the proposed filter increases with increases in the degree of the filter. Details are provided in Section 4.1.2. In particular, the algorithm for the optimal determination of the filter matrices is given in Section 4.1.7. Note that the algorithm is easy to implement numerically. The error analysis associated with the filter under consideration is provided in Section 4.1.3, Section 4.1.5 and Section 4.1.8.
Remark 1.
Note that for every outcome $\omega \in \Omega$, realizations $\mathbf{x}(\omega)$ and $\mathbf{y}(\omega)$ of signals $\mathbf{x}$ and $\mathbf{y}$, respectively, occur with certain probabilities. Thus, the random signals $\mathbf{x}$ and $\mathbf{y}$ are associated with infinite sets of realizations. For different values of ω, the realizations $\mathbf{x}(\omega)$ and $\mathbf{y}(\omega)$ are, in general, different, and in many cases will span the entire spaces $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Therefore, the filter under consideration can be interpreted as a map under which probabilities are assigned to each realization of $\mathbf{x}$ and $\mathbf{y}$. Importantly, the filter is invariant with respect to the outcome ω, and is therefore the same for all realizations $\mathbf{x}(\omega)$ and $\mathbf{y}(\omega)$ with different outcomes ω.
4. Solution of the Problem
We wish to assume in (4) that the observable signal never reduces to the zero vector. To this end, we need to extend the known definition of the linear independence of vectors, as follows.
Let $A_1, \dots, A_p$ be some matrices. For $k = 1, \dots, p$, let $N(A_k)$ be the null space of matrix $A_k$.
A linear combination in the generalized sense of vectors $v_1, \dots, v_p$ is a vector
$$A_1 v_1 + \dots + A_p v_p.$$
The linear combination is considered nontrivial if $v_k \notin N(A_k)$ for at least one $k \in \{1, \dots, p\}$. Note that if $v_k \neq \mathbf{0}$, then it is still, of course, possible that $A_k v_k = \mathbf{0}$. Thus, condition $A_k v_k = \mathbf{0}$ is more general than condition $v_k = \mathbf{0}$.
Definition 1.
Random vectors $v_1, \dots, v_p$ are called linearly independent in the generalized sense if there is no nontrivial linear combination in the generalized sense of these vectors equal to the zero vector; in other words, if
$$A_1 v_1(\omega) + \dots + A_p v_p(\omega) = \mathbf{0}$$
for almost all $\omega \in \Omega$ implies that $A_k v_k(\omega) = \mathbf{0}$, for each $k = 1, \dots, p$, and almost all $\omega \in \Omega$.
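For instance, with $p = 1$, $A_1 = \mathrm{diag}(1, 0)$ and $v_1 = (0, 1)^T$, we have $v_1 \neq \mathbf{0}$ while $A_1 v_1 = \mathbf{0}$, since $v_1 \in N(A_1)$; such a combination is trivial in the generalized sense even though the vector itself is nonzero.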
All random vectors considered below are assumed to be linearly independent in the generalized sense.
4.1. Solution of Problem in (4)
In (3), matrix $M$ is the minimum Frobenius norm solution of the equation
$$M E_{yy} = E_{xy}. \qquad (7)$$
Here, we exploit the specific structure of the original system (7) to develop a special block elimination procedure that allows us to reduce the original system of Equation (7) to a collection of independent smaller subsystems. Their solution requires less computational load than that needed for (3).
We provide a specific solution of Equation (9) in the following way. First, in Section 4.1.1, a generic basis for the solution is given. In Section 4.1.4, the solution of Equation (9) is considered for $p = 3$. This is a preliminary step in the solution of Equation (9) for an arbitrary p provided in Section 4.1.6.
4.1.1. Generic Basis for the Solution of Problem (7): Case $p = 2$ in (9). Second-Degree Filter
To provide a generic basis for the solution of Equation (9), let us consider the case $p = 2$ in (4), i.e., the second-degree filter. Then, (9) becomes
We denote
Recall that it has been shown in [] that, for any random vectors g and h,
Proof.
The latter implies (13). □
Remark 2.
Theorem 1.
For $p = 2$, the minimal Frobenius norm solution of problem (7) is represented by
Proof.
It follows from (23) that the second-degree filter requires computation of two pseudo-inverse matrices of sizes that are smaller than the size of $E_{yy}$ (provided, of course, that both block dimensions are smaller than n).
4.1.2. Decrease in Computational Load
We wish to compare the computational load needed for a numerical realization of the second-degree filter given, for $p = 2$, by (23), with that of the filter represented by (3).
By ([], p. 254), the product of $m \times n$ and $n \times q$ matrices requires, for large m, n and q, about $2mnq$ flops. For large q, the Golub–Reinsch SVD used for computation of the $q \times q$ matrix pseudoinverse requires $O(q^3)$ flops (see [], p. 254).
In (11), for $p = 2$, the evaluation of the updated block implies the evaluation of one block pseudoinverse, block matrix multiplications and a matrix subtraction; the computation of the second updated block requires a similar count. In (23), the computation of the two filter components implies further block pseudoinverses and block matrix multiplications. Let us denote by $c(\cdot)$ the number of flops required to compute an operation. In total, the number of flops needed for the numerical realization of the second-degree filter is the sum of the counts $c(\cdot)$ of these block-sized operations, each of which is at most cubic in the block dimensions.
The computational load required by the filter given by (3) is dominated by the computation of the large pseudoinverse $E_{yy}^{\dagger}$. It is customary to choose blocks of equal sizes. In this case, clearly, the flop count of the second-degree filter is smaller than that of (3). That is, for sufficiently large m and q, the proposed second-degree filter requires roughly several times fewer flops than the filter given by (3). In simulations in Matlab provided in Example 2 below, the second-degree filter takes about half the time in seconds taken by the filter given by (3).
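This comparison can be sketched numerically. In the Matlab fragment below, the count $2abc$ flops for an $(a \times b)(b \times c)$ product is the standard rough estimate, while the cubic constant for the SVD-based pseudoinverse and the assumed block operations are placeholders of ours:

```matlab
% Rough flop estimates: direct filter (3) vs. second-degree filter.
m = 1e3; n = 1e4;                    % illustrative dimensions
mulf  = @(a,b,c) 2*a*b*c;            % ~2abc flops for an (a x b)*(b x c) product
pinvf = @(q) 25*q^3;                 % rough O(q^3) cost of an SVD-based
                                     % q x q pseudoinverse (placeholder constant)

f_direct = pinvf(n) + mulf(m, n, n);            % pinv(Eyy), then Exy*pinv(Eyy)

b = n/2;                                        % two equal blocks of size n/2
f_second = 2*pinvf(b) ...                       % two block pseudoinverses
         + 6*mulf(b, b, b) + 2*mulf(m, b, b);   % assumed block multiplications

fprintf('direct: %.2e flops, second-degree: %.2e flops, ratio %.1f\n', ...
    f_direct, f_second, f_direct/f_second);
```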
In Section 4.1.4 and Section 4.1.6, the procedure described in Section 4.1.1 is extended to the case of filters of higher degrees. For given n, this implies, obviously, smaller sizes of the blocks of the matrices $E_{yy}$ and $E_{xy}$ than those for the second-degree filter and, as a result, a further decrease in the associated computational load.
4.1.3. Error Associated with the Second-Degree Filter Determined by (23)
Now, we wish to show that the error associated with the second-degree filter determined, for $p = 2$, by (23) is less than the error associated with the filter determined by (3).
Let us denote
Theorem 2.
Let and be such that
where and . Then,
Proof.
It is known (see, for example, [,,,,]) that, for M determined by (3), and for $M_1$ and $M_2$ determined by (23),
where and, bearing in mind (2), . Therefore, by (23),
Then,
At the same time,
Remark 3.
The condition in (29) can, of course, be practically satisfied. The following Corollary 1 demonstrates such a case.
Corollary 1.
Let and be a nonzero signal. Then, (30) is always true.
4.1.4. Solution of Equation (9) for
To clarify the solution procedure for Equation (9) for an arbitrary p, let us now consider the particular case $p = 3$. The solution is based on the development of the procedure represented by (21) and (22).
For $p = 3$, Equation (9) is as follows:
First, consider a reduction of (35) to an equation with a block-lower triangular matrix.
Step 1. By (12), for , . Define
Now, in (35), for and , update block-matrices and to and , respectively, so that
and
where , for and , and , for . Note that by (12).
Step 2. Set
By (12), . Taking the latter into account, update block-matrices and in the LHS of (38), for and , so that
and
where and .
As a result, we finally obtain the updated matrix equation with the block-lower triangular matrix
Solution of Equation (42). Using back substitution, Equation (42) is represented by the following three equations:
where and are the original blocks in (35).
It follows from (43) that the solution of Equation (9), for $p = 3$, is represented by the minimal Frobenius norm solutions to the problems
where and are solutions to the problems in (44) and (45), respectively. Matrices , for , that solve (44)–(46) are as follows:
Thus, the third-degree filter requires computation of three pseudo-inverse matrices.
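Schematically, denoting the blocks of the block-lower triangular matrix in (42) by $T_{ij}$ and the corresponding updated right-hand side blocks by $C_j$ (these names are ours, introduced only for illustration), the back substitution produces the three minimal Frobenius norm solutions in turn:
$$M_3 = C_3 T_{33}^{\dagger}, \qquad M_2 = (C_2 - M_3 T_{32})\, T_{22}^{\dagger}, \qquad M_1 = (C_1 - M_2 T_{21} - M_3 T_{31})\, T_{11}^{\dagger},$$
so that each block of the filter is obtained from the pseudoinverse of a small block rather than from $E_{yy}^{\dagger}$.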
4.1.5. Error Associated with the Third-Degree Filter Determined by (47)–(49)
We wish to show that the error associated with the filter determined by (47)–(49) is less than the error associated with the filter determined by (3).
To this end, let us denote
where . As before, and .
Theorem 3.
Let , and be such that
Then,
Proof.
Corollary 2.
If and at least one of or is a nonzero vector, then (52) is always true.
4.1.6. Solution of Equation (9) for Arbitrary p
Here, we extend the procedure considered in Section 4.1.4 to the case of an arbitrary p. First, we reduce Equation (9) to an equation with a block-lower triangular matrix. We do this by the following steps, where each step is justified by an obvious extension of (1).
Step 1. First, in (9), for and , we wish to update blocks and to and , respectively. To this end, define
Step 2. Now, we wish to update blocks and , for and , of the matrices in the LHS of (56) and (57), respectively.
Set
We continue this updating procedure up to the final Step . In Step , the following matrices are obtained:
and
Step . By (12),
4.1.7. Solution of Equation (9)
In (76)–(78), the matrices shown are solutions to the problems in (75)–(77), respectively. On the basis of [], the minimal Frobenius norm solutions to (75)–(78) are given by
Thus, the p-th-degree filter requires computation of p pseudo-inverse matrices.
Algorithm 1 is based on formulas (58), (59), (63), (64), (71), (72) and (79)–(82).
Algorithm 1: Solution of Equation (9). [The algorithm listing is given as a figure.]
Note that the algorithm is quite simple. Its output is constructed from the successive computation of the updated block matrices.
- In line 5, the matrix is calculated as in (79).
Importantly, this matrix is of a size smaller than that of $E_{yy}$; therefore, it requires less computation than the pseudoinverse $E_{yy}^{\dagger}$ (a schematic sketch of the overall block computation is given below).
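To convey the structure of the computation, the following schematic Matlab sketch performs a generic block elimination and back substitution for the equation $M E_{yy} = E_{xy}$ with p equal blocks. It is not the exact update formulas (58)–(82) of Algorithm 1; in particular, it uses the backslash operator on diagonal blocks instead of the pseudoinverse-based updates, it assumes that n is divisible by p and that the diagonal blocks remain invertible, and all names are ours:

```matlab
function M = block_filter_sketch(Eyy, Exy, p)
% Schematic block-elimination solver for M*Eyy = Exy (illustration only).
% Assumes size(Eyy,1) is divisible by p and diagonal blocks stay invertible.
n = size(Eyy, 1);  b = n / p;
% Work with the transposed system Eyy' * M' = Exy'.
A = cell(p, p);  R = cell(p, 1);
for i = 1:p
    for j = 1:p
        A{i,j} = Eyy((j-1)*b+1:j*b, (i-1)*b+1:i*b)';   % block (i,j) of Eyy'
    end
    R{i} = Exy(:, (i-1)*b+1:i*b)';                     % block i of Exy'
end
for k = 1:p-1                 % forward elimination to block-triangular form
    for i = k+1:p
        L = A{i,k} / A{k,k};                           % multiplier block
        for j = k:p
            A{i,j} = A{i,j} - L * A{k,j};
        end
        R{i} = R{i} - L * R{k};
    end
end
Mt = cell(p, 1);              % back substitution for the blocks of M'
for k = p:-1:1
    S = R{k};
    for j = k+1:p
        S = S - A{k,j} * Mt{j};
    end
    Mt{k} = A{k,k} \ S;
end
M = cat(1, Mt{:})';           % reassemble M, of size m-by-n
end
```

For invertible $E_{yy}$, the result coincides with Exy/Eyy, while only p small b-by-b systems are factorized instead of one n-by-n system.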
Example 2.
Covariance matrices $E_{xy}$ and $E_{yy}$ are estimated by $\widehat{E}_{xy} = \frac{1}{s} X Y^T$ and $\widehat{E}_{yy} = \frac{1}{s} Y Y^T$, respectively, where the columns of $X \in \mathbb{R}^{m \times s}$ and $Y \in \mathbb{R}^{n \times s}$ are samples of $\mathbf{x}$ and $\mathbf{y}$, and s is the number of samples. Entries of $X$ and $Y$ are normally distributed with mean 0 and variance 1. We choose fixed values of m and s and consider a range of dimensions n and degrees p.
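This estimation step can be sketched in Matlab as follows; the sizes are illustrative and the variable names are ours:

```matlab
% Sample-based covariance estimates used in this example (illustrative sizes).
m = 100; n = 1000; s = 2000;     % dimensions and number of samples (ours)
X = randn(m, s);                 % s samples of the signal of interest
Y = randn(n, s);                 % s samples of the observable signal
X = X - mean(X, 2);              % enforce the zero-mean assumption
Y = Y - mean(Y, 2);
Exy_hat = (X * Y') / s;          % estimate of the cross-covariance matrix
Eyy_hat = (Y * Y') / s;          % estimate of the covariance matrix of y
```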
In Table 3 and Table 4 and Figure 1 and Figure 2, for several values of p, the computation times (in seconds) needed by the above algorithm and the code (see Appendix A) for different values of dimension n are presented. The results represented in Table 3 and Figure 1 were obtained with the command pinv used for calculation of the pseudoinverses. The results in Table 4 and Figure 2 were obtained with the command lsqminnorm.
Table 3.
Numerical results for Example 2 obtained using the command pinv.
Table 4.
Numerical results for Example 2 obtained using the command lsqminnorm.
Figure 1.
Graphical representation of the results given in Table 3.
Figure 2.
Graphical representation of the results given in Table 4.
In Figure 3, the values of the error of the proposed method are represented for different values of p and n. It follows from Figure 3 that the accuracy of the proposed method increases when p increases. Note that for $p = 1$, the proposed filter coincides with the known filter represented by (3).
Figure 3.
Graphical representation of the errors.
Recall that the simulations were performed on a Dell Latitude 7400 Business Laptop (manufactured in 2020). The computation time decreases with the increase in p. In other words, the computational load needed for the numerical realization of the proposed p-th-degree filter decreases as the degree p increases. Note that the proposed p-th-degree filter requires computation of p pseudo-inverse matrices and also the matrix multiplications determined by the above algorithm. As a result, the proposed method is faster than the known filter in (3) and also requires less memory. In particular, for the largest values of p and n considered, the proposed filter is around four times faster than the filter defined by (3).
4.1.8. The Error Associated with the Filter Represented by (79)–(82)
Let us denote by
the error associated with the filter determined by (79)–(82). We wish to analyze this error.
Theorem 4.
Let the signal of interest be a nonzero vector. Then, the error decreases as the degree p of the proposed filter increases, i.e.,
Proof.
Remark 4.
Corollary 3.
If and at least one of is not the zero vector, then (87) is always true.
5. Conclusions
In a number of important applications, such as those in biology and medicine, the associated random signals are high-dimensional. Therefore, covariance matrices formed from those signals are high-dimensional as well. Large matrices also arise because of the filter structure. As a result, the numerical realization of a filter that processes high-dimensional signals might be very time-consuming and, in some cases, might cause the computer to run out of memory. In this paper, we have presented a method of optimal filter determination that targets the case of high-dimensional random signals. In particular, we have developed a procedure that significantly decreases the computational load needed to numerically realize the filter (see Section 4.1.7 and Example 2). The procedure is based on the reduction of the filter equation with large matrices to an equivalent representation by a set of equations with smaller matrices. The associated algorithm and Matlab code have been provided. The algorithm is easy to implement numerically. We have also shown that filter accuracy improves with an increase in the filter degree, i.e., in the number of filter parameters (see (4)).
The results demonstrate that, in settings with high-dimensional random signals, a number of existing applications may benefit from the use of the proposed technique. The Matlab code is given in Appendix A. In [], the technique provided here is extended to the case of filtering that is optimal with respect to minimization over both the matrices and the random vectors involved.
Author Contributions
Conceptualization, P.H., A.T. and P.S.-Q.; methodology, P.H. and A.T.; software, A.T. and P.S.-Q.; validation, P.H. and A.T.; writing—original draft, A.T.; visualization, P.S.-Q. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors on request.
Acknowledgments
This work was supported by Vicerrectoría de Investigación y Extensión from Instituto Tecnológico de Costa Rica (Research #1440054).
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Fomin, V.N.; Ruzhansky, M.V. Abstract Optimal Linear Filtering. SIAM J. Control Optim. 2000, 38, 1334–1352.
- Hua, Y.; Nikpour, M.; Stoica, P. Optimal reduced-rank estimation and filtering. IEEE Trans. Signal Process. 2001, 49, 457–469.
- Brillinger, D.R. Time Series: Data Analysis and Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2001.
- Torokhti, A.; Howlett, P. An Optimal Filter of the Second Order. IEEE Trans. Signal Process. 2001, 49, 1044–1048.
- Scharf, L. The SVD and reduced rank signal processing. Signal Process. 1991, 25, 113–133.
- Stoica, P.; Jansson, M. MIMO system identification: State-space and subspace approximation versus transfer function and instrumental variables. IEEE Trans. Signal Process. 2000, 48, 3087–3099.
- Billings, S.A. Nonlinear System Identification—Narmax Methods in the Time, Frequency, and Spatio-Temporal Domains; John Wiley and Sons, Ltd.: New York, NY, USA, 2013.
- Schoukens, M.; Tiels, K. Identification of Block-oriented Nonlinear Systems Starting from Linear Approximations: A Survey. Automatica 2017, 85, 272–292.
- Li, D.; Wong, K.D.; Hu, Y.H.; Sayeed, A.M. Detection, classification, and tracking of targets. IEEE Signal Process. Mag. 2002, 19, 17–29.
- Mathews, V.J.; Sicuranza, G.L. Polynomial Signal Processing; John Wiley & Sons, Inc.: New York, NY, USA, 2001.
- Marelli, D.E.; Fu, M. Distributed weighted least-squares estimation with fast convergence for large-scale systems. Automatica 2015, 51, 27–39.
- Perlovsky, L.; Marzetta, T. Estimating a covariance matrix from incomplete realizations of a random vector. IEEE Trans. Signal Process. 1992, 40, 2097–2100.
- Ledoit, O.; Wolf, M. A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal. 2004, 88, 365–411.
- Ledoit, O.; Wolf, M. Nonlinear shrinkage estimation of large-dimensional covariance matrices. Ann. Stat. 2012, 40, 1024–1060.
- Vershynin, R. How Close is the Sample Covariance Matrix to the Actual Covariance Matrix? J. Theor. Probab. 2012, 25, 655–686.
- Won, J.H.; Lim, J.; Kim, S.J.; Rajaratnam, B. Condition-number-regularized covariance estimation. J. R. Stat. Soc. Ser. B 2013, 75, 427–450.
- Schneider, M.K.; Willsky, A.S. A Krylov Subspace Method for Covariance Approximation and Simulation of Random Processes and Fields. Multidimens. Syst. Signal Process. 2003, 14, 295–318.
- Leclercq, M.; Vittrant, B.; Martin-Magniette, M.L.; Boyer, M.P.S.; Perin, O.; Bergeron, A.; Fradet, Y.; Droit, A. Large-Scale Automatic Feature Selection for Biomarker Discovery in High-Dimensional OMICs Data. Front. Genet. 2019, 10, 452.
- Artoni, F.; Delorme, A.; Makeig, S. Applying dimension reduction to EEG data by Principal Component Analysis reduces the quality of its subsequent Independent Component decomposition. Neuroimage 2018, 175, 176–187.
- Bellman, R.E. Dynamic Programming, 2nd ed.; Courier Corporation: North Chelmsford, MA, USA, 2003.
- Janakiraman, P.; Renganathan, S. Recursive computation of pseudo-inverse of matrices. Automatica 1982, 18, 631–633.
- Olivari, M.; Nieuwenhuizen, F.M.; Bülthoff, H.H.; Pollini, L. Identifying time-varying neuromuscular response: A recursive least-squares algorithm with pseudoinverse. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; IEEE: Hoboken, NJ, USA, 2015; pp. 3079–3085.
- Chen, Z.; Ding, S.X.; Luo, H.; Zhang, K. An alternative data-driven fault detection scheme for dynamic processes with deterministic disturbances. J. Frankl. Inst. 2017, 354, 556–570.
- Feng, X.; Yu, W.; Li, Y. Faster matrix completion using randomized SVD. In Proceedings of the 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), Volos, Greece, 5–7 November 2018; IEEE: Hoboken, NJ, USA, 2018; pp. 608–615.
- Zhang, J.; Erway, J.; Hu, X.; Zhang, Q.; Plemmons, R. Randomized SVD methods in hyperspectral imaging. J. Electr. Comput. Eng. 2012, 2012, 409357.
- Drineas, P.; Mahoney, M.W. A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra Appl. 2007, 420, 553–571.
- Neyshabur, B.; Panigrahy, R. Sparse matrix factorization. arXiv 2013, arXiv:1311.3315.
- Qiu, J.; Dong, Y.; Ma, H.; Li, J.; Wang, C.; Wang, K.; Tang, J. NetSMF: Large-scale network embedding as sparse matrix factorization. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 1509–1520.
- Wu, K.; Guo, Y.; Zhang, C. Compressing deep neural networks with sparse matrix factorization. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3828–3838.
- Golub, G.; Van Loan, C. Matrix Computations, 4th ed.; Johns Hopkins Studies in the Mathematical Sciences; Johns Hopkins University Press: Baltimore, MD, USA, 2013.
- Friedland, S.; Torokhti, A. Generalized Rank-Constrained Matrix Approximations. SIAM J. Matrix Anal. Appl. 2007, 29, 656–659.
- Torokhti, A.; Soto-Quiros, P. Improvement in accuracy for dimensionality reduction and reconstruction of noisy signals. Part II: The case of signal samples. Signal Process. 2019, 154, 272–279.