Article

A Novel Reconstruction Method for Measurement Data Based on MTLS Algorithm

Tianqi Gu, Chenjie Hu, Dawei Tang and Tianzhi Luo
1 School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
2 Centre for Precision Technologies, University of Huddersfield, Huddersfield HD1 3DH, UK
3 CAS Key Laboratory of Mechanical Behaviour and Design of Materials, Department of Modern Mechanics, University of Science and Technology of China, Hefei 230022, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6449; https://doi.org/10.3390/s20226449
Submission received: 10 October 2020 / Revised: 4 November 2020 / Accepted: 6 November 2020 / Published: 12 November 2020
(This article belongs to the Section Physical Sensors)

Abstract

Reconstruction methods for discrete data, such as Moving Least Squares (MLS) and Moving Total Least Squares (MTLS), have advanced considerably with the progress of modern industrial technology. Although MLS and MTLS offer good approximation accuracy, neither is a robust model reconstruction method: their construction principles allow outliers in the data to distort the local approximation, so outliers cannot be processed effectively. This paper proposes an improved method, called the Moving Total Least Trimmed Squares (MTLTS), to achieve more accurate and robust estimation. By applying the Total Least Trimmed Squares (TLTS) method within the orthogonal construction of the proposed MTLTS, both the outliers and the random errors in all variables of the measurement data can be effectively suppressed. The results of the numerical simulations and measurement experiments show that the proposed algorithm is superior to the MTLS and MLS methods in terms of robustness and accuracy.

1. Introduction

Benefitting from the development of reverse engineering and computer technology, meshless methods for reconstructing discrete data have been studied by many scholars, and consequently different types of meshless methods have been proposed [1,2]. Among the numerical methods, the meshless method obtains the local approximants of the entire parameter domain based only on nodal points instead of elements [3,4]. Owing to these outstanding features, it has replaced traditional estimation methods in some research fields [5,6]. Widely used meshless methods include the Moving Least Squares (MLS), smoothed particle hydrodynamics, and the radial basis function method, among which the MLS method is one of the most popular [7].
After years of development, the MLS method has been employed to solve engineering and scientific problems in many fields [8,9,10]. For example, Dabboura et al. used the MLS method to solve the generalized Kuramoto–Sivashinsky equation [11]. Amirfakhrian and Mafikandi applied the MLS method to approximate parametric curves and verified its reliability [7]. Lee [12] proposed an improved moving least-squares algorithm to approximate a set of unorganized points with a smooth curve free of self-intersections, using a Euclidean minimum spanning tree, region expansion, and a refining iteration. Belytschko et al. [13] first presented the Element-Free Galerkin (EFG) method by combining the weak form of the Galerkin method with the MLS method; it obtains the approximation function using only the nodes and has been applied to fracture and elasticity problems. Compared with the Finite Element Method (FEM), the EFG method avoids volumetric locking and improves the convergence rate and stability, and it remains an important application of the MLS method. For example, the EFG method was employed to solve Signorini boundary value problems by Li et al. [14], where the difficulty of nonlinear inequality restrictions was handled by a projection iterative scheme; compared with the FEM, this scheme improves both the convergence rate and the calculation accuracy. Following the EFG method, MLS-based meshless methods have developed rapidly, yielding many classic methods [15,16], for example, the meshless local Petrov–Galerkin method [7], the improved element-free Galerkin method [17], the moving least square reproducing kernel method [18], and the direct meshless local Petrov–Galerkin method [19].
Although the MLS method has been widely used owing to its good fitting properties [20,21], it also has drawbacks, especially in approximating curves and surfaces. As a reconstruction method, it obtains the local coefficients by the least squares (LS) method, which assumes that errors exist only in the dependent variable of the measurement data [22,23]. To take the random errors of all variables into account [24,25], Scitovski et al. [26] generalized the traditional algorithms based on the work of Lancaster et al. and put forward the Moving Total Least Squares (MTLS) method. According to data processing and error theory, the MTLS method is the logical choice for handling the errors-in-variables (EV) model. Nevertheless, neither of these two approaches is a robust method for discrete data reconstruction [10,27]: even a single outlier in the discrete data can seriously degrade the accuracy of the reconstruction. In the practical application of 3D digital model reconstruction, the data are always associated with errors generated in both data collection and processing [28]. Therefore, the reconstruction results deviate to different degrees depending on how contaminated the data are.
Since the occurrence of outliers in the data is inevitable, it is critical to identify and process them to limit their negative effects on geometric model reconstruction, so that the fitted data better reflect the true information [29]. There are two common approaches for handling outliers. One is to set a threshold and remove the data with larger deviations, which may lead to the loss of true information [30]. The other is to correct the outliers by down-weighting them, which risks leaving different degrees of contamination. Specifically, the threshold in the former approach, obtained by probability and statistics [31], directly determines whether the reconstructed model can reflect the geometric feature information of the true object; the selection of the threshold must therefore be reasonable, which is hard to achieve under different conditions. In the latter approach, assigning suitably small weights to the outliers in the measurement data is likewise difficult, because the impact of the re-weighting is hard to predict exactly [32].
To suppress the influence of outliers, we propose an improved reconstruction algorithm called the Moving Total Least Trimmed Squares (MTLTS) method for accurate curve and surface profile analysis, in which the TLTS method based on Singular Value Decomposition (SVD) [33] is employed to deal with the abnormal data in the influence domain. The remainder of the paper is structured as follows: Section 2 briefly describes the MLS and MTLS methods; Section 3 introduces the principle of the MTLTS method; Section 4 gives the results of the numerical simulations and measurement experiments together with a brief analysis; and Section 5 presents the conclusions.

2. Introduction to the Basic Theory

2.1. MLS Method

The conventional LS method, considered a global method, has become a de facto standard for curve fitting, especially in the field of coordinate metrology. However, it cannot express local feature information accurately when the model is complicated. Compared with the LS method, the MLS method resembles a piecewise method in which each evaluation point has its own local fitting domain. If the local data are sufficient to reflect the true feature information, the MLS method has good local approximation properties and the ability to fit with a low-order basis [34].
Consider nodes {θ1, θ2, …, θN} and {ϑ1, ϑ2, …, ϑN} in a bounded region Ω of the space R^D [35]. The MLS approximation function f^h at each point θ is defined as
f^h(\theta) = \sum_{j=1}^{m} b_j(\theta)\, a_j(\theta) = b^{T}(\theta)\, a(\theta)        (1)
where a(θ) = [a1(θ), a2(θ), …, am(θ)]^T is a vector of unknown coefficients aj(θ), (j = 1, 2, …, m), b(θ) is a vector of the basis functions bj(θ), and both have dimension m. In view of the low-order fitting characteristics, we consider only the most commonly used linear least squares estimation. In this article, the basis functions for curve and surface reconstruction are b = [1, θ]^T and b = [1, θ, ϑ]^T, respectively.
To obtain the unknown optimal parameter vector a(θ), the MLS method minimizes the weighted sum of squared differences between f(θI) and f^h(θ) over all nodes [36]. The error function, based on Equation (1) with independent variable θ, is given as
J = \sum_{I=1}^{n} w\!\left( |\theta - \theta_I| / r \right) \left[ f^h(\theta) - f(\theta_I) \right]^2 = \sum_{I=1}^{n} w\!\left( |\theta - \theta_I| / r \right) \left[ \sum_{j=1}^{m} b_j(\theta_I)\, a_j(\theta) - f(\theta_I) \right]^2 = (Ba - f)^{T} W (Ba - f)        (2)
where
W = \mathrm{diag}\big( w_1(s),\; w_2(s),\; \ldots,\; w_n(s) \big), \qquad f = \big[ f(\theta_1),\; f(\theta_2),\; \ldots,\; f(\theta_n) \big]^{T}
B = \begin{bmatrix} b_1(\theta_1) & b_2(\theta_1) & \cdots & b_m(\theta_1) \\ b_1(\theta_2) & b_2(\theta_2) & \cdots & b_m(\theta_2) \\ \vdots & \vdots & & \vdots \\ b_1(\theta_n) & b_2(\theta_n) & \cdots & b_m(\theta_n) \end{bmatrix}
and s = |θ − θI|/r, where r represents the radius of the influence domain. The weight function w(s) is used to ensure a good local approximation. Its value decreases with the distance between the fitting point and the nodal points, which ensures that the value at the fitting point is affected only by the points inside the influence domain. Many types of functions meet this requirement [37], so the selection of the weight function is not fixed but is determined by the accuracy achieved, subject to continuity and smoothness. This article adopts the following function
w(s) = \begin{cases} 1 - 6s^2 + 8s^3 - 3s^4, & |s| \le 1 \\ 0, & |s| > 1 \end{cases}
This weight function is shown in Figure 1.
The weight function plays an important role in the local approximants, which provides weight values for the points in the influence domain, as shown in Figure 2.
The weight of each point in the influence domain is determined by the projection distance from that point to the nodal point, which ensures that the approximation is globally continuous and that the shape functions satisfy the compatibility condition.
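To make the construction concrete, the short sketch below (Python/NumPy, not code from the paper; the function name quartic_weight is an assumption) evaluates the quartic spline weight defined above on the normalized distance s = |θ − θI|/r.

```python
import numpy as np

def quartic_weight(s):
    """Quartic spline weight: compactly supported on |s| <= 1, where
    s = |theta - theta_I| / r is the normalized distance to the fitting point."""
    s = np.abs(np.asarray(s, dtype=float))
    w = 1.0 - 6.0 * s**2 + 8.0 * s**3 - 3.0 * s**4
    return np.where(s <= 1.0, w, 0.0)

# Weights decay smoothly from 1 at the fitting point to 0 at the edge of the domain.
print(quartic_weight([0.0, 0.25, 0.5, 0.75, 1.0]))  # approx. [1.0, 0.738, 0.313, 0.051, 0.0]
```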
Setting the partial derivative of Equation (2) with respect to a to zero, i.e.,
\frac{\partial J}{\partial a} = B^{T} W B a - B^{T} W f = 0
the weighted error of the MLS approximation function attains its extreme value. The optimal coefficient vector is then obtained as
a(\theta) = M^{-1}(\theta)\, L(\theta)\, f        (4)
where
M(\theta) = B^{T} W B, \qquad L(\theta) = B^{T} W
Substituting Equation (4) into Equation (1) yields
f^h(\theta) = b^{T}(\theta)\, a(\theta) = b^{T}(\theta)\, M^{-1}(\theta)\, L(\theta)\, f
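The following minimal sketch (Python/NumPy; an assumed helper, not the authors' implementation) evaluates the MLS approximation at a single point with the linear basis b = [1, θ]^T by solving the weighted normal equations for a(θ) and then applying Equation (1).

```python
import numpy as np

def mls_fit_point(theta, nodes, values, r, weight):
    """Evaluate the MLS approximation f_h(theta) for a 1D linear basis b = [1, theta]^T.
    'weight' is a compactly supported weight function such as quartic_weight above."""
    s = np.abs(theta - nodes) / r
    idx = s <= 1.0                                   # nodes inside the influence domain
    W = np.diag(weight(s[idx]))
    B = np.column_stack([np.ones(idx.sum()), nodes[idx]])   # basis evaluated at the nodes
    f = values[idx]
    a = np.linalg.solve(B.T @ W @ B, B.T @ W @ f)    # a(theta) = M^{-1} L f, Equation (4)
    return np.array([1.0, theta]) @ a                # f_h(theta) = b^T(theta) a(theta)
```

Evaluating this at every point of a dense grid reproduces the moving character of the method: the coefficients a(θ) are recomputed for each evaluation point.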

2.2. MTLS Method

The MLS method obtains the local approximation coefficients by least squares estimation, which assumes that the random errors follow the Gauss–Markov (GM) model. To handle random errors that exist in all variables, Scitovski et al. [26] proposed the MTLS method.
Suppose that (xj, yj), j = 1, …, n, is a set of data on the curve y = f(x). When errors occur in all variables of the measurement data, the local approximation parameters c0, c1 ∈ R of the function
y_j = f(x_j + \delta_j) + \varepsilon_j
are obtained in the sense of TLS estimation. Unlike the MLS method, the MTLS method obtains the coefficients by minimizing the weighted sum of squared orthogonal distances. In actual measured data, random errors always exist in both the dependent and independent variables. According to error theory, the MTLS method is therefore better suited than the MLS method to processing the EV model [28,38].

3. Proposed MTLTS Method

3.1. LTS Method

This part gives a brief introduction to the LTS method for a polynomial model. Given a set of data (xj, yj), j = 1, …, n, on the curve y = f(x), the model can be expressed [39] as
Y = X \beta_t + e
where X is an n × p matrix, βt (βt ∈ β, t = 1, 2, …, C_{n}^{P}) is a p × 1 regression coefficient vector, and e is the error vector. For each parameter vector βt, the residual vector is defined as rt = Y − Xβt. The unknown rt is an n-dimensional vector whose squared entries, arranged in ascending order, are r_t^2 = [r_1^2, r_2^2, …, r_n^2], with 0 ≤ r_1^2 ≤ … ≤ r_j^2 ≤ … ≤ r_n^2. Rousseeuw [40] first introduced the LTS estimator, which is defined as follows
\beta_{LTS} = \arg\min_{\beta_t \in \beta} \sum_{j=1}^{h} r_j^2
where the value of the trimming constant h ∈ (n/2, n) depends on the degree of data pollution [41]. In the calculation, the integer h equals [(n + p + 1)/2] and P = p + 1. The breakdown point is the most basic criterion for judging whether an estimator is sufficiently robust. When h = n/2, the breakdown point of the LTS estimator reaches 1/2; in particular, when h equals n, the estimator reduces to least squares and its breakdown point is close to zero [42]. This means that the modelling process can automatically eliminate the (n − h) largest residuals as long as the proportion of contaminated data does not exceed 50% [43].
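As a minimal illustration of the trimmed objective (not the authors' code; the elemental-subsample search and the helper name lts_fit are assumptions), the sketch below fits every P-point subsample by ordinary least squares and keeps the coefficient vector whose h smallest squared residuals have the minimum sum; it is only practical for small n because C(n, P) grows quickly.

```python
import numpy as np
from itertools import combinations

def lts_fit(X, Y, h):
    """Brute-force LTS sketch: candidate coefficient vectors beta_t come from
    ordinary least squares fits to every P-point subsample (P = p + 1), and the
    one minimizing the sum of the h smallest squared residuals over all n points wins."""
    n, p = X.shape
    P = p + 1
    best_beta, best_obj = None, np.inf
    for subset in combinations(range(n), P):
        idx = list(subset)
        beta, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
        r2 = np.sort((Y - X @ beta) ** 2)[:h]        # the h smallest squared residuals
        if r2.sum() < best_obj:
            best_obj, best_beta = r2.sum(), beta
    return best_beta
```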

3.2. MTLTS Method

As stated above, the MTLS method is susceptible to outliers, although it accounts for the random errors that exist in all variables. The LTS method, in turn, is robust but cannot express the local geometric feature information of a complicated model. Therefore, we propose the MTLTS method, in which the TLTS method (a combination of TLS and LTS) is employed to acquire the fitting coefficients within the influence domain (Figure 3).
For the proposed algorithm, the TLTS method is employed to determine the local optimal parameter vector in the influence domain. Let k + 1 < n, where n and k + 1 are the numbers of nodes in the whole parameter domain and in the influence domain, respectively. For an arbitrary influence domain, there are C_{k+1}^{P} subsamples in the TLTS method.
For each subsample, the SVD-based TLS method is utilized to obtain the regression coefficients βt (βt ∈ β, t = 1, 2, …, C_{k+1}^{P}). The functional model is defined as
A X = B, \qquad A = A_1 + \Delta A, \qquad B = B_1 + \Delta B
where A1 and B1 are the true values, A and B represent the actual measured values, and the errors between them are ∆A and ∆B.
An augmented matrix C is made for the subsample and the SVD of C is described by
C := W \, [\, A \;\; B \,] = U \Sigma V^{T}
where W = diag(w(x − x1), w(x − x2), …, w(x − xP)) is the weight matrix, the right singular matrix is V = [V1, V2, …, V_{P+1}] with V_{P+1} = [v_{1,P+1}, v_{2,P+1}, …, v_{P+1,P+1}]^T, and the singular value matrix is Σ = diag(σ1, σ2, …, σ_{P+1}).
If σ_P > σ_{P+1}, the TLS solution is unique and can be obtained by the following formula [24,25]
\beta_t = -\frac{1}{v_{P+1,P+1}} \left[ v_{1,P+1},\; v_{2,P+1},\; \ldots,\; v_{P,P+1} \right]^{T}
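A compact SVD-based TLS helper of this form might look as follows; this is a sketch under the paper's notation, with the assumed name tls_svd, and the sign convention follows the standard TLS solution read from the last right singular vector.

```python
import numpy as np

def tls_svd(A, B, W=None):
    """SVD-based total least squares for A x ~ B: the solution is read from the
    last right singular vector of the (optionally weighted) augmented matrix C = W [A  B]."""
    C = np.column_stack([A, B])
    if W is not None:
        C = W @ C                       # weight matrix of the influence domain
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1, :]                       # right singular vector of the smallest singular value
    # beta_t = -[v_{1,P+1}, ..., v_{P,P+1}] / v_{P+1,P+1}; assumes v_{P+1,P+1} != 0
    return -v[:-1] / v[-1]
```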
The squared residuals can be obtained from the local coefficient vector and arranged in ascending order as r_t^2 = [d_1^2, d_2^2, …, d_j^2, …, d_P^2], with 0 ≤ d_1^2 ≤ … ≤ d_j^2 ≤ … ≤ d_P^2. The TLTS method differs from the traditional LTS method in that it accounts for the random errors existing in all variables: each distance d_j^2 (j ∈ [1, 2, …, P]) is the squared residual in the orthogonal direction. The sum of the squared residuals of the smallest h-subset of each subsample is then defined as
S_t = \sum_{j=1}^{h} d_j^2
The coefficient matrix β = [β1, β2, …, β_{C_{k+1}^{P}}] can be obtained by repeating the calculation for every subsample. The TLTS estimation then determines the corresponding optimal coefficient vector by finding the subsample with the smallest trimmed sum. The estimation is defined as
\beta_{TLTS} = \arg\min_{\beta_t \in \beta} S = \arg\min_{\beta_t \in \beta} \left\{ S_1, S_2, \ldots, S_t, \ldots, S_{C_{k+1}^{P}} \right\}
Moving the evaluation point throughout the domain and repeating the previous steps, with the estimation at each point performed independently, yields the reconstructed curve or surface. In this paper, we set h = [(k + p + 2)/2].
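Putting the pieces together, the sketch below evaluates MTLTS at a single point for a 1D linear basis. It is a hedged reconstruction of the procedure described above, not the authors' implementation, and it reuses the quartic_weight and tls_svd helpers sketched earlier; the subsample size (here k + 1 − drop) and the orthogonal-residual formula for a line are assumptions chosen to match the description.

```python
import numpy as np
from itertools import combinations

def mtlts_fit_point(x0, xs, ys, r, weight, drop=1):
    """One MTLTS evaluation: enumerate the C(k+1, P) subsamples of the influence
    domain, fit each by weighted SVD-based TLS, score it by the trimmed sum of its
    h smallest squared orthogonal residuals, and keep the best local coefficients."""
    s = np.abs(xs - x0) / r
    idx = np.flatnonzero(s <= 1.0)               # the k+1 nodes in the influence domain
    P = len(idx) - drop                          # subsample size, e.g. C_{k+1}^{k} for drop = 1
    h = (len(idx) + 3) // 2                      # h = [(k + p + 2)/2], k + 1 = len(idx), p = 2
    best_beta, best_obj = None, np.inf
    for sub in combinations(idx, P):
        sub = list(sub)
        W = np.diag(weight(s[sub]))
        A = np.column_stack([np.ones(P), xs[sub]])
        beta = tls_svd(A, ys[sub], W)            # local coefficients [c0, c1] for this subsample
        # squared orthogonal distances from the subsample nodes to the fitted line
        d2 = (ys[sub] - (beta[0] + beta[1] * xs[sub])) ** 2 / (1.0 + beta[1] ** 2)
        obj = np.sort(d2)[: min(h, P)].sum()     # trimmed sum S_t
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta[0] + best_beta[1] * x0      # fitted value at x0
```

Moving x0 over a grid of evaluation points and collecting the fitted values yields the reconstructed curve; the surface case follows the same pattern with the planar basis [1, x, y].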

4. Case Study

To validate the data fitting performance of the MTLTS method, numerical simulations as well as experimental examples are given in this section. In the numerical simulations, the test data are generated by artificially adding random errors and outliers. The spline weight function introduced above is applied in all cases.

4.1. Case 1

Take the function
y = 1.1 \left( 1 - x + 2x^2 \right) e^{-x^2 / 4.5}        (14)
as an example. A uniformly distributed set of nodes (xj, yj) satisfying Equation (14) is first selected. The data (xjm, yjm) are then obtained by adding outliers (0, Δyi) and random errors (δj, ɛj) to (xj, yj), where the random errors follow a zero-mean normal distribution.
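For reference, a hypothetical reproduction of this data-generation step might look as follows (Python/NumPy sketch; the noise variances are taken from Table 1, while the number, positions, and magnitudes of the outliers are illustrative assumptions, since the paper does not list them, and the test function uses the reconstructed form of Equation (14)).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 201
x = np.linspace(-5.0, 5.0, n)                            # uniformly distributed nodes
y = 1.1 * (1.0 - x + 2.0 * x**2) * np.exp(-x**2 / 4.5)   # Equation (14)

# zero-mean Gaussian errors added to both coordinates (variances as in Table 1)
xm = x + rng.normal(0.0, np.sqrt(1e-6), n)               # delta_j
ym = y + rng.normal(0.0, np.sqrt(1e-3), n)               # epsilon_j

# a handful of artificial outliers (0, delta_y_i) injected into the y values only;
# their count and amplitude here are assumed for illustration
out = rng.choice(n, size=10, replace=False)
ym[out] += rng.choice([-1.0, 1.0], size=10) * rng.uniform(0.5, 1.0, size=10)

r = (xm[-1] - xm[0]) * 3 / 100                           # radius of the influence domain
```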
The sum of absolute differences between the fitting points and the theoretical points
s = \sum_{j=1}^{n} \left| y_j - y_{jn} \right|
is employed to evaluate their performance, where yjn and yj are the fitted points and the theoretical points, respectively.
Let n = 201 and r = (xjm(201) − xjm(1)) × 3/100 in Case 1, in which xjm(1) = −5 and xjm(201) = 5. Figure 4 presents the fitting curves obtained by the MLS, MTLS, and MTLTS methods, and the sums of the differences for these methods under different conditions are shown in Table 1. The points marked in Figure 4 are outliers. In the cases presented in this paper, a relatively large number of outliers are placed over the whole domain to test the proposed algorithm.
The fitting accuracy can also be evaluated by the Root Mean Square (RMS) value. The results, shown in Table 2, are consistent with the sums of absolute differences. To avoid repetition, RMS values are not reported for the other cases.

4.2. Case 2

Take the function
z = \left( x^2 - y^2 \right) / 10
and define the square area Ω = [−2.4, 2.4] × [−2.4, 2.4] as the definition domain of this case. Let n = 1681 and r = (xjm(41) + yjm(41)) × 7/100 in Case 2, in which xjm(41) = 2.4 and yjm(41) = 2.4. The surface reconstruction is evaluated by using
s = \sum_{j=1}^{n} \left| z_j - z_{jn} \right|
where zjn and zj are the fitted points and the theoretical points, respectively. Following the same approach as in Case 1, the fitting results of the three methods under different random conditions are shown in Table 3 and Figure 5.
From Figure 4 and Figure 5, it can be seen that MLS and MTLS are not robust model reconstruction methods: outliers strongly influence the estimation of nearby fitting points and can even distort the results. In comparison, the sum of differences of the MTLTS method is much smaller in the presence of contaminated data. To validate the fitting accuracy of the MTLTS method when there are no abnormal points in the discrete data, the same curve function as in Case 1 is used to generate data, this time without outliers. As shown in Figure 6, the curves reconstructed by the three methods all provide good approximations. However, the results listed in Table 4 and Figure 7 show that the fitting differences of the MTLTS method are clearly lower than those of the other two methods.
To compare the CPU times of the MTLTS, MLS, and MTLS methods, Case 1 is taken as an example and MATLAB is used to test the computational load of these algorithms. All procedures are run on a PC with an Intel(R) Core(TM) i7 2.7/2.9 GHz processor and 8 GB RAM (Santa Clara County, CA, USA). The results are shown in Table 5.

4.3. Case 3

To further verify the performance of the MTLTS method, it is also applied to fit measurement data obtained by a precision measurement platform, as shown in Figure 8.
The measurement system is based on the LM50 laser-interferometric gauging probe and measures the surface profile of the processed workpiece. The employed point-contact ruby probe has a low contact pressure while offering high measurement accuracy. At the planned layout points, the surface profile data of the workpiece are obtained by the X-axis stage and the LM50. The X-axis has a repeat positioning error of about 41 nm and the sensor has a repeatability error of around 127 nm. As shown in Figure 9, the measurement data were obtained experimentally by measuring the profile of an optical flat, which has a peak-to-valley (PV) value of 31 nm.
The measurement length is 90 mm and the total number of sampling points is 91. The MLS, MTLS, and MTLTS methods are applied to process the experimental data, and the TLTS method is used for linear regression. The corresponding straightness values are then used to compare the performance of these methods. The fitting results of the MTLTS method with different C_{k+1}^{P} parameters are shown in Figure 10. The straightness values obtained by the three reconstruction methods are listed in Table 6.
As shown in Table 6, MLS, MTLS, and MTLTS with different C_{k+1}^{P} parameters are applied to fit the measurement data of the optical flat, and TLS and TLTS with different C_{k+1}^{P} parameters are used for linear regression. Figure 11 shows the variation trend of the straightness values for the different combinations of curve fitting and linear regression methods.
As shown in Figure 11, the MLS and MTLS methods are both strongly influenced by the outliers. With the increase of the P value of the TLTS method, the evaluated straightness results get worse, which also illustrates the robustness of the TLTS method for linear regression. In comparison, the straightness obtained by the MTLTS method is always closest to the standard value. Furthermore, with the increase of the P value of the MTLTS method (i.e., with the decrease of the number of nodes used to determine the local approximation coefficients within a single influence domain), the results of the MTLTS method tend to be stable, which confirms the effectiveness of the proposed method.
The same measuring instrument is used to measure the generatrix of a spherical surface, as shown in Figure 12.
The radius of the spherical surface, measured by a Taylor Hobson PGI 1240 profilometer, is 254.0677 mm. The profile data are fitted by MLS, MTLS, and MTLTS, respectively. The reconstructed data are then registered to a circle by the simulated annealing algorithm. Figure 13 shows the error graphs, and the PV values of the three methods are listed in Table 7.
As shown in Table 7, the PV value obtained by the MTLTS method is significantly smaller than those of the other two methods. To verify the stability of the algorithm when different numbers of points in the influence domain are eliminated, Figure 14 shows the trend of the PV value during this process. As the P value increases, the PV values gradually stabilize.
The proposed MTLTS algorithm combines the advantages of the MTLS and LTS methods and exhibits outstanding characteristics. Even when the measurement data contain outliers, the improved method is still able to reconstruct the curve or surface from the discrete data with high accuracy. Furthermore, the comparison with the other two numerical estimation methods shows that the accuracy and robustness of the MTLTS algorithm are significantly enhanced whether or not there are outliers in the data.

5. Conclusions

In this study, a robust reconstruction algorithm for measurement data, based on the MTLS method, is presented by introducing the TLTS method into the influence domain to find the optimal local parameter vector. We studied the algorithm from both the computational and the theoretical perspective. Owing to its construction principle, the algorithm not only retains the ability to acquire shape functions with high-order continuity and consistency from a low-order basis, but also overcomes the lack of robustness that is difficult to resolve for the traditional numerical estimation methods (MLS and MTLS). To verify the fitting performance of the proposed method, all three methods were employed to fit data generated by numerical simulation and experimental measurement. The results show that the MTLTS method has significant advantages over the MTLS and MLS methods whether or not outliers are present, which demonstrates the performance of this robust algorithm.

Author Contributions

The authors of this paper are T.G., C.H., D.T. and T.L. They proposed the relevant methods and conducted the experiments and data analysis. T.L. is the project manager and put forward constructive suggestions on the algorithm. T.G. mainly refined the algorithms. C.H. contributed the design of the hardware and the formal analysis. D.T. participated in the design of the software. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 11572316 and 51605094), Anhui province science and technology major project (Grant No. 201903a07020019), the Fundamental Research Funds for the Central Universities (Grant No. WK2090050042), the Thousand Young Talents Program of China, and the Center for Micro and Nanoscale Research and Fabrication at the University of Science and Technology of China.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, B.Y. A local meshless method based on moving least squares and local radial basis functions. Eng. Anal. Bound. Elem. 2015, 50, 395–401.
2. Li, X.L.; Li, S.L. Improved complex variable moving least squares approximation for three-dimensional problems using boundary integral equations. Eng. Anal. Bound. Elem. 2017, 84, 25–34.
3. Li, X.L. Meshless Galerkin algorithms for boundary integral equations with moving least square approximations. Appl. Numer. Math. 2011, 61, 1237–1256.
4. Gu, T.Q.; Tu, Y.; Tang, D.W.; Lin, S.W.; Fang, B. A trimmed moving total least squares method for curve and surface fitting. Meas. Sci. Technol. 2020, 31, 045003.
5. Salehi, R.; Dehghan, M. A moving least square reproducing polynomial meshless method. Appl. Numer. Math. 2013, 69, 34–58.
6. Huang, Z.T.; Lei, D.; Huang, D.W.; Lin, J.; Han, Z. Boundary moving least square method for 2D elasticity problems. Eng. Anal. Bound. Elem. 2019, 106, 505–512.
7. Yeahwon, K.; Hohyung, R.; Sunmi, L.; Yeon, J.L. Joint Demosaicing and Denoising Based on Interchannel Nonlocal Mean Weighted Moving Least Squares Method. Sensors 2020, 20, 4697.
8. Amirfakhrian, M.; Mafikandi, H. Approximation of parametric curves by Moving Least Squares method. Appl. Math. Comput. 2016, 283, 290–298.
9. Wang, H.Y. Concentration estimates for the moving least-square method in learning theory. J. Approx. Theory 2011, 163, 1125–1133.
10. Liu, W.; Wang, B.Y. A modified approximate method based on Gaussian radial basis functions. Eng. Anal. Bound. Elem. 2019, 100, 256–264.
11. Dabboura, E.; Sadat, H.; Prax, C. A moving least squares meshless method for solving the generalized Kuramoto-Sivashinsky equation. Alex. Eng. J. 2016, 55, 2783–2787.
12. Lee, I.K. Curve reconstruction from unorganized points. Comput. Aided Geom. Des. 2000, 17, 161–177.
13. Belytschko, T.; Lu, Y.Y.; Gu, L. Element-free Galerkin methods. Int. J. Numer. Meth. Eng. 1994, 37, 229–256.
14. Li, X.L.; Dong, H.Y. Analysis of the element-free Galerkin method for Signorini problems. Appl. Math. Comput. 2019, 346, 41–56.
15. Wang, Q.; Zhou, W.; Feng, Y.T.; Ma, G.; Cheng, Y.G.; Chang, X.L. An adaptive orthogonal improved interpolating moving least-square method and a new boundary element-free method. Appl. Math. Comput. 2019, 353, 347–370.
16. Wang, J.F.; Sun, F.X.; Cheng, Y.M.; Huang, A.X. Error estimates for the interpolating moving least-squares method. Appl. Math. Comput. 2014, 245, 321–342.
17. Yu, S.Y.; Peng, M.J.; Cheng, H.; Cheng, Y.M. The improved element-free Galerkin method for three-dimensional elastoplasticity problems. Eng. Anal. Bound. Elem. 2019, 104, 215–224.
18. Salehi, R.; Dehghan, M. A generalized moving least square reproducing kernel method. J. Comput. Appl. Math. 2013, 249, 120–132.
19. Shokri, A.; Bahmani, E. Direct meshless local Petrov–Galerkin (DMLPG) method for 2D complex Ginzburg–Landau equation. Eng. Anal. Bound. Elem. 2019, 100, 195–203.
20. Ren, H.P.; Cheng, J.; Huang, A.X. The complex variable interpolating moving least-squares method. Appl. Math. Comput. 2012, 219, 1724–1736.
21. Mirzaei, D. Analysis of moving least squares approximation revisited. J. Comput. Appl. Math. 2015, 282, 237–250.
22. Ding, S.J.; Jiang, W.P.; Shen, Z.J. Linear-regression models and algorithms based on the Total-Least-Squares principle. Geod. Geodyn. 2012, 3, 42–46.
23. Keksel, A.; Ströer, F.; Seewig, J. Bayesian approach for circle fitting including prior knowledge. Surf. Topogr. Metrol. 2018, 6, 035002.
24. Fan, T.L.; Feng, H.Q.; Guo, G. Joint detection based on the total least squares. Procedia Comput. Sci. 2018, 131, 167–176.
25. Markovsky, I.; Huffel, S.V. Overview of total least-squares methods. Signal Process. 2007, 87, 2283–2302.
26. Scitovski, R.; Ungar, Š.; Jukic, D. Approximating surfaces by moving total least squares method. Appl. Math. Comput. 1998, 93, 219–232.
27. Levin, D. Between moving least-squares and moving least-ℓ1. BIT Numer. Math. 2015, 55, 781–796.
28. Ohlídal, I.; Vohánka, J.; Čermák, M.; Franta, D. Combination of spectroscopic ellipsometry and spectroscopic reflectometry with including light scattering in the optical characterization of randomly rough silicon surfaces covered by native oxide layers. Surf. Topogr. Metrol. Prop. 2019, 7, 45004.
29. Onoz, B.; Oguz, B. Assessment of Outliers in Statistical Data Analysis. Integr. Technol. Environ. Monit. Inf. Prod. 2003, 23, 173–180.
30. Wang, C.; Caja, J.; Gómez, E. Comparison of methods for outlier identification in surface characterization. Measurement 2018, 117, 312–325.
31. Zhang, L.; Cheng, X.; Wang, L. Ellipse-fitting algorithm and adaptive threshold to eliminate outliers. Surv. Rev. 2019, 366, 250–256.
32. Wen, W.; Hao, Z.F.; Yang, X.W. Robust least squares support vector machine based on recursive outlier elimination. Soft Comput. 2010, 14, 1241–1251.
33. Yang, S.Y.; Qin, H.B.; Liang, X.L.; Thomas, A.G. An Improved Unauthorized Unmanned Aerial Vehicle Detection Algorithm Using Radiofrequency-Based Statistical Fingerprint Analysis. Sensors 2019, 19, 274.
34. Wang, P.; Lu, Z.Z.; Tang, Z.C. Importance measure analysis with epistemic uncertainty and its moving least squares solution. Comput. Math. Appl. 2013, 66, 460–471.
35. Wang, Q.; Zhou, W.; Cheng, Y.G.; Ma, G.; Chang, X.L.; Miao, Y.; Chen, E. Regularized moving least-square method and regularized improved interpolating moving least-square method with nonsingular moment matrices. Appl. Math. Comput. 2018, 325, 120–145.
36. Taflanidis, A.A.; Cheung, S. Stochastic sampling using moving least squares response surface approximations. Probab. Eng. Mech. 2012, 28, 216–224.
37. Spandan, V.; Lohse, D.; de Tullio, M.D.; Verzicco, R. A fast moving least squares approximation with adaptive Lagrangian mesh refinement for large scale immersed boundary simulations. J. Comput. Phys. 2018, 375, 228–239.
38. Gu, T.Q.; Ji, S.J.; Lin, S.W.; Luo, T.Z. Curve and surface reconstruction method for measurement data. Measurement 2016, 78, 278–282.
39. Li, X.F.; Coyle, D.; Maguire, L.; McGinnity, T.M. A Least Trimmed Square Regression Method for Second Level fMRI Effective Connectivity Analysis. Neuroinformatics 2013, 11, 105–118.
40. Rousseeuw, P.J. Least Median of Squares Regression. J. Am. Stat. Assoc. 1984, 79, 871–880.
41. Roozbeh, M.; Babaie-Kafaki, S.; Sadigh, A.N. A heuristic approach to combat multicollinearity in least trimmed squares regression analysis. Appl. Math. Model. 2018, 57, 105–120.
42. Čížek, P. Reweighted least trimmed squares: An alternative to one-step estimators. Test 2013, 22, 514–533.
43. Mount, D.M.; Netanyahu, N.S.; Piatko, C.D.; Silverman, R.; Wu, A.Y. On the Least Trimmed Squares Estimator. Algorithmica 2014, 69, 148–183.
Figure 1. The compact support of the weight function.
Figure 2. The definition of the weight function in the influence domain.
Figure 3. Schematic graph of the Moving Total Least Trimmed Squares (MTLTS) method.
Figure 4. The fitting curves of the three methods in Case 1.
Figure 5. The fitting and error surfaces of the three methods in Case 2. (a1) The fitting surface of MLS; (a2) the error surface of MLS; (b1) the fitting surface of MTLS; (b2) the error surface of MTLS; (c1) the fitting surface of MTLTS; (c2) the error surface of MTLTS.
Figure 6. Fitting curves obtained by the Moving Least Squares (MLS), Moving Total Least Squares (MTLS), and MTLTS methods.
Figure 7. The trend of the s values.
Figure 8. Precision measurement platform. (a) Measurement platform; (b) schematic diagram.
Figure 9. The measurement experiment on the optical flat. (a) Measurement of the optical flat; (b) measurement data of the optical flat.
Figure 10. The fitting curves of the MTLTS method. (a) MTLTS (C_{k+1}^{k}); (b) MTLTS (C_{k+1}^{k-1}); (c) MTLTS (C_{k+1}^{k-2}); (d) MTLTS (C_{k+1}^{k-3}).
Figure 11. The trend of the straightness values.
Figure 12. The measurement of the generatrix of the spherical surface.
Figure 13. The error graphs processed by the three methods.
Figure 14. The PV value trend graph.
Table 1. The fitting results by three methods in Case 1.

Variance δj   Variance ɛj   MLS        MTLS       MTLTS
0.000001      0.001         5.361290   4.942031   0.830636
0.00001       0.001         5.360357   4.941088   0.829239
0.0001        0.001         5.360257   4.940890   0.829474
0.001         0.001         5.364536   4.951253   0.844504
0.001         0.0001        5.361439   4.944982   0.838307
0.001         0.00001       5.363066   4.947494   0.838965
0.001         0.000001      5.362450   4.947406   0.839584
Table 2. The fitting results by three methods in Case 1.

Variance δj   Variance ɛj   RMS (MLS)   RMS (MTLS)   RMS (MTLTS)
0.000001      0.001         0.041089    0.046397     0.005262
0.00001       0.001         0.041040    0.046296     0.005187
0.0001        0.001         0.041022    0.046294     0.005258
0.001         0.001         0.041020    0.046289     0.005256
0.001         0.0001        0.041011    0.045915     0.005300
0.001         0.00001       0.040943    0.046178     0.005124
0.001         0.000001      0.041071    0.046482     0.005138
Table 3. The fitting results by three methods in Case 2.

Variance δj   Variance ɛj   MLS        MTLS       MTLTS
0.000001      0.001         2.380559   2.595104   0.959207
0.00001       0.001         2.383201   2.601161   0.960295
0.0001        0.001         2.385662   2.602565   0.959230
0.001         0.001         2.449716   2.695886   1.025807
0.001         0.0001        2.269477   2.373187   0.724637
0.001         0.00001       2.274174   2.556756   0.735747
0.001         0.000001      2.264168   2.360344   0.721755
Table 4. The results of three methods for the data without outliers.

Variance δj   Variance ɛj   MLS        MTLS       MTLTS
0.000001      0.001         1.447804   0.802516   0.793830
0.00001       0.001         1.447193   0.801830   0.792997
0.0001        0.001         1.448305   0.803013   0.794529
0.001         0.001         1.452026   0.811992   0.807419
0.001         0.0001        1.450294   0.808405   0.802493
0.001         0.00001       1.448076   0.806652   0.800710
0.001         0.000001      1.449720   0.807843   0.801968
Table 5. The corresponding CPU times of MTLTS, MLS, and MTLS for Case 1.

Number of Points   MLS (s)    MTLS (s)   MTLTS (s)
201                0.1560     0.0156     0.9516
501                1.5288     0.0312     2.7300
1001               5.2728     0.0780     3.2136
2001               16.0369    0.2496     5.0388
Table 6. The straightness values of three methods (nm).

Linear Regression \ Curve Fitting   MLS   MTLS   MTLTS (C_{k+1}^{k})   MTLTS (C_{k+1}^{k-1})   MTLTS (C_{k+1}^{k-2})   MTLTS (C_{k+1}^{k-3})
TLS                                 503   581    448                   443                     431                     428
TLTS (C_{k+1}^{k})                  549   633    459                   442                     428                     427
TLTS (C_{k+1}^{k-1})                558   642    469                   445                     434                     430
TLTS (C_{k+1}^{k-2})                586   670    496                   453                     451                     437
TLTS (C_{k+1}^{k-3})                591   675    513                   464                     459                     439
Table 7. The peak-to-valley (PV) values of three methods.

Methods   PV Value (nm)
MLS       3348
MTLS      3392
MTLTS     2705

