Article

Hybrid Second Order Method for Orthogonal Projection onto Parametric Curve in n-Dimensional Euclidean Space

1 Data Science and Technology, North University of China, Taiyuan 030051, Shanxi, China
2 Department of Science, Taiyuan Institute of Technology, Taiyuan 030008, Shanxi, China
3 Center for Economic Research, Shandong University, Jinan 250100, Shandong, China
4 College of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, Guizhou, China
5 Graduate School, Guizhou Minzu University, Guiyang 550025, Guizhou, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2018, 6(12), 306; https://doi.org/10.3390/math6120306
Submission received: 16 October 2018 / Revised: 25 November 2018 / Accepted: 28 November 2018 / Published: 5 December 2018
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract:
For orthogonal projection of a point onto a parametric curve, three classic first order algorithms have been presented by Hartmann (1999), Hoschek et al. (1993) and Hu et al. (2000) (hereafter, the H-H-H method). In this research, we give a proof of the approach's first order convergence and its non-dependence on the initial value. For some special cases of divergence of the H-H-H method, we combine it with Newton's second order method (hereafter, Newton's method) to create a hybrid second order method for orthogonal projection onto a parametric curve in an n-dimensional Euclidean space (hereafter, our method). Our method essentially utilizes hybrid iteration, so it converges faster than current methods, attains second order convergence, and remains independent of the initial value. We provide some numerical examples to confirm the robustness and high efficiency of the method.

1. Introduction

In this research, we discuss the minimum distance problem between a point and a parametric curve in an n-dimensional Euclidean space, and how to obtain the closest point (footpoint) on the curve as well as its corresponding parameter, which is termed the point projection (or inversion) problem for a parametric curve in an n-dimensional Euclidean space. It is an important problem in areas such as geometric modeling, computer graphics, computer-aided geometric design (CAGD) and computer vision [1,2]. Both projection and inversion are fundamental for a series of techniques, for instance, the interactive selection of curves and surfaces [1,3], curve fitting [1,3], reconstructing curves [2,4,5] and projecting a space curve onto a surface [6]. This vital technique is also used in the ICP (iterative closest point) method for shape registration [7].
The Newton–Raphson algorithm is deemed the most classic one for orthogonal projection onto parametric curves and surfaces. Searching for the root of a polynomial by a Newton–Raphson algorithm can be found in [8,9,10,11,12]. In order to solve adaptive smoothing for standard finite unconstrained minimax problems, Polak et al. [13] have presented an extended Newton's algorithm in which a new feedback precision-adjustment rule is used. When the Newton–Raphson method converges, it does so very fast and with high precision. However, the result relies heavily on a good initial guess in the neighborhood of the solution.
Meanwhile, the classic subdivision method consists of several procedures: firstly, subdivide the NURBS curve or surface into a set of Bézier sub-curves or patches and eliminate redundant or unnecessary Bézier sub-curves or patches; then, obtain the approximate candidate points; finally, obtain the closest point by comparing the distances between the test point and the candidate points. This technique is reflected in [1]. Using new exclusion criteria within the subdivision strategy, the robustness of the projection of points onto NURBS curves and surfaces in [14] is improved over that in [1], but the criterion is sometimes too strict. Zou et al. [15] use subdivision minimization techniques which rely on the convex hull property of the Bernstein basis to compute the minimum distance between two point sets. They transform the problem into solving a system of nonlinear equations in n variables, which can be represented in the tensor product Bernstein basis. Cohen et al. [16] develop a framework for implementing general successive subdivision schemes for nonuniform B-splines, generating the new vertices and knot vectors of the refined polygon. Piegl et al. [17] repeatedly subdivide a NURBS surface into four quadrilateral patches and project the test point onto the closest quadrilateral until the parameter can be found from the closest quadrilateral. Using multivariate rational functions, Elber et al. [11] construct a solver for a set of geometric constraints represented by inequalities. When the dimension of the solution set is greater than zero, they subdivide the multivariate function(s) so as to bound the function values within a specified domain. Derived from [11] but with more efficiency, a hybrid parallel method in [18] exploits both the CPU and the GPU multi-core architectures to solve systems of multivariate constraints.
These GPU-based subdivision methods essentially exploit the parallelism inherent in the subdivision of multivariate polynomials, and the geometric-based algorithm improves in performance over existing CPU-based subdivision methods. Two blending schemes in [19] efficiently remove no-root domains and hence greatly reduce the number of subdivisions: for a given system of nonlinear equations, the sought functions are simple linear combinations of the system's functions whose Bernstein–Bézier control points all share the same sign, which certifies a no-root domain. During the subdivision process, such functions can be created continuously to discard no-root domains. As a result, van Sosin et al. [20] efficiently solve various complex piecewise polynomial systems with zero or inequality constraints in zero-dimensional or one-dimensional solution spaces. Based on their own works [11,20], Bartoň et al. [21] propose a new solver for non-constrained (piecewise) polynomial systems. Two termination criteria are applied in the subdivision-based solver: the no-loop test and the single-component test. Once the two termination criteria are satisfied, the solver obtains domains containing a single monotone univariate solution. The advantage of these methods is that they can find all solutions; their disadvantage is that they are computationally expensive and may need many subdivision steps.
The third class of classic methods for orthogonal projection onto parametric curves and surfaces is geometric methods. They are mainly classified into eight types: the tangent method [22,23], the torus patch approximating method [24], circular or spherical clipping methods [25,26], a culling technique [27], root-finding with Bézier clipping [28,29], curvature information methods [6,30], a repeated knot insertion method [31] and a hybrid geometry method [32]. Johnson et al. [22] use tangent cones to search for regions satisfying the distance extrema conditions and then solve the minimum distance between a point and a curve, but it is not easy to construct tangent cones in all cases. A torus patch approximation approach for point projection onto surfaces is given in [24]; for this pure geometry method, it is difficult to achieve high precision in the final iterative parametric value. A circular clipping method removes the curve parts outside a circle centered at the test point, and the radius of the elimination circle shrinks until the termination criteria are satisfied [26]. Similar to the algorithm in [26], a spherical clipping technique for computing the minimum distance to a clamped B-spline surface is provided in [25]. A culling technique to remove superfluous curves and surfaces containing no projection of the given point is proposed in [27], in line with the idea in [1]. Since [1,25,26,27] use Newton's method in the last step, special cases of non-convergence may occur. In view of the convex-hull property of Bernstein–Bézier representations, the problem to be solved can be formulated as a univariate root-finding problem. Given a C^1 parametric curve c(t) and a point p, the projection constraint can be formulated as the univariate root-finding problem ⟨c'(t), c(t) − p⟩ = 0, with the metric induced by the Euclidean scalar product in R^n.
If the curve is parametrized by a (piecewise) polynomial, then fast root-finding schemes such as Bézier clipping [28,29] can be used. The only issue is the C^1 discontinuities, which can be checked in a post-process. One advantage of these methods is that they do not need any initial guess of the parameter value. They adopt the key technique of degree reduction via clipping to yield a strip bounded by two quadratic polynomials. Curvature information is used for computing the minimum distance between a point and a parametric curve or surface in [6,30]. However, this requires the second order derivative, and the method in [30] is not suited to n-dimensional Euclidean space. Hu et al. [6] did not prove the convergence of their two algorithms; Li et al. [33] rigorously proved the convergence of the algorithm in [6] for orthogonal projection onto a planar parametric curve. Based on repeated knot insertion, Mørken et al. [31] exploit the relationship between a spline and its control polygon and present a simple and efficient method to compute zeros of spline functions. Li et al. [32] present a hybrid second order algorithm for orthogonal projection onto a parametric surface; it utilizes a composite technique and hence converges nicely with convergence order 2. Geometric methods can not only solve the problem of orthogonally projecting a point onto a parametric curve or surface, but also compute the minimum distance between parametric curves and between parametric surfaces. Li et al. [23] have used the tangent method to compute the intersection between two spatial curves. Based on the methods in [34,35], they extended it to compute the Hausdorff distance between two B-spline curves. Based on matching a surface patch from one model to the corresponding nearby surface patch on the other model, an algorithm for computing the Hausdorff distance between two freeform surfaces is presented by Kim et al. [36], where a hierarchy of Coons patches and bilinear surfaces approximating the NURBS surfaces with bounding volumes is adopted. Of course, a common feature of geometric methods is that the ultimate solution accuracy is not very high. To sum up, the existing algorithms exploit diverse techniques such as Newton's iterative method, polynomial root-finding methods, subdivision methods and geometric methods. A review of previous algorithms for the point projection and inversion problem can be found in [37].
More specifically, using the tangent line or tangent plane with first order geometric information, a classic, simple and efficient first order algorithm for orthogonal projection onto parametric curves and surfaces is proposed in [38,39,40] (the H-H-H method). However, a proof of convergence for the H-H-H method cannot be found in that literature. In this research, we make two contributions. Firstly, we prove that the algorithm is first order convergent and does not depend on the initial value, and we provide numerical examples to show its convergence behavior. Secondly, for several special cases where the H-H-H method is not convergent, we combine two methods (Newton's method and the H-H-H method) into our hybrid method. If the iterative parametric value of the H-H-H method satisfies the convergence condition of Newton's method, we switch to Newton's method to accelerate convergence. Otherwise, we continue with the H-H-H method until its iterative parametric value satisfies the convergence condition of Newton's method, and then switch as above. This algorithm not only ensures robust convergence, but also improves the convergence rate. Our hybrid method is faster than existing methods and remains independent of the initial value. Some numerical examples verify this conclusion.
The rest of this paper is arranged as follows. In Section 2, a convergence analysis of the H-H-H method is presented. In Section 3, for several special cases where the H-H-H method is not convergent, our improved method is provided, together with its convergence analysis. In Section 4, some numerical examples verify our method. In Section 5, conclusions are drawn.

2. Convergence Analysis of the H-H-H Method

In this part, we prove that the algorithm defined by Equations (2) or (3) is of first order convergence and that its convergence does not rely on the initial value. Suppose a C^2 curve c(t) = (f_1(t), f_2(t), …, f_n(t)) in an n-dimensional Euclidean space R^n (n ≥ 2) and a test point p = (p_1, p_2, …, p_n). The first order geometric method to compute the footpoint q of the test point p can be implemented as follows. Projecting the test point p onto the tangent line of the parametric curve c(t) at t = t_m yields a point q determined by c(t_m) and its derivative c'(t_m). The footpoint can be approximated as
q = c(t_m) + Δt · c'(t_m). (1)
Then,
Δt = ⟨c'(t_m), p − c(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩, (2)
where ⟨x, y⟩ is the scalar product of vectors x, y ∈ R^n. Equation (2) can also be expressed as
K_1(t_m) = t_m + ⟨c'(t_m), p − c(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩. (3)
Let t_m ← K_1(t_m), and repeatedly iterate the above process until |K_1(t_m) − t_m| is less than an error tolerance ε. This method is referred to as the H-H-H method [38,39,40]. Furthermore, convergence of this method does not depend on the choice of the initial value. According to many of our test experiments, as the iterative parametric value approaches the target parametric value α, the iteration step size becomes smaller and smaller, while the corresponding number of iterations grows.
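As an illustration (our own sketch, not from [38,39,40]), the iteration of Equation (3) can be coded for a curve supplied as callables returning c(t) and c'(t) as coordinate tuples; the curve, test point and starting value below are illustrative assumptions:

```python
import math

def dot(u, v):
    """Scalar product of two coordinate tuples."""
    return sum(a * b for a, b in zip(u, v))

def hhh_step(c, dc, p, t):
    """One first order (H-H-H) step, Equation (3):
    K1(t) = t + <c'(t), p - c(t)> / <c'(t), c'(t)>."""
    ct, dct = c(t), dc(t)
    diff = tuple(pi - ci for pi, ci in zip(p, ct))
    return t + dot(dct, diff) / dot(dct, dct)

def hhh_project(c, dc, p, t0, eps=1e-10, max_iter=500):
    """Iterate K1 until |K1(t) - t| < eps (the H-H-H method)."""
    t = t0
    for _ in range(max_iter):
        t_next = hhh_step(c, dc, p, t)
        if abs(t_next - t) < eps:
            return t_next
        t = t_next
    return t

# Example: project p = (0, 2) onto the parabola c(t) = (t, t^2);
# the closest parameter is sqrt(3/2), a root of <c'(t), p - c(t)> = 0.
alpha = hhh_project(lambda t: (t, t * t),
                    lambda t: (1.0, 2.0 * t),
                    (0.0, 2.0), 1.0)
```

The update inside `hhh_step` is exactly Equation (3); everything else is scaffolding for running it.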
Theorem 1.
The convergence order of the method defined by Equations (2) or (3) is one, and its convergence does not depend on the initial value.
Proof. 
We adopt a numerical analysis method equivalent to those in the literature [41,42]. Firstly, we deduce the expression of the footpoint q. Suppose that the parametric curve c(t) is a C^2 curve in an n-dimensional Euclidean space R^n (n ≥ 2), and that the point with parameter α is the orthogonal projection of the test point p = (p_1, p_2, …, p_n) onto the curve c(t). It is easy to obtain the relation
⟨p − h, n⟩ = 0, (4)
where h = c(α) and the tangent vector n = c'(α). In order to find the intersection (footpoint q) between the tangent line of the parametric curve c(t) at t = t_m and the perpendicular line through the test point p, we express the equation of the tangent line as:
x = c(t_m) + c'(t_m) · s, (5)
where x = (x_1, x_2, …, x_n) and s is a parameter. In addition, the vector of the line segment connecting the test point p and the point x on the tangent line is
y = p − x, (6)
where y = (y_1, y_2, …, y_n). Because the vector (6) and the tangent vector c'(t_m) of Equation (5) are orthogonal to each other, the parameter value s in Equation (5) is
s_0 = ⟨p − c(t_m), c'(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩. (7)
Substituting (7) into (5), we have
q = c(t_m) + c'(t_m) · s_0. (8)
Thus, the footpoint q = (q_1, q_2, …, q_n) is determined by Equation (8).
Secondly, we show that the method defined by (2) or (3) is first order convergent. Our proof absorbs the idea of [41,42]. Substituting (8) into (2) and simplifying, we get the relationship
Δt = ⟨p − c(t_m), c'(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩. (9)
Using Taylor’s expansion, we get
c(t_m) = B_0 + B_1 e_m + B_2 e_m^2 + o(e_m^3), (10)
c'(t_m) = B_1 + 2 B_2 e_m + o(e_m^2), (11)
where e_m = t_m − α and B_i = (1/i!) c^(i)(α), i = 0, 1, 2, …. From (10) and (11), combined with (4), the numerator of Equation (9) can be transformed as follows:
⟨p − c(t_m), c'(t_m)⟩ = L_1 e_m + L_2 e_m^2 + o(e_m^3), (12)
where L_1 = 2⟨p − B_0, B_2⟩ − ⟨B_1, B_1⟩ and L_2 = −3⟨B_1, B_2⟩. By (11), the denominator of Equation (9) can be rewritten as follows:
⟨c'(t_m), c'(t_m)⟩ = M_1 + M_2 e_m + M_3 e_m^2 + o(e_m^3), (13)
where M_1 = ⟨B_1, B_1⟩, M_2 = 4⟨B_1, B_2⟩, M_3 = 4⟨B_2, B_2⟩. Substituting Equations (12) and (13) into the right-hand side of Equation (9), we get
Δt = ⟨p − c(t_m), c'(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩ = (L_1 e_m + L_2 e_m^2 + o(e_m^3)) / (M_1 + M_2 e_m + M_3 e_m^2 + o(e_m^3)). (14)
Using Taylor’s expansion by Maple 18, and through simplification, we get
K_1(t_m) = α + (L_1/M_1 + 1) e_m + ((L_2 M_1 − L_1 M_2)/M_1^2) e_m^2 + o(e_m^3) = α + (L_1/M_1 + 1) e_m + O(e_m^2) = α + C_0 e_m + O(e_m^2), (15)
where C_0 = L_1/M_1 + 1 is the coefficient of the first order error term e_m in Equation (15). The result implies that the iterative Equations (2) or (3) are of first order convergence.
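As a numerical sanity check (our own illustration, not part of the proof), the linear error coefficient C_0 = L_1/M_1 + 1 can be observed directly. For the example curve c(t) = (t, t^2) with test point p = (0, 2), the projection parameter is α = sqrt(3/2); there B_1 = c'(α) and B_2 = (0, 1), so L_1 = 2⟨p − B_0, B_2⟩ − ⟨B_1, B_1⟩ = 1 − 7 = −6 and M_1 = 7, giving C_0 = 1/7, and the observed error ratios e_{m+1}/e_m approach that value:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def k1(t):
    # Equation (3) specialized to c(t) = (t, t^2), p = (0, 2)
    c, dc, p = (t, t * t), (1.0, 2.0 * t), (0.0, 2.0)
    diff = (p[0] - c[0], p[1] - c[1])
    return t + dot(dc, diff) / dot(dc, dc)

alpha = math.sqrt(1.5)  # projection parameter for this example
t, ratios = 1.0, []
for _ in range(8):
    t_next = k1(t)
    ratios.append((t_next - alpha) / (t - alpha))
    t = t_next
# ratios[-1] is close to C_0 = 1/7: first order (linear) convergence.
```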
Now, we show that the convergence of Equations (2) or (3) does not depend on the initial value. Our proof absorbs the idea of references [43,44]. Without loss of generality, we only prove that the convergence of Equations (2) or (3) does not depend on the initial value in the two-dimensional case; the general n-dimensional Euclidean space case is completely analogous.
Firstly, we interpret Figure 1. On the horizontal t-axis, two parameter values correspond to points on the planar parametric curve c(t): the first is t_m, and the second is the parameter value α of the orthogonal projection of the test point p onto c(t). By the iterative methods (2) or (3), the line segment connecting the point p and the point c(α) is perpendicular to the tangent line of the planar parametric curve c(t) at t = α. The footpoint q is determined by the tangent line of the planar parametric curve c(t) through the point c(t_m). Evidently, the parametric value t_{m+1} of the footpoint q can be used as the next iterative value. M is the parametric value corresponding to the midpoint of the point c(t_m) and the footpoint q.
Secondly, we prove that the convergence of Equations (2) or (3) does not depend on the initial value. Here t denotes the parameter of the first coordinate of the planar parametric curve in the two-dimensional plane. When the iterative Equations (2) or (3) start to run, suppose that the iterative parameter value satisfies the inequality t_m < α and that the corresponding parameter of the footpoint q is t_{m+1}, as shown in Figure 1. The midpoint of the two points (t_{m+1}, 0) and (t_m, 0) is (M, 0), i.e., M = (t_m + t_{m+1})/2, and, because 0 < Δt = t_{m+1} − t_m, there is an inequality t_m < M < α. Equivalently, t_m − α < t_{m+1} − α < α − t_m = −(t_m − α), which can be expressed as |e_{m+1}| < |e_m|, where e_m = t_m − α. If t_m > α, we get the same result by the same method. Thus, the iterative error relation |e_{m+1}| < |e_m| in the two-dimensional plane is demonstrated, and hence the convergence of the iterative Equations (2) or (3) does not depend on the initial value in the two-dimensional plane (see Figure 1). Furthermore, convergence of the iterative Equations (2) or (3) does not depend on the initial value in an n-dimensional Euclidean space. The proof is completed. □

3. The Improved Algorithm

3.1. Counterexamples

In Section 2, we showed that convergence of the H-H-H method does not depend on the initial value. However, there are special cases in which the H-H-H method does not converge; we now enumerate nine counterexamples.
Counterexample 1.
Consider the parametric curve c(t) = (t, 1 + t^2) and the test point p = (0, 0). The projection point and parametric value of the test point p are (0, 1) and α = 0, respectively. For many initial values, the H-H-H method fails to converge to α. When the initial values are t = −3, −2, −1.5, 1.5, 2, 3, respectively, the alternating oscillatory iteration values 0.412415429665, −0.412415429665 repeatedly appear. Furthermore, for a parametric curve c(t) = (t, 1 + a_1 t^2 + a_2 t^4 + a_3 t^6 + a_4 t^8 + a_5 t^10), a_1 ≠ 0, a_2 ≠ 0, a_3 ≠ 0, a_4 ≠ 0, a_5 ≠ 0, with p = (0, 0) and many initial values, the H-H-H method fails to converge to α (see Figure 2).
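A short numerical sketch (our own illustration) reproduces this oscillatory behavior by iterating the update of Equation (3) on c(t) = (t, 1 + t^2) with p = (0, 0); instead of converging to α = 0, the iterates settle into a sign-alternating 2-cycle:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def k1(t):
    # Equation (3) specialized to c(t) = (t, 1 + t^2), p = (0, 0)
    c, dc, p = (t, 1.0 + t * t), (1.0, 2.0 * t), (0.0, 0.0)
    diff = (p[0] - c[0], p[1] - c[1])
    return t + dot(dc, diff) / dot(dc, dc)

t = 1.5
history = []
for _ in range(60):
    t = k1(t)
    history.append(t)

# The last two iterates satisfy t_{m+1} = -t_m (a stable 2-cycle),
# so |K1(t) - t| never falls below the tolerance and the plain
# H-H-H iteration fails to reach alpha = 0.
```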
Counterexample 2.
Consider the parametric curve c(t) = (t, t^2, t^4, t^6, 1 + t^2 + t^4 + t^6 + t^8) and the test point p = (0, 0, 0, 0, 0). The projection point and parametric value of the test point p are (0, 0, 0, 0, 1) and α = 0, respectively. For any initial value, the H-H-H method fails to converge to α. When the initial values are t = −5, −4, −3, −2, −1, 1, 2, 3, 4, 5, respectively, the alternating oscillatory iteration values 0.304949569175, −0.304949569175 repeatedly appear. Furthermore, for a parametric curve c(t) = (a_0 t, a_1 t^2, a_2 t^4, a_3 t^6, 1 + a_4 t^2 + a_5 t^4 + a_6 t^6 + a_7 t^8 + a_8 t^10 + a_9 t^28), a_i ≠ 0 (i = 0, 1, …, 9), with the point p = (0, 0, 0, 0, 0) and any initial value, the H-H-H method fails to converge to α.
Counterexample 3.
Consider the parametric curve c(t) = (t, sin(t)), t ∈ [0, 3], and the test point p = (4, 9). The projection point and parametric value of the test point p are (1.842576, 0.9632946) and α = 1.842576, respectively. For the point p and any initial value, the H-H-H method fails to converge to α. When the initial values are t = −5, −4, −3, −2, −1, 1, 2, 3, 4, 5, respectively, the oscillatory iteration values 2.165320, 0.0778704, 6.505971, 9.609789 repeatedly appear. In addition, for a parametric curve c(t) = (t, sin(a t)), a ≠ 0, any test point p and any initial value, the H-H-H method fails to converge to α (see Figure 3).
Counterexample 4.
Consider the parametric curve c(t) = (t, cos(t)), t ∈ [0, 3], and the test point p = (2, 6). The projection point and parametric value of the test point p are (0.3354892, 0.9442493) and α = 0.3354892, respectively. For the test point p and any initial value, the H-H-H method fails to converge to α. When the initial value is t = 5, the alternating oscillatory iteration values 5.18741299662, 3.59425803253, −0.507188248308, 1.6901041247, 3.82746208506 repeatedly appear. When the initial value is t = 2, very irregular oscillatory iteration values such as 0.652526561595, −0.720371663877, −2.39555359952, 0.365881194752, 2.06880954777, 3.18725085474, 1.71447110647, etc., appear. In addition, for a parametric curve c(t) = (t, cos(a t)), a ≠ 0, any test point p and any initial value, the H-H-H method fails to converge to α (see Figure 4).
Counterexample 5.
Consider the parametric curve c(t) = (t, t, t, t, sin(t)), t ∈ [6, 9], and the test point p = (3, 5, 7, 9, 11). The projection point and parametric value of the test point p are (7.310786, 7.310786, 7.310786, 7.310786, 0.8560612) and α = 7.310786, respectively. For the point p and any initial value, the H-H-H method fails to converge to α. When the initial values are t = −9, −7, −5, 6, 8, respectively, the alternating oscillatory iteration values 7.24999006346, 6.37363460615 repeatedly appear. In addition, for a parametric curve c(t) = (t, t, t, t, sin(a t)), t ∈ [6, 9], a ≠ 0, with the test point p = (3, 5, 7, 9, 11) and any initial value, the H-H-H method fails to converge to α.
Counterexample 6.
Consider the parametric curve c(t) = (t, t, t, t, cos(t)), t ∈ [4, 8], and the test point p = (2, 4, 6, 8, 10). The projection point and parametric value of the test point p are (5.883406, 5.883406, 5.883406, 5.883406, 0.9211469) and α = 5.883406, respectively. For the point p and any initial value, the H-H-H method fails to converge to α. When the initial values are t = −4, −3, −2, 4, 5, 6, 7, respectively, the alternating oscillatory iteration values 4.17182145828, 7.80116702003 repeatedly appear. In addition, for a parametric curve c(t) = (t, t, t, t, cos(a t)), t ∈ [4, 8], a ≠ 0, with the point p = (2, 4, 6, 8, 10) and any initial value, the H-H-H method fails to converge. The non-convergence explanations of the three counterexamples below are similar to those of the preceding six and are omitted to save space.
Counterexample 7.
Consider the parametric curve c(t) = (t^4 + 2t^2 + 1, t^2 + 1, t^4 + 2, t^2, 3t^6 + t^4 + 2t^2) in five-dimensional Euclidean space and the test point p = (0, 0, 0, 0, 0). The projection point and parametric value of the test point p are (1, 1, 2, 0, 0) and α = 0, respectively. For any initial value t_0, the H-H-H method fails to converge. We also tested many other examples, such as when the parametric curve is completely symmetric and the point lies on the symmetry axis of the curve; for any initial value t_0, the same result holds.
Counterexample 8.
Consider the parametric curve c(t) = (t, sin(t), t, sin(t), sin(t)), t ∈ [−5, 5], in five-dimensional Euclidean space and the test point p = (3, 4, 5, 6, 7). The corresponding orthogonal projection parametric values α are −3.493548, −2.280571, 1.875969 and 4.791677. For any initial value t_0, the H-H-H method fails to converge.
Counterexample 9.
Consider the parametric curve c(t) = (sin(t), cos(t), t, sin(t), cos(t)), t ∈ [−5, 5], in five-dimensional Euclidean space and the test point p = (3, 4, 5, 6, 7). The corresponding orthogonal projection parametric values α are −4.833375, −3.058735, 0.9730030 and 3.738442. For any initial value t_0, the H-H-H method fails to converge.

3.2. The Improved Algorithm

Due to the H-H-H method's non-convergence in some special cases, an improved algorithm is presented to ensure convergence for any parametric curve, test point and initial value. The most classic Newton's method can be expressed as
t_{m+1} = t_m − f(t_m)/f'(t_m), (16)
where f(t) = ⟨T_1, V_1⟩ with T_1 = c'(t) and V_1 = p − c(t). It converges faster than the H-H-H method. However, its convergence depends on the chosen initial value: only when the local convergence condition of Newton's method is satisfied can the method achieve high effectiveness. In order to improve the robustness and rate of convergence, our method is built on the H-H-H method and combines the respective advantages of the two methods. If the iterative parametric value of the H-H-H method satisfies the convergence condition of Newton's method, we switch to Newton's method to accelerate convergence. Otherwise, we continue the H-H-H method until it generates an iterative parametric value satisfying the convergence condition of Newton's method, and then switch to the iterative process mentioned above, running to the end of the whole process. The procedure not only ensures robust convergence, but also improves the convergence rate. Using this hybrid strategy, our method is faster than current methods and independent of the initial value. Some numerical examples verify this conclusion. Our method can be realized as follows (see Figure 5).
Hybrid second order method
Input: Initial iterative value t_0, test point p and parametric curve c(t) in an n-dimensional Euclidean space.
Output: The parameter α of the orthogonal projection point.
Step 1. 
Input the initial iterative parametric value t_0.
Step 2. 
Using the iterative Equation (3), calculate the parametric value K_1(t_0) and set t_1 = K_1(t_0).
Step 3. 
Determine whether the absolute value of the difference between the current t_0 and the new t_1 is near 0. If so, the algorithm ends.
Step 4. 
Substitute the new t_1 into |f(t) f''(t)| / f'(t)^2 and determine whether |f(t_1) f''(t_1)| / f'(t_1)^2 < 1.
If (|f(t_1) f''(t_1)| / f'(t_1)^2 < 1) {
Using Newton's iterative Equation (16), compute t_0 = t_1 − f(t_1)/f'(t_1) until the absolute value of the difference between the current t_1 and the new t_0 is near 0; then, the algorithm ends.
}
Else {
  turn to Step 2.
}
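The four steps above can be sketched as follows (our own illustration; the curve and its first three derivatives are supplied by the caller as coordinate-tuple callables, and f(t) = ⟨c'(t), p − c(t)⟩ as in Equation (16), with f' and f'' obtained by the product rule):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def hybrid_project(c, dc, ddc, dddc, p, t0, eps=1e-12, max_iter=200):
    """Sketch of the hybrid second order method (Steps 1-4).

    c, dc, ddc, dddc: callables for the curve and its first three
    derivatives, each returning a coordinate tuple; p: test point.
    """
    # f(t) = <c'(t), p - c(t)> and its derivatives (product rule):
    # f'(t)  = <c''(t),  p - c(t)> - <c'(t), c'(t)>
    # f''(t) = <c'''(t), p - c(t)> - 3 <c''(t), c'(t)>
    f = lambda t: dot(dc(t), sub(p, c(t)))
    df = lambda t: dot(ddc(t), sub(p, c(t))) - dot(dc(t), dc(t))
    ddf = lambda t: dot(dddc(t), sub(p, c(t))) - 3.0 * dot(ddc(t), dc(t))

    t = t0
    for _ in range(max_iter):
        # Step 2: one H-H-H step, Equation (3)
        d = dc(t)
        t_new = t + dot(d, sub(p, c(t))) / dot(d, d)
        # Step 3: stop if the step is already negligible
        if abs(t_new - t) < eps:
            return t_new
        # Step 4: fixed point condition |f f''| / f'^2 < 1 -> Newton
        if abs(f(t_new) * ddf(t_new)) < df(t_new) ** 2:
            t = t_new
            for _ in range(max_iter):
                t_next = t - f(t) / df(t)   # Equation (16)
                if abs(t_next - t) < eps:
                    return t_next
                t = t_next
            return t
        t = t_new
    return t

# Counterexample 1, where the plain H-H-H iteration oscillates:
# c(t) = (t, 1 + t^2), p = (0, 0); the hybrid sketch reaches alpha = 0.
alpha = hybrid_project(lambda t: (t, 1.0 + t * t),
                       lambda t: (1.0, 2.0 * t),
                       lambda t: (0.0, 2.0),
                       lambda t: (0.0, 0.0),
                       (0.0, 0.0), 3.0)
```

The curve and starting value in the demonstration call are illustrative; the control flow mirrors Steps 1 through 4 above.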
Remark 1.
Firstly, we present a geometric illustration of our method in Figure 5. Figure 5a illustrates the second step of our method, where the next iterative parameter value t_{m+1} = K_1(t_m) = t_m + ⟨c'(t_m), p − c(t_m)⟩ / ⟨c'(t_m), c'(t_m)⟩ is determined by the iterative Equation (3). During the iterative process, the step Δt becomes smaller and smaller; thus, the next iterative parameter value t_{m+1} comes close to the parameter value t_m but still away from the footpoint q. If the third step of our method does not terminate, our method goes into the fourth step. Figure 5b shows the judging condition of the fixed point theorem in the fourth step of our method. If T = |f(t) f''(t)| / f'(t)^2 < 1, the method turns to Newton's iteration in Figure 5c and runs Newton's second order iteration to the end; otherwise, it returns to the second step in Figure 5a.
Secondly, we give an interpretation of the singularity case of the iterative Equation (16). For the special cases in Section 3.1 where the H-H-H method is not convergent, our method still converges. We tested many examples with arbitrary initial values, arbitrary test points and arbitrary parametric curves and found that our method is more robust than the H-H-H method. If the first order derivative f'(t_m) in the iterative Equation (16) becomes 0, i.e., f'(t_m) = 0 for some non-negative integer m, we use a perturbed method to handle this special case, adopting the idea in [23,45]; namely, f'(t_m) is increased by a very small positive number ε, i.e., f'(t_m) ← f'(t_m) + ε, and the iteration of Equation (16) is continued in order to calculate the parameter value. On the other hand, if the curve can be parametrized by a (piecewise) polynomial, then fast root-finding schemes such as Bézier clipping [28,29] are efficient; the only issue is the C^1 discontinuities, which can be checked in a post-process, and no initial guess of the parameter value is then needed.
Thirdly, if the curve is only C^0 continuous and the closest point is exactly such a point, then the derivative is not well defined and our method may fail to find it; namely, there are singular points on the parametric curve. We adopt the following technique to solve the singularity problem. We use the methods in [46,47,48] to find as many singular points on the parametric curve as possible, together with the corresponding parametric value of each singular point. Then, the hybrid second order method comes into play. If the current iterative parametric value t_m is the parametric value of a singular point, we apply a very small perturbation ε to it, i.e., t_m ← t_m + ε, so that the hybrid second order method can run normally. Then, from all candidate points (singular points and orthogonal projection points), the point whose distance to the test point is minimal is selected. When the entire program terminates, the minimum distance and its corresponding parameter value are found.

3.3. Convergence Analysis of the Improved Algorithm

In this subsection, we analyze the convergence of our method.
Theorem 2.
(Fixed Point Theorem, Reference [49])
If φ(x) ∈ C[c, d] and φ(x) ∈ [c, d] for all x ∈ [c, d], and if, furthermore, φ′(x) exists on (c, d) and a positive constant L < 1 exists with |φ′(x)| ≤ L for all x ∈ (c, d), then there exists exactly one fixed point in [c, d].
In addition, taking φ(t) = t − f(t)/f′(t), the corresponding fixed point theorem for Newton's method is as follows:
Theorem 3.
Let f : [c, d] → [c, d] be a twice differentiable function. If, for all t ∈ [c, d],
| f(t) f″(t) / f′(t)² | < 1,
then Newton's iteration expression (16) has a fixed point l₀ ∈ [c, d] such that l₀ = l₀ − f(l₀)/f′(l₀). Meanwhile, the iteration sequence {t_m} generated by expression (16) converges to the fixed point whenever t₀ ∈ [c, d].
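The hypothesis of Theorem 3 can be checked numerically by sampling the contraction quantity on [c, d]; a minimal sketch follows (the function name and sampling density are our own choices, and a finite sample is of course evidence, not a proof):

```python
def newton_contraction_bound(f, fprime, fsecond, c, d, n=1000):
    """Sample |f(t) f''(t) / f'(t)^2| on [c, d].

    If the returned maximum is < 1, the hypothesis of Theorem 3 holds
    on the sampled grid, suggesting that Newton's iteration contracts
    on [c, d]."""
    worst = 0.0
    for i in range(n + 1):
        t = c + (d - c) * i / n
        worst = max(worst, abs(f(t) * fsecond(t) / fprime(t) ** 2))
    return worst
```

For example, for f(t) = t² − 2 on [1.2, 1.6] the sampled maximum is about 0.194, so Newton's iteration contracts around the root √2.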
Theorem 4.
Our method is second order convergent.
Proof: 
Let α be a simple zero of the nonlinear function f(t) = ⟨T₁, V₁⟩ = 0, where T₁ = c′(t) and V₁ = p − c(t). Using Taylor's expansion, we have
f(t_m) = f′(α)[e_m + b₂ e_m² + b₃ e_m³ + O(e_m⁴)],
f′(t_m) = f′(α)[1 + 2b₂ e_m + 3b₃ e_m² + O(e_m³)],
where b_k = f⁽ᵏ⁾(α)/(k! f′(α)), k = 2, 3, …, and e_m = t_m − α. Combining with (15), we then have
y_m = φ(t_m) = t_m − f(t_m)/f′(t_m) = α + b₂ e_m² + O(e_m³).
This means that the convergence order of our method is 2. The proof is completed. □
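Theorem 4 can be illustrated empirically: on a toy function standing in for the orthogonality function (our own example, not one of the paper's curves), the error ratios e_{m+1}/e_m² of Newton's iteration settle near b₂ = f″(α)/(2f′(α)):

```python
import math

# Toy verification of second order convergence: f(t) = t^2 - 2 with
# root alpha = sqrt(2); the predicted asymptotic error constant is
# b_2 = f''(alpha) / (2 f'(alpha)) = 1 / (2 sqrt(2)) ~ 0.3536.
f = lambda t: t * t - 2.0
fprime = lambda t: 2.0 * t
alpha = math.sqrt(2.0)

t = 2.0
errors = [abs(t - alpha)]
for _ in range(4):
    t = t - f(t) / fprime(t)          # Newton step
    errors.append(abs(t - alpha))

# ratios e_{m+1} / e_m^2 should approach b_2
ratios = [errors[m + 1] / errors[m] ** 2 for m in range(3)]
```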
Theorem 5.
Convergence of our method does not depend on the initial value.
Proof. 
According to the description of our method, if the iterative parametric value produced by the H-H-H method satisfies the convergence condition of Newton's method, we switch to Newton's method. Otherwise, we keep running the H-H-H method until its iterative parametric value satisfies the convergence condition of Newton's method, and then switch. We then run to the end of the whole process. Theorem 1 ensures that the H-H-H stage does not depend on the initial value. If our method reaches the fourth step and the condition of the fixed point theorem (Theorem 3) holds, Newton's method is executed, and Theorem 3 then confirms that the fourth step is also independent of the initial value. In brief, the convergence of our method does not depend on the initial value throughout the whole execution of the algorithm. The proof is completed. □

4. Numerical Experiments

In order to illustrate the superiority of our method over other algorithms, we provide five numerical examples that confirm its robustness and high efficiency. In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14, the iteration termination criterion is |t_m − α| < 10⁻¹⁷ and |t_{m+1} − t_m| < 10⁻¹⁷. All numerical results were computed with g++ in a Fedora Linux 8 environment, and the approximate zero α is reported to the 17th decimal place. The results of the five examples were obtained on hardware with a T2080 1.73 GHz CPU and 2.5 GB of memory.
Example 1.
There is a parametric curve c(t) = (f₁(t), f₂(t), f₃(t)) = (6t⁷ + t⁵, 5t⁸ + 3t⁶, 10t¹² + 8t⁸ + 6t⁶ + 4t⁴ + 2t² + 3), t ∈ [−2, 2], in three-dimensional Euclidean space and a test point p = (p₁, p₂, p₃) = (2.0, 4.0, 2.0). Using our method, the corresponding orthogonal projection parametric value is α = 0.0; the initial values t₀ are 0, 2, 4, 5, 6, 8, 9, 10, respectively. For each initial value, the iteration process is run 10 times, yielding 10 running times in nanoseconds. In Table 1, the average running times of our method for the eight initial values are 536,142, 77,622, 101,481, 119,165, 126,502, 142,393, 150,801 and 156,413 nanoseconds, respectively; the overall average running time is 176,315 nanoseconds (see Figure 6). If the test point p is (2.0, 2.0, 2.0), the corresponding orthogonal projection parametric value is again α = 0.0; we replicate the procedure using our method and report the results in Table 2. In Table 2, the average running times of our method for the eight initial values are 627,996, 89,992, 119,241, 139,036, 148,269, 167,364, 171,371 and 178,554 nanoseconds, respectively; the overall average running time is 205,228 nanoseconds (see Figure 7). However, for the above two cases, the H-H-H method does not converge for any initial value.
Because there is a singular point on the parametric curve, we add some pre-processing steps before our method: (1) find the singular point (0, 0, 3) and its corresponding parametric value 0 using the methods of [21,46,47,48]; (2) using our method, compute the orthogonal projection points of the test points (2, 4, 2) and (2, 2, 2), whose corresponding parameter values are both 0; (3) from all candidate points (singular point and orthogonal projection point), select the point whose distance to the test point is minimal. In Figure 6, the blue point denotes the singular point (0, 0, 3), which is also the orthogonal projection point of the test point (2, 4, 2); the same holds for the blue point in Figure 7.
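The singular point of Example 1 is easy to verify directly; a small sketch, transcribing the curve from the text:

```python
import math

# Example 1's curve.  Its derivative vanishes at t = 0, so
# c(0) = (0, 0, 3) is the singular point found in step (1) of the
# pre-processing, and it competes with the orthogonal projection
# candidates for the minimum distance to the test point.
def c(t):
    return (6*t**7 + t**5,
            5*t**8 + 3*t**6,
            10*t**12 + 8*t**8 + 6*t**6 + 4*t**4 + 2*t**2 + 3)

def dc(t):
    return (42*t**6 + 5*t**4,
            40*t**7 + 18*t**5,
            120*t**11 + 64*t**7 + 36*t**5 + 16*t**3 + 4*t)

p = (2.0, 4.0, 2.0)
```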
Example 2.
There is a spatial quartic quasi-rational Bézier curve c(t) = (f₁(t), f₂(t), f₃(t)) = (u(t)/a(t), v(t)/a(t), w(t)/a(t)), where u(t) = 2t⁴ + 3t³ + 3t² + 12t + 1, v(t) = 4t⁴ + 3t³ + 7t² + 7t + 21, w(t) = 5t⁴ + t³ + 9t² + 11t + 13, a(t) = 4t⁴ + 8t³ + 17t² + 15t + 6, t ∈ [−2, 2], and a test point p = (p₁, p₂, p₃) = (1.0, 3.0, 5.0). The corresponding orthogonal projection parametric values α are −1.4118250062741212, 0.61917136491841674, −0.059335038305820650 and 1.8493434997820080, respectively. Using our method, the initial values t₀ are −2.4, −2.1, −2.0, −1.8, −1.6, −1.2, −1.0, −0.8, respectively. For each initial value, the iteration process is run 10 times, yielding 10 running times in nanoseconds. From Table 3, the average running times of our method for the eight initial values are 85,344, 93,936, 79,424, 62,643, 54,482, 22,982, 25,654 and 26,868 nanoseconds, respectively; the overall average running time is 56,417 nanoseconds (see Figure 8). If the test point p is (2.0, 4.0, 8.0), the corresponding orthogonal projection parametric values α are −1.2589948653798823, −0.62724968160147096, −0.14597283439336865 and 1.8584532894110559, respectively. We first replicate the procedure using our method and report the results in Table 4. From Table 4, the average running times of our method for the eight initial values are 101,436, 109,001, 95,061, 77,563, 62,366, 27,054, 29,587 and 32,501 nanoseconds, respectively; the overall average running time is 66,821 nanoseconds (see Figure 9). We then replicate the procedure using the algorithm [26] and report the results in Table 5. From Table 5, the average running times of the algorithm [26] for the eight initial values are 619,772, 654,281, 584,653, 467,856, 384,393, 163,225, 183,257 and 195,013 nanoseconds, respectively; the overall average running time is 406,556 nanoseconds. However, for the above two cases, the H-H-H method does not converge for any initial value.
Example 3.
There is a parametric curve c(t) = (f₁(t), f₂(t), f₃(t), f₄(t), f₅(t)) = (cos(t), sin(t), t, cos(t), sin(t)), t ∈ [−2, 2], in five-dimensional Euclidean space and a test point p = (p₁, p₂, p₃, p₄, p₅) = (3.0, 4.0, 5.0, 6.0, 7.0). Using our method, the corresponding orthogonal projection parametric value is α = 1.1587403612284800; the initial values t₀ are −10, −8, −6, −4, 4, 8, 12, 16, respectively. For each initial value, the iteration process is run 10 times, yielding 10 running times in nanoseconds. In Table 6, the average running times of our method for the eight initial values are 391,013, 424,444, 391,092, 249,376, 115,617, 170,212, 179,465 and 196,912 nanoseconds, respectively; the overall average running time is 264,766 nanoseconds. If the test point p is (30.0, 40.0, 50.0, 60.0, 70.0), the corresponding orthogonal projection parametric value is α = 1.2352898417860202. We then replicate the procedure using our method and report the results in Table 7. In Table 7, the average running times of our method for the eight initial values are 577,707, 485,417, 460,913, 289,232, 133,661, 199,470, 211,915 and 229,398 nanoseconds, respectively; the overall average running time is 323,464 nanoseconds. However, for the above parametric curve and many test points, the H-H-H method does not converge for any initial value.
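The orthogonality function f(t) = ⟨c′(t), p − c(t)⟩ works unchanged in any dimension. A sketch for Example 3's five-dimensional curve follows, using only the Newton stage for brevity (the full method prepends the H-H-H stage so that the result does not depend on t₀):

```python
import math

# Example 3's five-dimensional curve, as given in the text.
def c(t):
    return (math.cos(t), math.sin(t), t, math.cos(t), math.sin(t))

def dc(t):
    return (-math.sin(t), math.cos(t), 1.0, -math.sin(t), math.cos(t))

def ddc(t):
    return (-math.cos(t), -math.sin(t), 0.0, -math.cos(t), -math.sin(t))

p = (3.0, 4.0, 5.0, 6.0, 7.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def f(t):        # orthogonality function <c'(t), p - c(t)>
    return dot(dc(t), [a - b for a, b in zip(p, c(t))])

def fprime(t):   # f'(t) = <c''(t), p - c(t)> - <c'(t), c'(t)>
    return dot(ddc(t), [a - b for a, b in zip(p, c(t))]) - dot(dc(t), dc(t))

# Newton stage only, started near the reported projection parameter.
t = 1.0
for _ in range(50):
    t = t - f(t) / fprime(t)
```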
Example 4.
(From [6].) There is a parametric curve c(t) = (f₁(t), f₂(t)) = (t², sin(t)), t ∈ [−3, 3], in two-dimensional Euclidean space and a test point p = (p₁, p₂) = (1.0, 2.0). The corresponding orthogonal projection parametric value is α = 1.1063055095030472. Using our method, the initial values t₀ are −100, 4, 5, 7, 8, 10, 11, 100, respectively. For each initial value, the iteration process is run 10 times, yielding 10 running times in nanoseconds. In Table 8, the average running times of our method for the eight initial values are 62,816, 35,042, 27,648, 43,122, 21,625, 38,654, 21,518 and 72,917 nanoseconds, respectively; the overall average running time is 40,418 nanoseconds (see Figure 10). Implementing the same procedure, the overall average running time of the H-H-H method is 231,613 nanoseconds (Table 9), while that of the second order method [6] is 847,853 nanoseconds (Table 10). Thus, our method is faster than the H-H-H method [38,39,40] and the second order method [6].
Example 5.
(From [6].) There is a parametric curve c(t) = (f₁(t), f₂(t)) = (t, sin(t)), t ∈ [−3, 3], in two-dimensional Euclidean space and a test point p = (p₁, p₂) = (1.0, 2.0); the corresponding orthogonal projection parametric value is α = 1.2890239979093887. Using our method, the initial values t₀ are −100, 4, 5, 7, 8, 10, 11, 100, respectively. For each initial value, the iteration process is run 10 times, yielding 10 running times in nanoseconds. In Table 11, the average running times of our method for the eight initial values are 50,579, 28,238, 22,687, 34,974, 17,781, 31,186, 17,210 and 59,116 nanoseconds, respectively; the overall average running time is 32,721 nanoseconds (see Figure 11). We then replicate the procedure using the second order method [6] and report the results in Table 12. In Table 12, the average running times of the second order method [6] for the eight initial values are 320,035, 182,451, 147,031, 235,779, 112,090, 200,431, 113,284 and 369,294 nanoseconds, respectively; the overall average running time is 210,049 nanoseconds. In addition, we compare the number of iterations of the different methods in Table 13, where NC denotes non-convergence.
Remark 2.
From the results of the five examples, the overall average running time of our method is 145.5 μs. From the results of Table 9, the overall average running time of the H-H-H method is 231.6 μs. From the results of the six examples in [26], the overall average running times of the algorithms [1] and [14] are 680.8 μs and 1270.8 μs, respectively. From the results of Table 5, the overall average running time of the algorithm [26] is 406.6 μs. From the results of Table 10 and Table 12, the overall average running time of the algorithm [6] is 528.9 μs. Table 14 displays a running time comparison of these algorithms. In short, the robustness and efficiency of our method are superior to those of the existing algorithms [1,6,14,26,38,39,40].
Remark 3.
For a general parametric curve containing elementary functions such as sin(t), cos(t), eᵗ, ln t, arcsin t, arccos t, etc., it is very difficult to transform the curve into a Bézier-type curve. In contrast, our method can handle such general parametric curves directly. Furthermore, the convergence of our method does not depend on the initial value. From Table 13, neither the H-H-H method nor Newton's method alone can ensure convergence, while our method can. For multiple solutions of the orthogonal projection, our approach works as follows:
(1) The parameter interval [a, b] of the parametric curve c(t) is divided into M identical subintervals.
(2) An initial value is selected randomly in each subinterval.
(3) Our method is run from each initial parametric value; suppose the resulting iterative parametric values are α₁, α₂, …, α_M, respectively.
(4) Calculate the local minimum distances d₁, d₂, …, d_M, where d_i = ‖p − c(α_i)‖.
(5) Seek the global minimum distance d = ‖p − c(α)‖ from {‖p − c(a)‖, d₁, d₂, …, d_M, ‖p − c(b)‖}.
If all solutions are to be found as far as possible, the positive integer M should be taken as large as possible.
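The steps above can be sketched as follows; for illustration we replace the per-interval iteration of step (3) with a simple ternary search for the local minimum of the distance on each subinterval (our own simplification, not the paper's solver):

```python
import math

def ternary_min(g, lo, hi, iters=200):
    """Minimize a locally unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def global_projection(curve, p, a, b, M=64):
    """Steps (1)-(5): subdivide [a, b] into M subintervals, find a
    local closest point in each, then take the global minimum, also
    checking the endpoint distances ||p - c(a)|| and ||p - c(b)||."""
    def dist(t):
        return math.dist(p, curve(t))
    candidates = [a, b]
    for i in range(M):
        lo = a + (b - a) * i / M
        hi = a + (b - a) * (i + 1) / M
        candidates.append(ternary_min(dist, lo, hi))
    best = min(candidates, key=dist)
    return best, dist(best)
```

On Example 5's curve c(t) = (t, sin(t)) with p = (1, 2), this procedure recovers the projection parameter α ≈ 1.28902 reported in the text.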
We use Example 2 to illustrate how the procedure works: for t ∈ [−2, 2], three of the parameter values are −1.4118250062741212, 0.61917136491841674 and 1.8493434997820080, respectively. It is easy to find that the projection point with parameter value 0.61917136491841674 is the one with minimum distance, whereas the other projection points are not. Thus, only the orthogonal projection point with minimum distance remains after the procedure for selecting among multiple orthogonal projection points.
Remark 4.
We have run many test examples, including the five presented above. In light of these results, our method converges well for different initial values: if the initial value is t₀, then the corresponding orthogonal projection parametric value α of the orthogonal projection point of the test point p satisfies the inequality
|⟨p − c(α), c′(α)⟩| < 10⁻¹⁷.
This inequality satisfies the requirement of Equation (4) and shows that the convergence of our method does not depend on the initial value. Furthermore, our method is robust and efficient, satisfying the first two of the ten challenges proposed in [50].

5. Conclusions

This paper discusses the problem of orthogonally projecting a point onto a parametric curve in n-dimensional Euclidean space on the basis of the H-H-H method, combined with a fixed point theorem for Newton's method. Firstly, we run the H-H-H method. If the current iterative parametric value from the H-H-H method satisfies the convergence condition of Newton's method, we switch to Newton's method to increase the convergence rate; otherwise, we continue the H-H-H method until its iterative parametric value satisfies the local convergence condition of Newton's method, and then switch. We then run to the end of the whole process. The presented procedure ensures the convergence of our method and its independence of the initial value. The convergence analysis demonstrates that our method is second order convergent. Some numerical examples confirm that our method is more efficient and performs better than other methods, such as the algorithms [1,6,14,26,38,39,40].
In this paper, our discussion focuses on algorithms for C² parametric curves. For parametric curves that are only C⁰ or C¹, piecewise, or contain singular points, we present only a preliminary idea; we have not yet completely implemented an algorithm for splines of such low continuity. In the future, we will try to construct several brand new algorithms to handle splines with low continuity while ensuring very good robustness and efficiency. In addition, we will also try to extend this idea to the orthogonal projection of a point onto implicit curves and implicit surfaces that contain singular points. Of course, the realization of these ideas is very challenging, but it is of great value and significance in practical engineering applications.

Author Contributions

The contributions of all the authors are the same; all of the authors worked together to develop the current draft. J.L. is responsible for the investigation, methodology, writing, review and editing of this work. X.L. is responsible for formal analysis, visualization, writing, review and editing. F.P. is responsible for the software, algorithm and program implementation. T.C. is responsible for validation. L.W. is responsible for supervision. L.H. is responsible for resources, writing, and the original draft.

Funding

This research was funded by the National Natural Science Foundation of China Grant No. 61263034, the Feature Key Laboratory for Regular Institutions of Higher Education of Guizhou Province Grant No. 2016003, the Training Center for Network Security and Big Data Application of Guizhou Minzu University Grant No. 20161113006, the Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University Grant No. 2018479, the National Bureau of Statistics Foundation Grant No. 2014LY011, the Key Laboratory of Pattern Recognition and Intelligent System of Construction Project of Guizhou Province Grant No. 20094002, the Information Processing and Pattern Recognition for Graduate Education Innovation Base of Guizhou Province, the Shandong Provincial Natural Science Foundation of China Grant No.ZR2016GM24, the Scientific and Technology Key Foundation of Taiyuan Institute of Technology Grant No. 2016LZ02, the Fund of National Social Science Grant No. 14XMZ001 and the Fund of the Chinese Ministry of Education Grant No. 15JZD034.

Acknowledgments

We take the opportunity to thank the anonymous reviewers for their thoughtful and meaningful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, Y.L.; Hewitt, W.T. Point inversion and projection for NURBS curve and surface: Control polygon approach. Comput. Aided Geom. Des. 2003, 20, 79–99.
  2. Piegl, L.; Tiller, W. Parametrization for surface fitting in reverse engineering. Comput.-Aided Des. 2001, 33, 593–603.
  3. Yang, H.P.; Wang, W.P.; Sun, J.G. Control point adjustment for B-spline curve approximation. Comput.-Aided Des. 2004, 36, 639–652.
  4. Johnson, D.E.; Cohen, E. A framework for efficient minimum distance computations. In Proceedings of the IEEE International Conference on Robotics & Automation, Leuven, Belgium, 20 May 1998.
  5. Pegna, J.; Wolter, F.E. Surface curve design by orthogonal projection of space curves onto free-form surfaces. J. Mech. Des. ASME Trans. 1996, 118, 45–52.
  6. Hu, S.M.; Wallner, J. A second order algorithm for orthogonal projection onto curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 251–260.
  7. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  8. Mortenson, M.E. Geometric Modeling; Wiley: New York, NY, USA, 1985.
  9. Limaien, A.; Trochu, F. Geometric algorithms for the intersection of curves and surfaces. Comput. Graph. 1995, 19, 391–403.
  10. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed.; Cambridge University Press: New York, NY, USA, 1992.
  11. Elber, G.; Kim, M.S. Geometric constraint solver using multivariate rational spline functions. In Proceedings of the 6th ACM Symposium on Solid Modeling and Applications, Ann Arbor, MI, USA, 4–8 June 2001; pp. 1–10.
  12. Patrikalakis, N.; Maekawa, T. Shape Interrogation for Computer Aided Design and Manufacturing; Springer: Berlin, Germany, 2001.
  13. Polak, E.; Royset, J.O. Algorithms with adaptive smoothing for finite minimax problems. J. Optim. Theory Appl. 2003, 119, 459–484.
  14. Selimovic, I. Improved algorithms for the projection of points on NURBS curves and surfaces. Comput. Aided Geom. Des. 2006, 439–445.
  15. Zhou, J.M.; Sherbrooke, E.C.; Patrikalakis, N. Computation of stationary points of distance functions. Eng. Comput. 1993, 9, 231–246.
  16. Cohen, E.; Lyche, T.; Riesenfeld, R. Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics. Comput. Graph. Image Process. 1980, 14, 87–111.
  17. Piegl, L.; Tiller, W. The NURBS Book; Springer: New York, NY, USA, 1995.
  18. Park, C.-H.; Elber, G.; Kim, K.-J.; Kim, G.-Y.; Seong, J.-K. A hybrid parallel solver for systems of multivariate polynomials using CPUs and GPUs. Comput.-Aided Des. 2011, 43, 1360–1369.
  19. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput.-Aided Des. 2011, 43, 1870–1878.
  20. van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput.-Aided Des. 2017, 90, 37–47.
  21. Bartoň, M.; Elber, G.; Hanniel, I. Topologically guaranteed univariate solutions of underconstrained polynomial systems via no-loop and single-component tests. Comput.-Aided Des. 2011, 43, 1035–1044.
  22. Johnson, D.E.; Cohen, E. Distance extrema for spline models using tangent cones. In Proceedings of the 2005 Conference on Graphics Interface, Victoria, Canada, 9–11 May 2005.
  23. Li, X.W.; Xin, Q.; Wu, Z.N.; Zhang, M.S.; Zhang, Q. A geometric strategy for computing intersections of two spatial parametric curves. Vis. Comput. 2013, 29, 1151–1158.
  24. Liu, X.-M.; Yang, L.; Yong, J.-H.; Gu, H.-J.; Sun, J.-G. A torus patch approximation approach for point projection on surfaces. Comput. Aided Geom. Des. 2009, 26, 593–598.
  25. Chen, X.-D.; Xu, G.; Yong, J.-H.; Wang, G.Z.; Paul, J.-C. Computing the minimum distance between a point and a clamped B-spline surface. Graph. Models 2009, 71, 107–112.
  26. Chen, X.-D.; Yong, J.-H.; Wang, G.Z.; Paul, J.-C.; Xu, G. Computing the minimum distance between a point and a NURBS curve. Comput.-Aided Des. 2008, 40, 1051–1054.
  27. Oh, Y.-T.; Kim, Y.-J.; Lee, J.; Kim, Y.-S.; Elber, G. Efficient point-projection to freeform curves and surfaces. Comput. Aided Geom. Des. 2012, 29, 242–254.
  28. Sederberg, T.W.; Nishita, T. Curve intersection using Bézier clipping. Comput.-Aided Des. 1990, 22, 538–549.
  29. Bartoň, M.; Jüttler, B. Computing roots of polynomials by quadratic clipping. Comput. Aided Geom. Des. 2007, 24, 125–141.
  30. Li, X.W.; Wu, Z.N.; Hou, L.K.; Wang, L.; Yue, C.G.; Xin, Q. A geometric orthogonal projection strategy for computing the minimum distance between a point and a spatial parametric curve. Algorithms 2016, 9, 15.
  31. Mørken, K.; Reimers, M. An unconditionally convergent method for computing zeros of splines and polynomials. Math. Comput. 2007, 76, 845–865.
  32. Li, X.W.; Wang, L.; Wu, Z.N.; Hou, L.K.; Liang, J.; Li, Q.Y. Hybrid second-order iterative algorithm for orthogonal projection onto a parametric surface. Symmetry 2017, 9, 146.
  33. Li, X.W.; Wang, L.; Wu, Z.N.; Hou, L.K.; Liang, J.; Li, Q.Y. Convergence analysis on a second order algorithm for orthogonal projection onto curves. Symmetry 2017, 9, 210.
  34. Chen, X.-D.; Ma, W.Y.; Xu, G.; Paul, J.-C. Computing the Hausdorff distance between two B-spline curves. Comput.-Aided Des. 2010, 42, 1197–1206.
  35. Chen, X.-D.; Chen, L.Q.; Wang, Y.G.; Xu, G.; Yong, J.-H.; Paul, J.-C. Computing the minimum distance between two Bézier curves. J. Comput. Appl. Math. 2009, 229, 294–301.
  36. Kim, Y.J.; Oh, Y.T.; Yoon, S.H.; Kim, M.S.; Elber, G. Efficient Hausdorff distance computation for freeform geometric models in close proximity. Comput.-Aided Des. 2013, 45, 270–276.
  37. Sundar, B.R.; Chunduru, A.; Tiwari, R.; Gupta, A.; Muthuganapathy, R. Footpoint distance as a measure of distance computation between curves and surfaces. Comput. Graph. 2014, 38, 300–309.
  38. Hoschek, J.; Lasser, D. Fundamentals of Computer Aided Geometric Design; A. K. Peters: Natick, MA, USA, 1993.
  39. Hu, S.M.; Sun, J.G.; Jin, T.G.; Wang, G.Z. Computing the parameter of points on NURBS curves and surfaces via moving affine frame method. J. Softw. 2000, 11, 49–53.
  40. Hartmann, E. On the curvature of curves and surfaces defined by normal forms. Comput. Aided Geom. Des. 1999, 16, 355–376.
  41. Li, X.W.; Mu, C.L.; Ma, J.W.; Wang, C. Sixteenth-order method for nonlinear equations. Appl. Math. Comput. 2010, 215, 3754–3758.
  42. Liang, J.; Li, X.W.; Wu, Z.N.; Zhang, M.S.; Wang, L.; Pan, F. Fifth-order iterative method for solving multiple roots of the highest multiplicity of nonlinear equation. Algorithms 2015, 8, 656–668.
  43. Melman, A. Geometry and convergence of Euler's and Halley's methods. SIAM Rev. 1997, 39, 728–735.
  44. Traub, J.F. A class of globally convergent iteration functions for the solution of polynomial equations. Math. Comput. 1966, 20, 113–138.
  45. Śmietański, M.J. A perturbed version of an inexact generalized Newton method for solving nonsmooth equations. Numer. Algorithms 2013, 63, 89–106.
  46. Chen, F.; Wang, W.-P.; Liu, Y. Computing singular points of plane rational curves. J. Symb. Comput. 2008, 43, 92–117.
  47. Jia, X.-H.; Goldman, R. Using Smith normal forms and μ-bases to compute all the singularities of rational planar curves. Comput. Aided Geom. Des. 2012, 29, 296–314.
  48. Shi, X.-R.; Jia, X.-H.; Goldman, R. Using a bihomogeneous resultant to find the singularities of rational space curves. J. Symb. Comput. 2013, 53, 1–25.
  49. Burden, R.L.; Faires, J.D. Numerical Analysis, 9th ed.; Brooks/Cole Cengage Learning: Boston, MA, USA, 2011.
  50. Piegl, L.A. Ten challenges in computer-aided design. Comput.-Aided Des. 2005, 37, 461–470.
Figure 1. Geometric illustration for convergence analysis.
Figure 2. Geometric illustration for counterexample 1.
Figure 3. Geometric illustration of counterexample 3.
Figure 4. Geometric illustration of counterexample 4.
Figure 5. Geometric illustration for our method. (a) Running the H-H-H method; (b) checking whether the H-H-H iterate satisfies the convergence condition of the fixed point theorem for Newton's iterative method; (c) running Newton's iterative method.
Figure 6. Geometric illustration for the test point p = (2.0, 4.0, 2.0) of Example 1.
Figure 7. Geometric illustration for the test point p = (2.0, 2.0, 2.0) of Example 1.
Figure 8. Geometric illustration for the first case of Example 2.
Figure 9. Geometric illustration for the second case of Example 2.
Figure 10. Geometric illustration for Example 4.
Figure 11. Geometric illustration for Example 5.
Table 1. Running time (ns) for different initial values of Example 1 by our method with test point p = (2.0, 4.0, 2.0).

| t₀ | 0 | 2 | 4 | 5 | 6 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| α | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1 | 498,454 | 75,487 | 105,563 | 116,470 | 123,031 | 134,941 | 154,253 | 156,872 |
| 2 | 555,709 | 81,629 | 108,064 | 117,762 | 125,946 | 140,940 | 153,468 | 155,830 |
| 3 | 509,173 | 82,824 | 100,744 | 111,206 | 134,367 | 141,705 | 150,013 | 158,715 |
| 4 | 564,222 | 77,465 | 96,721 | 114,757 | 129,128 | 173,027 | 150,320 | 158,580 |
| 5 | 502,986 | 81,028 | 97,142 | 118,535 | 120,668 | 132,856 | 155,335 | 149,437 |
| 6 | 553,198 | 79,520 | 104,307 | 120,795 | 129,351 | 150,085 | 151,073 | 143,065 |
| 7 | 576,814 | 74,268 | 100,231 | 115,002 | 132,322 | 139,919 | 154,754 | 159,014 |
| 8 | 524,848 | 81,982 | 99,604 | 115,263 | 122,401 | 139,345 | 143,568 | 175,169 |
| 9 | 528,848 | 71,228 | 103,186 | 140,023 | 122,040 | 135,006 | 145,434 | 154,016 |
| 10 | 547,161 | 70,789 | 99,247 | 121,834 | 125,766 | 136,103 | 149,790 | 153,435 |
| Average | 536,142 | 77,622 | 101,481 | 119,165 | 126,502 | 142,393 | 150,801 | 156,413 |
| Total Average | 176,315 | | | | | | | |
Table 2. Running time (ns) for different initial values of Example 1 by our method with test point p = (2.0, 2.0, 2.0).

| t₀ | 0 | 2 | 4 | 5 | 6 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| α | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1 | 595,515 | 92,371 | 119,904 | 135,660 | 148,751 | 162,758 | 171,535 | 177,355 |
| 2 | 648,825 | 91,746 | 119,348 | 135,284 | 148,531 | 162,541 | 171,333 | 176,431 |
| 3 | 595,772 | 91,633 | 119,248 | 135,322 | 148,222 | 162,240 | 171,095 | 176,501 |
| 4 | 648,472 | 91,565 | 119,139 | 135,355 | 148,165 | 191,884 | 171,366 | 176,395 |
| 5 | 595,856 | 91,556 | 119,168 | 135,406 | 148,144 | 162,224 | 171,417 | 176,507 |
| 6 | 648,305 | 91,532 | 119,018 | 135,316 | 148,169 | 183,342 | 171,413 | 176,473 |
| 7 | 647,406 | 91,587 | 119,069 | 135,283 | 148,197 | 162,291 | 171,282 | 176,397 |
| 8 | 595,423 | 91,617 | 119,247 | 135,140 | 148,101 | 162,116 | 171,342 | 196,529 |
| 9 | 646,551 | 83,167 | 119,135 | 172,412 | 148,149 | 162,148 | 171,313 | 176,390 |
| 10 | 657,838 | 83,147 | 119,131 | 135,179 | 148,259 | 162,094 | 171,609 | 176,557 |
| Average | 627,996 | 89,992 | 119,241 | 139,036 | 148,269 | 167,364 | 171,371 | 178,554 |
| Total Average | 205,228 | | | | | | | |
Table 3. Running time (ns) for different initial values of Example 2 by our method with test point p = (1.0, 3.0, 5.0).

| t₀ | −2.4 | −2.1 | −2 | −1.8 | −1.6 | −1.2 | −1 | −0.8 |
|---|---|---|---|---|---|---|---|---|
| α | −1.4118 | 0.61917 | −1.4118 | 0.61917 | −0.059 | −0.059 | 1.84934 | 1.84934 |
| 1 | 88,695 | 90,501 | 75,137 | 68,499 | 52,014 | 24,731 | 26,295 | 28,444 |
| 2 | 89,958 | 91,254 | 79,411 | 64,563 | 54,321 | 22,014 | 26,278 | 28,024 |
| 3 | 83,956 | 95,063 | 79,553 | 63,237 | 54,683 | 22,733 | 24,813 | 28,760 |
| 4 | 83,623 | 96,033 | 82,022 | 68,075 | 51,098 | 23,270 | 24,573 | 26,707 |
| 5 | 83,368 | 95,700 | 76,197 | 63,518 | 51,752 | 22,321 | 24,644 | 26,586 |
| 6 | 83,631 | 97,303 | 80,984 | 62,608 | 53,473 | 21,658 | 24,009 | 28,209 |
| 7 | 87,286 | 94,655 | 78,483 | 66,844 | 52,277 | 23,502 | 25,554 | 28,725 |
| 8 | 87,150 | 96,316 | 79,215 | 64,333 | 51,554 | 23,217 | 26,234 | 28,295 |
| 9 | 86,300 | 89,399 | 94,487 | 66,665 | 50,279 | 23,190 | 25,791 | 26,160 |
| 10 | 89,761 | 96,377 | 82,362 | 64,371 | 50,367 | 22,332 | 23,929 | 27,273 |
| Average | 85,344 | 93,936 | 79,424 | 62,643 | 54,482 | 22,982 | 25,654 | 26,868 |
| Total Average | 56,417 | | | | | | | |
Table 4. Running time for different initial values of Example 2 by our method with test point p = (2.0, 4.0, 8.0).

| t0 | −2.4 | −2.1 | −2 | −1.8 | −1.6 | −1.2 | −1 | −0.8 |
|---|---|---|---|---|---|---|---|---|
| α | −0.6272 | −0.1459 | −0.6272 | −1.2589 | −0.1459 | 1.858 | −1.2589 | 1.858 |
| 1 | 101,366 | 109,667 | 92,799 | 77,983 | 62,865 | 29,460 | 29,755 | 32,649 |
| 2 | 102,027 | 108,844 | 92,709 | 77,477 | 62,269 | 27,177 | 29,555 | 32,458 |
| 3 | 101,526 | 109,010 | 92,709 | 77,587 | 62,284 | 26,885 | 29,619 | 32,538 |
| 4 | 101,266 | 108,909 | 92,724 | 77,441 | 62,374 | 26,785 | 29,557 | 32,478 |
| 5 | 101,346 | 108,944 | 92,714 | 77,386 | 62,214 | 26,691 | 29,559 | 32,505 |
| 6 | 101,315 | 108,990 | 92,764 | 77,557 | 62,334 | 26,731 | 29,564 | 32,497 |
| 7 | 101,415 | 108,834 | 92,614 | 77,582 | 62,415 | 26,720 | 29,573 | 32,512 |
| 8 | 101,306 | 108,945 | 92,528 | 77,461 | 62,309 | 26,715 | 29,548 | 32,493 |
| 9 | 101,562 | 108,954 | 116,107 | 77,542 | 62,284 | 26,684 | 29,549 | 32,429 |
| 10 | 101,235 | 108,910 | 92,939 | 77,616 | 62,314 | 26,690 | 29,595 | 32,451 |
| Average | 101,436 | 109,001 | 95,061 | 77,563 | 62,366 | 27,054 | 29,587 | 32,501 |

Total Average: 66,821
Table 5. Running time for different initial values of Example 2 by the algorithm [26].

| t0 | −2.4 | −2.1 | −2.0 | −1.8 | −1.6 | −1.2 | −1.0 | −0.8 |
|---|---|---|---|---|---|---|---|---|
| α | −0.6272 | −0.1459 | −0.6272 | −1.2589 | −0.1459 | 1.858 | −1.2589 | 1.858 |
| 1 | 633,173 | 660,734 | 566,675 | 470,236 | 391,687 | 171,352 | 175,965 | 198,543 |
| 2 | 597,065 | 628,741 | 565,012 | 485,368 | 367,539 | 161,649 | 185,457 | 197,798 |
| 3 | 652,494 | 675,268 | 600,951 | 463,899 | 396,359 | 163,879 | 188,682 | 187,128 |
| 4 | 649,281 | 653,066 | 573,597 | 460,967 | 385,325 | 156,876 | 182,979 | 195,214 |
| 5 | 622,109 | 687,282 | 568,766 | 472,217 | 402,669 | 170,876 | 189,508 | 202,540 |
| 6 | 633,737 | 627,667 | 562,864 | 490,735 | 374,340 | 165,445 | 175,457 | 191,037 |
| 7 | 584,705 | 637,608 | 563,523 | 468,230 | 395,411 | 163,631 | 175,676 | 187,539 |
| 8 | 607,439 | 693,001 | 585,948 | 449,706 | 400,728 | 161,467 | 189,216 | 187,433 |
| 9 | 637,036 | 639,359 | 671,613 | 444,834 | 359,918 | 157,235 | 188,119 | 195,867 |
| 10 | 580,678 | 640,082 | 587,577 | 472,368 | 369,954 | 159,834 | 181,510 | 207,033 |
| Average | 619,772 | 654,281 | 584,653 | 467,856 | 384,393 | 163,225 | 183,257 | 195,013 |

Total Average: 406,556
Table 6. Running time for different initial values of Example 3 by our method with test point p = (3, 4, 5, 6, 7).

| t0 | −10 | −8 | −6 | −4 | 4 | 8 | 12 | 16 |
|---|---|---|---|---|---|---|---|---|
| α | 1.15874 | 1.15874 | 1.15874 | 1.15874 | 1.15874 | 1.15874 | 1.15874 | 1.15874 |
| 1 | 407,427 | 425,388 | 387,337 | 306,115 | 110,887 | 161,079 | 187,144 | 184,119 |
| 2 | 417,729 | 446,171 | 398,801 | 341,895 | 121,148 | 169,115 | 169,954 | 194,671 |
| 3 | 420,894 | 390,507 | 383,308 | 260,183 | 115,033 | 165,103 | 171,989 | 198,884 |
| 4 | 383,836 | 421,365 | 427,391 | 242,641 | 109,521 | 161,121 | 179,152 | 195,714 |
| 5 | 373,696 | 421,551 | 373,171 | 266,584 | 120,844 | 187,930 | 179,184 | 186,309 |
| 6 | 374,791 | 445,114 | 373,974 | 242,889 | 119,449 | 183,082 | 180,269 | 201,487 |
| 7 | 381,353 | 408,011 | 402,073 | 216,762 | 109,054 | 162,402 | 172,013 | 188,206 |
| 8 | 398,662 | 442,008 | 373,328 | 194,821 | 119,236 | 192,990 | 180,472 | 197,299 |
| 9 | 364,491 | 417,139 | 396,843 | 230,070 | 110,243 | 164,273 | 204,410 | 196,163 |
| 10 | 387,246 | 427,188 | 394,694 | 191,799 | 120,759 | 155,029 | 170,059 | 226,270 |
| Average | 391,013 | 424,444 | 391,092 | 249,376 | 115,617 | 170,212 | 179,465 | 196,912 |

Total Average: 264,766
Table 7. Running time for different initial values of Example 3 by our method with test point p = (30, 40, 50, 60, 70).

| t0 | −10 | −8 | −6 | −4 | 4 | 8 | 12 | 16 |
|---|---|---|---|---|---|---|---|---|
| α | 1.235289 | 1.235289 | 1.235289 | 1.235289 | 1.235289 | 1.235289 | 1.235289 | 1.235289 |
| 1 | 1,190,730 | 475,499 | 453,879 | 369,551 | 133,651 | 191,093 | 208,202 | 223,695 |
| 2 | 1,031,760 | 500,975 | 486,534 | 380,881 | 133,638 | 190,959 | 208,490 | 236,637 |
| 3 | 482,018 | 475,395 | 450,480 | 297,272 | 133,674 | 199,528 | 208,292 | 223,312 |
| 4 | 428,081 | 475,588 | 475,100 | 277,356 | 133,635 | 186,919 | 208,438 | 223,802 |
| 5 | 455,282 | 475,033 | 448,776 | 296,510 | 133,535 | 220,570 | 208,139 | 223,471 |
| 6 | 428,321 | 499,776 | 448,617 | 277,353 | 133,590 | 220,625 | 208,046 | 223,213 |
| 7 | 428,246 | 474,978 | 474,667 | 247,245 | 133,620 | 192,326 | 208,101 | 230,791 |
| 8 | 453,374 | 502,500 | 448,503 | 235,415 | 133,594 | 220,635 | 208,087 | 223,183 |
| 9 | 426,949 | 474,816 | 474,167 | 275,526 | 133,546 | 198,204 | 245,226 | 223,213 |
| 10 | 452,306 | 499,605 | 448,409 | 235,207 | 134,128 | 173,843 | 208,127 | 262,661 |
| Average | 577,707 | 485,417 | 460,913 | 289,232 | 133,661 | 199,470 | 211,915 | 229,398 |

Total Average: 323,464
Table 8. Running time for different initial values of Example 4 by our method.

| t0 | −100 | −4 | 5 | 7 | 8 | 10 | 11 | 100 |
|---|---|---|---|---|---|---|---|---|
| α | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 |
| 1 | 63,345 | 35,580 | 27,069 | 41,551 | 22,304 | 36,858 | 21,478 | 72,257 |
| 2 | 63,192 | 36,203 | 28,160 | 41,733 | 20,042 | 38,680 | 20,338 | 71,620 |
| 3 | 61,306 | 33,833 | 27,400 | 44,198 | 23,078 | 37,704 | 23,757 | 73,108 |
| 4 | 66,627 | 34,502 | 26,014 | 44,160 | 21,147 | 39,374 | 22,530 | 70,154 |
| 5 | 62,583 | 35,053 | 29,275 | 42,800 | 20,817 | 39,339 | 23,046 | 73,189 |
| 6 | 63,957 | 34,398 | 25,650 | 42,282 | 22,184 | 37,376 | 20,070 | 75,872 |
| 7 | 60,865 | 35,929 | 28,944 | 42,134 | 19,964 | 40,078 | 21,943 | 71,608 |
| 8 | 63,522 | 35,427 | 27,578 | 41,688 | 23,650 | 39,456 | 21,076 | 76,283 |
| 9 | 60,551 | 35,508 | 28,563 | 44,542 | 20,280 | 38,463 | 20,596 | 71,781 |
| 10 | 62,216 | 33,987 | 27,830 | 46,130 | 22,781 | 39,209 | 20,349 | 73,296 |
| Average | 62,816 | 35,042 | 27,648 | 43,122 | 21,625 | 38,654 | 21,518 | 72,917 |

Total Average: 40,418
Table 9. Running time for different initial values of Example 4 by the H-H-H method.

| t0 | −100 | −4 | 5 | 7 | 8 | 10 | 11 | 100 |
|---|---|---|---|---|---|---|---|---|
| α | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 |
| 1 | 424,579 | 357,276 | 443,858 | 179,583 | 176,984 | 175,859 | 175,249 | 178,445 |
| 2 | 425,680 | 358,510 | 179,137 | 177,849 | 182,701 | 176,665 | 176,463 | 207,164 |
| 3 | 359,794 | 356,912 | 180,000 | 180,472 | 177,867 | 179,743 | 178,929 | 179,372 |
| 4 | 371,119 | 357,214 | 179,567 | 179,804 | 184,542 | 177,675 | 177,854 | 179,651 |
| 5 | 358,128 | 358,119 | 232,337 | 179,285 | 179,113 | 175,632 | 177,690 | 181,976 |
| 6 | 358,470 | 357,893 | 179,985 | 179,941 | 178,600 | 178,289 | 178,565 | 181,868 |
| 7 | 358,083 | 359,391 | 178,815 | 177,857 | 177,613 | 178,014 | 177,385 | 179,361 |
| 8 | 477,393 | 357,011 | 178,029 | 179,525 | 175,684 | 176,000 | 175,413 | 180,966 |
| 9 | 356,254 | 359,356 | 176,148 | 178,581 | 176,351 | 177,024 | 185,103 | 180,013 |
| 10 | 356,801 | 359,773 | 213,327 | 177,252 | 176,993 | 178,060 | 177,655 | 181,427 |
| Average | 384,630 | 358,146 | 214,120 | 179,015 | 178,645 | 177,296 | 178,031 | 183,024 |

Total Average: 231,613
Table 10. Running time for different initial values of Example 4 by the algorithm [6].

| t0 | −100 | −4 | 5 | 7 | 8 | 10 | 11 | 100 |
|---|---|---|---|---|---|---|---|---|
| α | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 | 1.106305 |
| 1 | 681,353 | 107,102 | 119,083 | 120,328 | 122,504 | 115,181 | 113,566 | 542,116 |
| 2 | 725,571 | 124,514 | 136,810 | 121,111 | 116,824 | 111,116 | 117,466 | 5,250,481 |
| 3 | 669,249 | 111,052 | 122,151 | 125,261 | 124,865 | 116,105 | 120,309 | 5,523,805 |
| 4 | 713,982 | 112,146 | 131,494 | 118,104 | 121,099 | 111,410 | 118,658 | 5,407,166 |
| 5 | 699,433 | 111,347 | 118,830 | 121,003 | 118,694 | 115,182 | 124,917 | 5,259,412 |
| 6 | 693,396 | 113,323 | 116,046 | 109,176 | 108,194 | 111,420 | 117,342 | 5,508,049 |
| 7 | 691,375 | 114,667 | 115,748 | 123,330 | 127,812 | 118,635 | 119,208 | 5,348,517 |
| 8 | 663,125 | 107,484 | 127,493 | 120,134 | 116,818 | 111,717 | 117,079 | 5,446,703 |
| 9 | 731,148 | 128,918 | 122,897 | 120,947 | 120,985 | 113,777 | 125,463 | 5,251,580 |
| 10 | 676,286 | 128,567 | 130,775 | 118,031 | 116,725 | 111,095 | 108,275 | 5,356,125 |
| Average | 694,492 | 115,912 | 124,133 | 119,743 | 119,452 | 113,564 | 118,228 | 5,377,300 |

Total Average: 847,853
Table 11. Running time for different initial values of Example 5 by our method.

| t0 | −100 | −4 | 5 | 7 | 8 | 10 | 11 | 100 |
|---|---|---|---|---|---|---|---|---|
| α | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 |
| 1 | 52,010 | 27,426 | 21,791 | 33,323 | 18,399 | 29,551 | 16,486 | 58,995 |
| 2 | 50,335 | 29,269 | 23,949 | 32,810 | 15,820 | 30,342 | 16,066 | 58,080 |
| 3 | 49,047 | 26,841 | 23,061 | 37,063 | 19,611 | 31,569 | 19,756 | 57,458 |
| 4 | 52,651 | 29,124 | 21,403 | 33,838 | 17,472 | 33,295 | 18,583 | 54,566 |
| 5 | 49,871 | 29,814 | 25,062 | 35,870 | 16,655 | 32,949 | 18,304 | 61,860 |
| 6 | 53,651 | 28,678 | 19,550 | 35,731 | 18,373 | 31,429 | 16,342 | 59,570 |
| 7 | 47,275 | 28,115 | 24,177 | 35,456 | 16,933 | 30,510 | 18,010 | 59,042 |
| 8 | 49,982 | 27,896 | 22,639 | 34,292 | 19,927 | 30,959 | 16,449 | 63,652 |
| 9 | 49,704 | 29,359 | 22,502 | 34,164 | 17,274 | 30,391 | 16,044 | 61,373 |
| 10 | 51,268 | 25,859 | 22,736 | 37,190 | 17,342 | 30,864 | 16,060 | 56,564 |
| Average | 50,579 | 28,238 | 22,687 | 34,974 | 17,781 | 31,186 | 17,210 | 59,116 |

Total Average: 32,721
Table 12. Running time for different initial values of Example 5 by the algorithm [6].

| t0 | −100 | −4 | 5 | 7 | 8 | 10 | 11 | 100 |
|---|---|---|---|---|---|---|---|---|
| α | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 |
| 1 | 308,942 | 191,002 | 152,199 | 235,287 | 114,568 | 199,404 | 110,512 | 379,771 |
| 2 | 348,554 | 175,800 | 146,728 | 232,260 | 102,698 | 190,860 | 101,754 | 352,834 |
| 3 | 311,680 | 190,863 | 148,384 | 242,131 | 118,602 | 207,376 | 125,517 | 408,978 |
| 4 | 332,421 | 166,849 | 145,131 | 234,536 | 102,795 | 198,956 | 113,523 | 370,826 |
| 5 | 319,660 | 185,059 | 160,358 | 235,072 | 108,557 | 211,429 | 119,911 | 350,188 |
| 6 | 329,882 | 177,252 | 132,242 | 233,702 | 120,945 | 199,978 | 107,366 | 363,299 |
| 7 | 304,977 | 200,038 | 151,398 | 229,166 | 102,315 | 220,162 | 122,013 | 354,466 |
| 8 | 326,645 | 171,624 | 137,588 | 228,181 | 113,627 | 195,782 | 108,512 | 369,899 |
| 9 | 291,369 | 191,878 | 156,871 | 247,614 | 108,418 | 189,534 | 112,319 | 363,905 |
| 10 | 326,221 | 174,148 | 139,415 | 239,836 | 128,377 | 190,831 | 111,411 | 378,781 |
| Average | 320,035 | 182,451 | 147,031 | 235,779 | 112,090 | 200,431 | 113,284 | 369,294 |

Total Average: 210,049
Table 13. Comparison of iterations by different methods in Example 5.

| t0 | −100.0 | −4.0 | 5.0 | 7.0 | 8.0 | 10.0 | 11.0 | 100.0 |
|---|---|---|---|---|---|---|---|---|
| α | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 | 1.28902 |
| H-H-H method [38,39,40] | NC | NC | NC | NC | NC | NC | NC | NC |
| Second order method [6] | 75 | 30 | 32 | 32 | 33 | 29 | 31 | 101 |
| Newton's method | NC | NC | NC | NC | NC | NC | NC | NC |
| Our method | 15 | 19 | 17 | 17 | 15 | 17 | 15 | 23 |
Table 14. Time comparison of various algorithms.

| Algorithm | Ours | H-H-H | Algorithm [1] | Algorithm [14] | Algorithm [6] | Algorithm [26] |
|---|---|---|---|---|---|---|
| Time (μs) | 145.5 | 231.6 | 680.8 | 1270.8 | 528.9 | 406.6 |
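The tables above benchmark two update rules for the orthogonality condition f(t) = ⟨c(t) − p, c′(t)⟩ = 0: the first-order H-H-H foot-point step and the second-order Newton step. The following is a minimal sketch of such a hybrid iteration, not the authors' implementation; the function names, the fallback threshold, and the combination strategy are our own illustrative assumptions.

```python
import numpy as np

def hybrid_project(c, dc, ddc, p, t0, tol=1e-12, max_iter=100):
    """Sketch: project point p onto parametric curve c(t) in R^n by
    solving f(t) = <c(t) - p, c'(t)> = 0 for the foot-point parameter.

    Uses the second-order Newton step when f'(t) is safely nonzero,
    and falls back to the first-order foot-point (H-H-H style) step
    otherwise. c, dc, ddc evaluate the curve and its derivatives.
    """
    t = float(t0)
    for _ in range(max_iter):
        d = c(t) - p                        # residual vector c(t) - p
        f = np.dot(d, dc(t))                # orthogonality condition f(t)
        df = np.dot(dc(t), dc(t)) + np.dot(d, ddc(t))  # f'(t)
        if abs(df) > 1e-12:
            step = -f / df                  # Newton step (second order)
        else:
            step = -f / np.dot(dc(t), dc(t))  # foot-point step (first order)
        t += step
        if abs(step) < tol:
            break
    return t

# Example: project p = (2, 0) onto the unit circle c(t) = (cos t, sin t);
# the foot point lies at t = 0.
circle = lambda t: np.array([np.cos(t), np.sin(t)])
d_circle = lambda t: np.array([-np.sin(t), np.cos(t)])
dd_circle = lambda t: np.array([-np.cos(t), -np.sin(t)])
t_star = hybrid_project(circle, d_circle, dd_circle, np.array([2.0, 0.0]), 0.5)
```

On this example the Newton branch is taken at every step and the iterate converges to the foot point t = 0 in a handful of iterations, consistent with the small iteration counts reported for the hybrid method in Table 13.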

Liang, J.; Hou, L.; Li, X.; Pan, F.; Cheng, T.; Wang, L. Hybrid Second Order Method for Orthogonal Projection onto Parametric Curve in n-Dimensional Euclidean Space. Mathematics 2018, 6, 306. https://doi.org/10.3390/math6120306
