Symmetry
  • Article
  • Open Access

15 May 2018

Integrated Hybrid Second Order Algorithm for Orthogonal Projection onto a Planar Implicit Curve

1 College of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, China
2 Graduate School, Guizhou Minzu University, Guiyang 550025, China
3 School of Mathematics and Computer Science, Yichun University, Yichun 336000, China
4 Department of Science, Taiyuan Institute of Technology, Taiyuan 030008, China
This article belongs to the Special Issue Advances in Computer Graphics, Geometric Modeling, and Virtual and Augmented Reality

Abstract

The computation of the minimum distance between a point and a planar implicit curve is a very important problem in geometric modeling and graphics. An integrated hybrid second order algorithm to facilitate the computation is presented. The proofs indicate that the convergence of the algorithm is independent of the initial value and demonstrate that its convergence order is up to two. Some numerical examples further confirm that the algorithm is more robust and efficient than the existing methods.

1. Introduction

Thanks to its useful properties, the implicit curve has many applications. As a result, how to render implicit curves and surfaces is an important topic in computer graphics [1], which usually adopts four techniques: (1) representation conversion; (2) curve tracking; (3) space subdivision; and (4) symbolic computation. Using approximate distance tests to replace the Euclidean distance test, a practical rendering algorithm is proposed to rasterize algebraic curves in [2]. Employing the idea that field functions can be combined both on their values and gradients, a set of binary composition operators is developed to tackle four major problems in constructive modeling in [3]. As a powerful tool for implicit shape modeling, a new type of bivariate spline function is applied in [4]; it can be created from any given set of 2D polygons that partitions the 2D plane, with any required degree of smoothness. Furthermore, the spline basis functions created by the proposed procedure are piecewise polynomials with an explicit analytical form.
Aside from rendering in computer graphics, implicit curves also play an important role in other aspects of computer graphics. To facilitate applications, it is important to compute the intersection of parametric and algebraic curves. Elimination theory and a matrix determinant expression of the resultant of the intersection equations are used in [5]. Some researchers transform the problem of intersection into that of computing the eigenvalues and eigenvectors of a numeric matrix. Similar to elimination theory and the matrix determinant expression, combining marching methods with the algebraic formulation generates an efficient algorithm to compute the intersection of algebraic and NURBS surfaces in [6]. For the cases with a degenerate intersection of two quadric surfaces, which are frequently applied in geometric and solid modeling, a simple method is proposed in [7] to determine the conic types without actually computing the intersection and to enumerate all possible conic types. M. Aizenshtein et al. [8] present a solver to robustly solve well-constrained n × n transcendental systems, which applies to curve-curve and curve-surface intersections, ray-trap and geometric constraint problems.
To improve implicit modeling, many techniques have been developed to compute the distance between a point and an implicit curve or surface. In order to compute a bounded Hausdorff distance between two real space algebraic curves, a theoretical result in [9] reduces the bound of the Hausdorff distance of algebraic curves from the spatial to the planar case. Ron [10] discusses and analyzes formulas to calculate the curvature of implicit planar curves, the curvature and torsion of implicit space curves and the mean and Gaussian curvature of implicit surfaces, as well as curvature formulas in higher dimensions. Using parametric approximation of an implicit curve or surface, Thomas et al. [11] introduce a relatively small number of low-degree curve segments or surface patches to approximate an implicit curve or surface accurately and further construct monoid curves and surfaces after eliminating the undesirable singularities and the undesirable branches normally associated with the implicit representation. Slightly different from ref. [11], Eva et al. [12] use the support function representation to identify and approximate monotonous segments of algebraic curves. Anderson et al. [13] present an efficient and robust algorithm to compute the foot points for planar implicit curves.
Contribution: An integrated hybrid second order algorithm is presented for orthogonal projection onto planar implicit curves. For any test point p, for a planar implicit curve of any degree with or without singular points, and for any distance between the test point and the planar implicit curve, the algorithm converges. It consists of two parts: the hybrid second order algorithm and the initial iterative value estimation algorithm.
The hybrid second order algorithm fuses three basic ideas: (1) the tangent line orthogonal iteration method with one correction; (2) the steepest descent method, to force the iteration point to fall on the planar implicit curve as far as possible; (3) the Newton–Raphson iterative method, to accelerate the iteration.
Therefore, the hybrid second order algorithm is composed of six steps. The first step uses the steepest descent method (a basic Newton-type iteration) to force the iterative value onto the planar implicit curve; this step is not associated with the test point p. In the second step, Newton’s iterative method employs the relationship determined by the test point p to accelerate the iteration process. The third step finds the orthogonal projection point q of the test point p on the tangent line that goes through the current iterative point. The fourth step gets the linear orthogonal increment value. The same relationship as in the second step is used once more to accelerate the iteration process in the fifth step. The final step applies a correction to the iterative values resulting from the fourth and fifth steps.
One problem with the hybrid second order algorithm is that it can diverge if the test point p lies particularly far away from the planar implicit curve. It has been found that when the initial iterative point is close to the orthogonal projection point p_Γ, the iteration converges no matter how far away the test point p is from the planar implicit curve. An algorithm, named the initial iterative value estimation algorithm, is therefore proposed to drive the initial iterative value toward the orthogonal projection point p_Γ as much as possible. Accordingly, the hybrid second order algorithm combined with the initial iterative value estimation algorithm is named the integrated hybrid second order algorithm.
The rest of this paper is organized as follows. Section 2 presents related work for orthogonal projection onto the planar implicit curve. Section 3 presents the integrated hybrid second order algorithm for orthogonal projection onto the planar implicit curve. In Section 4, convergent analysis for the integrated hybrid second order algorithm is described. The experimental results including the evaluation of performance data are given in Section 5. Finally, Section 6 and Section 7 conclude the paper.

3. Integrated Hybrid Second Order Algorithm

Let Γ: f(x) = f(x, y) = 0 be a smooth planar implicit curve, and let p = (p_1, p_2) be a point in the vicinity of the curve Γ (the test point). Assume that s is the arc length parameter of the planar implicit curve Γ: f(x) = f(x, y) = 0, and that t = (dx/ds, dy/ds) is the tangent vector along the implicit curve Γ: f(x) = 0. The orthogonal projection point p_Γ satisfies the relationship:
p_Γ = argmin_{x ∈ Γ} ‖p − x‖, f(p_Γ) = 0, ∇f(p_Γ) ∧ (p − p_Γ) = 0, (11)
where ∧ is the difference-product ([14]).

3.1. Orthogonal Tangent Vector Method

The derivative of the planar implicit curve f(x) with respect to the parameter s is,
⟨t, ∇f⟩ = 0, (12)
where ∇ = (∂/∂x, ∂/∂y) is the Hamiltonian operator and ⟨·,·⟩ is the inner product. Its geometric meaning is that the tangent vector t is orthogonal to the corresponding gradient ∇f. The combination of the tangent vector t and Formula (12) will generate:
⟨t, ∇f⟩ = 0, ‖t‖ = 1. (13)
From (13), it is not difficult to see that the unit tangent vector t^0 is:
t^0 = (f_y, −f_x)/‖∇f‖.
The following first order iterative algorithm determines the foot point of p on Γ .
y_n = x_n + sign(⟨p − x_n, t^0⟩) t^0 Δs, (14)
where t^0 = (f_y, −f_x)/‖∇f‖ and Δs = ‖q − x_n‖. Here, q is the orthogonal projection of the test point p onto the tangent line determined by the current iterative point x_n (see Figure 1). Formula (14) can be expressed as,
q = p − (⟨p − x_n, ∇f(x_n)⟩/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n), x_{n+1} = y_n = x_n + sign(⟨p − x_n, t^0⟩) t^0 Δs, (15)
where x_n is the current iterative point. Many numerical tests illustrate that the iterative Formula (15) depends on the initial iterative point; namely, it is very difficult for the iterative value y_n to fall on the planar implicit curve.
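To make the update in Formula (15) concrete, the following is a minimal C++ sketch of the iteration (the paper's own experiments were compiled with g++). The example curve f(x, y) = x^6 + 2y^4 − 4 is borrowed from Example 6 in Section 5, and the hard-coded test point and initial value are illustrative assumptions only.

```cpp
#include <cmath>
#include <cstdio>

// Example curve f(x,y) = x^6 + 2y^4 - 4 (the curve of Example 6) and its gradient.
static double f (double x, double y) { return std::pow(x, 6) + 2.0 * std::pow(y, 4) - 4.0; }
static double fx(double x, double y) { return 6.0 * std::pow(x, 5); }
static double fy(double x, double y) { return 8.0 * std::pow(y, 3); }

// One step of the tangent-line foot-point iteration, Formula (15):
//   q       = p - (<p - x_n, grad f> / <grad f, grad f>) grad f    (foot point on the tangent line)
//   x_{n+1} = x_n + sign(<p - x_n, t0>) t0 * ||q - x_n||           (move along the unit tangent t0)
void formula15_step(double px, double py, double &xn, double &yn) {
    double gx = fx(xn, yn), gy = fy(xn, yn);
    double g2 = gx * gx + gy * gy;                    // <grad f, grad f>
    double dx = px - xn, dy = py - yn;                // p - x_n
    double lam = (dx * gx + dy * gy) / g2;
    double qx = px - lam * gx, qy = py - lam * gy;    // foot point q on the tangent line
    double nrm = std::sqrt(g2);
    double t0x = gy / nrm, t0y = -gx / nrm;           // unit tangent t0 = (f_y, -f_x)/||grad f||
    double ds = std::hypot(qx - xn, qy - yn);         // increment Delta s = ||q - x_n||
    double s = (dx * t0x + dy * t0y) >= 0.0 ? 1.0 : -1.0;
    xn += s * t0x * ds;
    yn += s * t0y * ds;
}

int main() {
    double xn = 1.0, yn = 1.0;                        // initial iterative point x_0
    for (int i = 0; i < 20; ++i) formula15_step(2.0, 1.5, xn, yn);
    std::printf("x_n = (%.12f, %.12f), f = %.3e\n", xn, yn, f(xn, yn));
}
```

As the text notes, iterating Formula (15) alone does not force the point onto the curve; the printed residual f(x_n) generally stays away from zero, which motivates the preprocessing of the next subsection.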
Figure 1. Graphic demonstration for the hybrid second order algorithm.

3.2. Steepest Descent Method

To force the iterative value y_n onto the planar implicit curve f(x) as far as possible, a preprocessing step is introduced. Before the implementation of the iterative Formula (15), the steepest descent method is adopted, namely a basic Newton iterative formula is added such that the iterative value y_n falls on the planar implicit curve f(x) as much as possible.
y_n = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n), q = p − (⟨p − y_n, ∇f(y_n)⟩/⟨∇f(y_n), ∇f(y_n)⟩) ∇f(y_n), x_{n+1} = z_n = y_n + sign(⟨p − y_n, t^0⟩) t^0 Δs, (16)
where t^0 = (f_y, −f_x)/‖∇f‖ and Δs = ‖q − y_n‖.

3.3. Linear Calibrating Method

Although more robust than the iterative Formula (15) to a certain extent, the iterative Formula (16) often loses convergence if the test point p or the initial iterative point x_0 takes different values. Especially for a large Δs, the iterative point z_n deviates greatly from the planar implicit curve, namely f(z_n) = Δe with |Δe| > ε. In this case, a correction for the deviation of the iterative point z_n is proposed as follows. If |f(z_n)| > ε, the increment δz_n = (δx, δy) is used for correction. That is to say, z̄_n = z_n + δz_n, where z_n and z̄_n are the iteration values before and after correction, respectively, and |f(z̄_n)| < ε. The correction aims to make the deviation of the iteration value z_n from the planar implicit curve as small as possible. Let δz_n be perpendicular to the increment value Δz_n = z_n − y_n = sign(⟨p − y_n, t^0⟩) t^0 Δs and orthogonal to the planar implicit curve, such that ⟨δz_n, Δz_n⟩ = 0 and ⟨∇f, δz_n⟩ = −Δe, where ∇f and Δe take their values at z_n. Then, it is easy to get δz_n = (−Δe, 0)[∇f^T, (Δz_n)^T]^{−1} and z̄_n = z_n + (−Δe, 0)[∇f^T, (Δz_n)^T]^{−1}. The corresponding iterative formula with correction will be,
y_n = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n), q = p − (⟨p − y_n, ∇f(y_n)⟩/⟨∇f(y_n), ∇f(y_n)⟩) ∇f(y_n), z_n = y_n + sign(⟨p − y_n, t^0⟩) t^0 Δs, x_{n+1} = z_n + (−Δe, 0)[∇f^T, (Δz_n)^T]^{−1}, (17)
where t^0 = (f_y, −f_x)/‖∇f‖, Δs = ‖q − y_n‖, Δz_n = sign(⟨p − y_n, t^0⟩) t^0 Δs and f(z_n) = Δe. Obviously, the stability and efficiency of the iterative Formula (17) improve greatly compared with the previous iterative Formulas (15) and (16).
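The correction amounts to one small 2 × 2 linear solve. The C++ sketch below assumes the sign convention ⟨∇f, δz_n⟩ = −Δe stated above, so that f(z_n + δz_n) ≈ 0 to first order; the function and variable names are illustrative, not taken from the paper.

```cpp
#include <cmath>

// Linear calibrating step of Formula (17): find the correction delta z_n that is orthogonal to
// the last increment Delta z_n and cancels the residual f(z_n) = de to first order:
//   <grad f, delta> = -de,   <Delta z_n, delta> = 0.
// Solved by Cramer's rule; returns false when the 2x2 matrix [grad f^T, (Delta z_n)^T] is
// singular, in which case the correction is simply skipped.
bool calibrate(double gx, double gy,      // gradient of f at z_n
               double dzx, double dzy,    // last increment Delta z_n
               double de,                 // residual f(z_n)
               double &cx, double &cy) {  // output correction delta z_n
    double det = gx * dzy - gy * dzx;
    if (std::fabs(det) < 1e-300) return false;
    cx = -de * dzy / det;                 // Cramer's rule, first component
    cy =  de * dzx / det;                 // Cramer's rule, second component
    return true;
}
```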

3.4. Newton’s Accelerated Method

Many tests conducted for the iterative Formula (17) indicate that it is sometimes not convergent when the test point lies far from the planar implicit curve. Newton’s accelerated method is then adopted to correct the problem. For the classic Newton second order iterative method, the iterative expression is:
x_{n+1} = x_n − (F_0(x_n)/⟨∇F_0(x_n), ∇F_0(x_n)⟩) ∇F_0(x_n), (18)
where ⟨∇F_0(x), ∇F_0(x)⟩ is the inner product of the gradient of the function F_0(x) with itself. The function F_0(x) is expressed as,
F_0(x) = (p − x) × ∇f(x) = 0, (19)
where the symbol × denotes the determinant of the matrix formed by the two planar vectors p − x and ∇f(x). In order to improve the stability and rate of convergence, based on the iterative Formula (17), the hybrid second order algorithm is proposed to orthogonally project onto the planar implicit curve f(x). Between Step 1 and Step 2 of the iterative Formula (17) and between Step 3 and Step 4 of the same formula, the iterative Formula (18) is inserted, i.e., twice in total. After this, the stability, the rapidity, the efficiency and the numerical iterative accuracy of the iterative algorithm (17) all improve. Then, the iterative formula becomes,
y_n = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n), z_n = y_n − (F_0(y_n)/⟨∇F_0(y_n), ∇F_0(y_n)⟩) ∇F_0(y_n), q = p − (⟨p − z_n, ∇f(z_n)⟩/⟨∇f(z_n), ∇f(z_n)⟩) ∇f(z_n), u_n = z_n + sign(⟨p − z_n, t^0⟩) t^0 Δs, v_n = u_n − (F_0(u_n)/⟨∇F_0(u_n), ∇F_0(u_n)⟩) ∇F_0(u_n), x_{n+1} = v_n + (−Δe, 0)[∇f^T, (Δv_n)^T]^{−1} (if det[∇f^T, (Δv_n)^T] = 0, x_{n+1} = v_n), (20)
where t^0 = (f_y, −f_x)/‖∇f‖, Δs = ‖q − z_n‖, f(v_n) = Δe and Δv_n = v_n − u_n = −(F_0(u_n)/⟨∇F_0(u_n), ∇F_0(u_n)⟩) ∇F_0(u_n). Iterative termination for the iterative Formula (20) satisfies ‖x_{n+1} − x_n‖ < ε. The robustness and the stability of the iterative Formula (20) improve compared with the previous iteration formulas. That is to say, even for a test point p far away from the planar implicit curve, the iterative Formula (20) is still convergent.
After normalization of the second equation and the fifth equation in the iterative Formula (20), it becomes,
y_n = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n), z_n = y_n − (F(y_n)/⟨∇F(y_n), ∇F(y_n)⟩) ∇F(y_n), q = p − (⟨p − z_n, ∇f(z_n)⟩/⟨∇f(z_n), ∇f(z_n)⟩) ∇f(z_n), u_n = z_n + sign(⟨p − z_n, t^0⟩) t^0 Δs, v_n = u_n − (F(u_n)/⟨∇F(u_n), ∇F(u_n)⟩) ∇F(u_n), x_{n+1} = v_n + (−Δe, 0)[∇f^T, (Δv_n)^T]^{−1} (if det[∇f^T, (Δv_n)^T] = 0, x_{n+1} = v_n), (21)
where F(x) = F_0(x)/√⟨∇f(x), ∇f(x)⟩. The iterative Formula (21) can be implemented in six steps. The first step forces the point x_n onto the planar implicit curve using the basic Newton iterative formula; this step is not associated with the test point p and works for any initial iterative point. The second step uses Newton’s iterative method to accelerate the whole iteration process and get the new iterative point z_n, which is associated with the test point p. The third step gets the orthogonal projection point q (footpoint) on the tangent line to f(x). The fourth equation in the iterative Formula (21) yields the new iterative point u_n. The third step and the fourth step compute the linear orthogonal increment, which, together with the linear calibrating method of the sixth step, is the core component of the iterative Formula (21). The fifth step accelerates the previous steps again and yields an iterative point associated with the test point p. The sixth step corrects the iterative result of the previous three steps. Therefore, the whole six steps ensure the robustness of the whole iteration process. The above procedure is repeated until the iterative point coincides with the orthogonal projection point p_Γ (see Figure 1 and the detailed explanation of Remark 3).
Remark 1.
In the actual implementation of the iterative Formula (21) of the hybrid second order algorithm (Algorithm 1), three techniques are used to optimize the process. On the right-hand side of Step 1, Step 2, Step 3 and Step 5, the part in parentheses is calculated first and then the part outside the parentheses, to prevent overflow in the intermediate calculation. Error handling is added for the second term on the right-hand side of Step 4 in the iterative Formula (21): if ⟨p − z_n, t^0⟩ = 0, set sign(⟨p − z_n, t^0⟩) = 1. For the second term on the right-hand side of Step 6 of the iterative Formula (21), if the determinant of [∇f^T, (Δv_n)^T] is zero, then x_{n+1} = v_n; namely, if the correction term (−Δe, 0)[∇f^T, (Δv_n)^T]^{−1} cannot be computed, substitute the sixth step with x_{n+1} = v_n to avoid the overflow problem.
According to the analyses above, the hybrid second order algorithm is presented as follows.
Algorithm 1: Hybrid second order algorithm.
  Input: Initial iterative value x 0 , test point p and planar implicit curve f ( x ) = 0 .
  Output: The orthogonal projection point p Γ .
  Description:
  Step 1:
     x_{n+1} = x_0;
      do{
         x_n = x_{n+1};
        Update x_{n+1} according to the iterative Formula (21);
      }while( ‖x_{n+1} − x_n‖_2 > ε_1 );
  Step 2:
         p_Γ = x_{n+1};
     return p_Γ;
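For readers who prefer code, here is a self-contained C++ sketch of Algorithm 1, i.e., the six sub-steps of the iterative Formula (21) wrapped in the do-while loop above. The example curve f(x, y) = x^6 + 2y^4 − 4 and the test point are illustrative assumptions taken from Example 6 in Section 5, and, for brevity, the gradient of F is approximated by central differences instead of being derived analytically.

```cpp
#include <cmath>
#include <cstdio>

// Example curve f(x,y) = x^6 + 2y^4 - 4 and its gradient; replace with any smooth implicit curve.
static double f (double x, double y) { return std::pow(x, 6) + 2.0 * std::pow(y, 4) - 4.0; }
static double fx(double x, double y) { return 6.0 * std::pow(x, 5); }
static double fy(double x, double y) { return 8.0 * std::pow(y, 3); }

// Normalized cross product F(x) = ((p - x) x grad f(x)) / ||grad f(x)|| used in Steps 2 and 5.
static double F(double px, double py, double x, double y) {
    double gx = fx(x, y), gy = fy(x, y);
    return ((px - x) * gy - (py - y) * gx) / std::sqrt(gx * gx + gy * gy);
}

// One pass through the six sub-steps of Formula (21); (x, y) holds x_n on entry, x_{n+1} on exit.
static void formula21_step(double px, double py, double &x, double &y) {
    const double h = 1e-6;                                    // step for the central differences
    // Step 1: steepest descent toward the curve, y_n = x_n - (f/<grad f, grad f>) grad f.
    double gx = fx(x, y), gy = fy(x, y), g2 = gx * gx + gy * gy;
    double ax = x - f(x, y) / g2 * gx, ay = y - f(x, y) / g2 * gy;
    // Step 2: Newton step on F, giving z_n.
    double Fu = F(px, py, ax, ay);
    double Fx = (F(px, py, ax + h, ay) - F(px, py, ax - h, ay)) / (2 * h);
    double Fy = (F(px, py, ax, ay + h) - F(px, py, ax, ay - h)) / (2 * h);
    double FF = Fx * Fx + Fy * Fy;
    double zx = ax - Fu / FF * Fx, zy = ay - Fu / FF * Fy;
    // Step 3: foot point q of the test point p on the tangent line at z_n.
    gx = fx(zx, zy); gy = fy(zx, zy); g2 = gx * gx + gy * gy;
    double lam = ((px - zx) * gx + (py - zy) * gy) / g2;
    double qx = px - lam * gx, qy = py - lam * gy;
    // Step 4: linear orthogonal increment along the unit tangent t0 = (f_y, -f_x)/||grad f||.
    double nrm = std::sqrt(g2), t0x = gy / nrm, t0y = -gx / nrm;
    double ds = std::hypot(qx - zx, qy - zy);
    double sgn = ((px - zx) * t0x + (py - zy) * t0y) >= 0.0 ? 1.0 : -1.0;  // Remark 1: sign(0) = 1
    double ux = zx + sgn * t0x * ds, uy = zy + sgn * t0y * ds;
    // Step 5: second Newton step on F, giving v_n and the increment Delta v_n.
    Fu = F(px, py, ux, uy);
    Fx = (F(px, py, ux + h, uy) - F(px, py, ux - h, uy)) / (2 * h);
    Fy = (F(px, py, ux, uy + h) - F(px, py, ux, uy - h)) / (2 * h);
    FF = Fx * Fx + Fy * Fy;
    double dvx = -Fu / FF * Fx, dvy = -Fu / FF * Fy;
    double vx = ux + dvx, vy = uy + dvy;
    // Step 6: linear calibration, <grad f, delta> = -f(v_n) and <Delta v_n, delta> = 0.
    gx = fx(vx, vy); gy = fy(vx, vy);
    double de = f(vx, vy), det = gx * dvy - gy * dvx;
    if (std::fabs(det) > 1e-300) { vx += -de * dvy / det; vy += de * dvx / det; }  // else keep v_n
    x = vx; y = vy;
}

int main() {
    double px = 2.0, py = 1.5;                    // test point p
    double x = 1.0, y = 1.0;                      // initial iterative value x_0
    for (int i = 0; i < 100; ++i) {               // the do-while loop of Algorithm 1, eps_1 = 1e-7
        double x0 = x, y0 = y;
        formula21_step(px, py, x, y);
        if (std::hypot(x - x0, y - y0) < 1e-7) break;
    }
    std::printf("p_Gamma ~ (%.15f, %.15f), f = %.2e\n", x, y, f(x, y));
}
```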
Remark 2.
Many tests demonstrate that if the test point p is not far away from the planar implicit curve, Algorithm 1 will converge for any initial iterative point x_0. For instance, assume the planar implicit curve f(x, y) = x^6 + 2x^5y − 2x^3y^2 + x^4y^3 + 2y^8 − 4 and four different test points (13, 7), (3, 4), (2, 2), (7, 3); Algorithm 1 converges efficiently for the given initial iterative value. See Table 1 for details, where p is the test point, x_0 is the initial iterative point, iterations is the number of iterations, |f(p_Γ)| is the absolute function value at the orthogonal projection point p_Γ and Error_2 = ‖(p − p_Γ) × ∇f(p_Γ)‖.
Table 1. Convergence of the hybrid second order algorithm for four given test points.
However, when the test point p is far away from the planar implicit curve, no matter whether the initial iterative point x_0 is close to the planar implicit curve, Algorithm 1 sometimes produces oscillation such that subsequent iterations cannot ensure convergence. For example, for the same planar implicit curve with test point p = (17, 11) and initial iterative point x_0 = (2, 2), it constantly produces oscillation such that subsequent iterations cannot ensure convergence after 838 iterations (see Table 2).
Table 2. Oscillation of the hybrid second order algorithm for the planar implicit curve with a far-away test point.

3.5. Initial Iterative Value Estimation Algorithm

Through Remark 2, when the test point p is not far away from the planar implicit curve, with the initial iterative point x_0 in any position, Algorithm 1 can ensure convergence. However, when the test point p is far away from the planar implicit curve, even if the initial iterative point x_0 is close to the planar implicit curve, Algorithm 1 sometimes produces oscillation such that subsequent iterations cannot ensure convergence. This is essentially a problem for any Newton-based method. Consider high nonlinearity, which cannot be captured just by f(x, y) = 0: the surface [x, y, f(x, y)] may be very oscillatory in the neighborhood of the z = 0 plane, yet still intersect the z = 0 plane in a single closed branch. In this case, for a test point p far away from the planar implicit curve, any Newton-based method sometimes produces oscillation that causes non-convergence; a counterexample has been given in Remark 2. To solve the problem of non-convergence, a method is proposed to put the initial iterative point x_0 close to the orthogonal projection point p_Γ. Therefore, the task becomes constructing an algorithm such that the initial iterative value x_0 of the iterative Formula (21) and the orthogonal projection point p_Γ are as close as possible. The algorithm can be summarized as follows. Input an initial iterative point x_0, and repeatedly iterate with the basic Newton iterative formula y = x − (f(x)/⟨∇f(x), ∇f(x)⟩) ∇f(x) such that the iterative point lies on the planar implicit curve f(x) (see Figure 2c). After that, iterate once through the formula q = p − (⟨p − x, ∇f(x)⟩/⟨∇f(x), ∇f(x)⟩) ∇f(x), where the blue point denotes the initial iterative value (see Figure 2c). After the first round iteration in Figure 2, replace the initial iterative value with the iterated value q, and do the second round iteration (see Figure 3). After the second round iteration, replace the initial iterative value with the iterated value q again, and do the third round iteration (see Figure 4). The detailed algorithm is the following.
Figure 2. The entire graphical demonstration of the first round iteration in Algorithm 2. (a) Initial status; (b) Intermediate status; (c) Final status.
Figure 3. The entire graphical demonstration of the second round iteration in Algorithm 2. (a) Initial status; (b) Intermediate status; (c) Final status.
Figure 4. The entire graphical demonstration of the third round iteration in Algorithm 2. (a) Initial status; (b) Intermediate status; (c) Final status.
Firstly, the notations for Figure 2, Figure 3 and Figure 4 are clarified. Black and green points represent the test point p and the orthogonal projection point p_Γ, respectively. The blue point denotes x_{n+1} = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n) of Step 2 in Algorithm 2, whether it is on the planar implicit curve f(x) or not. The footpoint q (red point) denotes q in Step 3 of Algorithm 2, and the brown curve describes the planar implicit curve f(x) = 0.
Secondly, Algorithm 2 is interpreted geometrically. Step 2 in Algorithm 2 uses the basic Newton iterative method. That is to say, it repeatedly iterates using the steepest descent method in Section 3.2 until the blue point x_{n+1} = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n) of Step 2 lies on the planar implicit curve f(x). At the same time, through Step 3 in Algorithm 2, it yields the footpoint q (see Figure 2). The integer n of the iteration round counts one after the first round iteration of Algorithm 2. When the blue point is on the planar implicit curve f(x), replace the initial iterative value with the iterated value q, and do the second round iteration; the integer n of the iteration round counts two (see Figure 3). Replace the initial iterative value with the iterated value q again after the second round iteration, and do the third round iteration; the integer n of the iteration round counts three (see Figure 4). When n = 3 in Step 4, exit Algorithm 2. At this time, the current footpoint q from Algorithm 2 will be the initial iterative value for Algorithm 1.
Algorithm 2: Initial iterative value estimation algorithm.
  Input: Initial iterative value x 0 , test point p and planar implicit curve f ( x ) = 0 .
  Output: The footpoint point q.
  Description:
  Step 1: n = 0; x_{n+1} = x_0;
  Step 2:
      do{
        x_n = x_{n+1};
        x_{n+1} = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n);
       }while( ‖x_{n+1} − x_n‖_2 > ε_2 );
  Step 3: q = p − (⟨p − x_{n+1}, ∇f(x_{n+1})⟩/⟨∇f(x_{n+1}), ∇f(x_{n+1})⟩) ∇f(x_{n+1});
  Step 4: n = n + 1;
      if ( n < 3 ) {
           x_{n+1} = q;
          go to Step 2;
        }
     else
      return q;
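A compact C++ sketch of Algorithm 2 is given below. It reuses the illustrative example curve f(x, y) = x^6 + 2y^4 − 4 from the earlier sketches, and the fixed three rounds follow the n = 3 choice discussed next; the helper names are assumptions of this sketch.

```cpp
#include <cmath>

// Sketch of Algorithm 2 (initial iterative value estimation) for the example curve
// f(x,y) = x^6 + 2y^4 - 4; the helper names mirror the earlier sketches.
static double f (double x, double y) { return std::pow(x, 6) + 2.0 * std::pow(y, 4) - 4.0; }
static double fx(double x, double y) { return 6.0 * std::pow(x, 5); }
static double fy(double x, double y) { return 8.0 * std::pow(y, 3); }

// Returns the foot point q that Algorithm 3 later passes to Algorithm 1 as its initial value.
void algorithm2(double px, double py, double x0, double y0, double &qx, double &qy) {
    const double eps2 = 1e-15;                              // termination tolerance of Step 2
    double x = x0, y = y0;
    for (int n = 1; n <= 3; ++n) {                          // three rounds (n = 1, 2, 3)
        // Step 2: basic Newton (steepest descent) loop until the point lies on the curve.
        for (int k = 0; k < 1000; ++k) {
            double gx = fx(x, y), gy = fy(x, y), g2 = gx * gx + gy * gy;
            double nx = x - f(x, y) / g2 * gx, ny = y - f(x, y) / g2 * gy;
            double move = std::hypot(nx - x, ny - y);
            x = nx; y = ny;
            if (move <= eps2) break;
        }
        // Step 3: foot point q of the test point p on the tangent line at the current point.
        double gx = fx(x, y), gy = fy(x, y), g2 = gx * gx + gy * gy;
        double lam = ((px - x) * gx + (py - y) * gy) / g2;
        qx = px - lam * gx; qy = py - lam * gy;
        // Step 4: restart the Newton loop of Step 2 from q for the next round.
        x = qx; y = qy;
    }
}
```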
Thirdly, the reason for choosing n = 3 in Algorithm 2 is explained. Many cases are tested for planar implicit curves with no singular point. As long as n = 2 , the output value from Algorithm 2 could be used as the initial iterative value of Algorithm 1 to get convergence. However, if the planar implicit curve has singular points or big fluctuation and oscillation appear, n = 3 can guarantee the convergence. In a future study, a more optimized and efficient algorithm needs to be developed to automatically specify the integer n.

3.6. Integrated Hybrid Second Order Algorithm

Algorithm 2 can optimize the initial iterative value for Algorithm 1. Then, Algorithm 1 can project the test point p onto the planar implicit curve f(x). The integrated hybrid second order algorithm (Algorithm 3) is presented to take advantage of Algorithms 1 and 2, which are denoted as Algorithm 1(q, p, f(x)) and Algorithm 2(x_0, p, f(x)) for convenience, respectively. Algorithm 3 can be described as follows (see Figure 5).
Figure 5. The entire graphical demonstration for the whole iterative process of Algorithm 3. (a) Initial status; (b) First intermediate status; (c) Second intermediate status; (d) Third intermediate status; (e) Fourth intermediate status; (f) Final status.
Firstly, the notations for Figure 5 are clarified, which describes the entire iterative process in Algorithm 3. The black point is test point p; the green point is orthogonal projection point p Γ ; the blue point is the left-hand side value of the equality of the first step of the iterative Formula (21) in Algorithm 1; footpoint q (red point) is the left-hand side value of the equality of the third step of the iterative Formula (21) in Algorithm 1; and the brown curve represents the planar implicit curve f ( x ) .
Algorithm 3: Integrated hybrid second order algorithm.
  Input: Initial iterative value x 0 , test point p and planar implicit curve f ( x ) = 0 .
  Output: The orthogonal projection point p Γ .
  Description:
  Step 1: q = Algorithm 2 ( x 0 , p , f ( x ) ) ;
  Step 2: p Γ = Algorithm 1 ( q , p , f ( x ) ) ;
  Step 3: return p Γ ;
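In code, Algorithm 3 is just the composition of the two earlier sketches. The fragment below assumes the illustrative helpers algorithm2() and formula21_step() defined in those sketches; they are not routines of any library or of the authors' implementation.

```cpp
#include <cmath>

// Declarations of the illustrative helpers from the earlier sketches.
void algorithm2(double px, double py, double x0, double y0, double &qx, double &qy);
void formula21_step(double px, double py, double &x, double &y);

// Algorithm 3: estimate a good initial value with Algorithm 2, then refine it with Algorithm 1.
void algorithm3(double px, double py, double x0, double y0, double &gx, double &gy) {
    const double eps1 = 1e-7;
    algorithm2(px, py, x0, y0, gx, gy);           // Step 1: initial iterative value estimation
    for (int i = 0; i < 100; ++i) {               // Step 2: Algorithm 1 (Formula (21) loop)
        double ox = gx, oy = gy;
        formula21_step(px, py, gx, gy);
        if (std::hypot(gx - ox, gy - oy) < eps1) break;
    }
}                                                 // Step 3: (gx, gy) now holds p_Gamma
```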
Secondly, Algorithm 3 is interpreted. The output from Algorithm 2 is taken as the initial iterative value for Algorithm 1 (see the footpoint q, the red point, in Figure 4c). Algorithm 1 repeatedly iterates until it satisfies the termination criterion ‖x_{n+1} − x_n‖ < ε (see Figure 5). The six subgraphs in Figure 5 represent successive steps in the entire iterative process of Algorithm 1. In the end, the three points of green, blue and red merge into the orthogonal projection point p_Γ (see Figure 5f).
Remark 3.
Algorithm 3 with its two sub-algorithms is interpreted geometrically, where Algorithms 1 and 2 are graphically demonstrated by Figure 6 and Figure 7, respectively. In Figure 6a and Figure 7a, several closed loops represent the orthogonal projection of the contour lines of the surface z = f(x, y) onto the horizontal plane x y. In Figure 7b,e, several closed loops also represent the orthogonal projection of the contour lines of the surface z = F(x, y) onto the horizontal plane x y. In Figure 6a, the vector starting at the point x_0 is along the gradient ∇f(x_0), and the length of the vector is |f(x_0)|/‖∇f(x_0)‖. For an arbitrary initial iterative point x_0, the iterative formula x_{n+1} = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n) (Step 2 of Algorithm 2) from the steepest descent method repeatedly iterates until the iterative point x_n lies on the planar implicit curve f(x). In Figure 6b, the footpoint q, i.e., the intersection of the tangent line (from the point x_n on the planar implicit curve f(x)) and the perpendicular line (from the test point p), is acquired by Step 3 in Algorithm 2. After the first round iteration of Algorithm 2, replace the initial iterative point x_0 with the footpoint q, and then do the second round and the third round iteration. The three rounds of iteration constitute Algorithm 2 and part of Algorithm 3.
Figure 6. The entire graphical demonstration of Algorithm 2. (a) Step 2 of Algorithm 2; (b) Step 3 of Algorithm 2.
Figure 7. The entire graphical demonstration of Algorithm 1. (a) The first step of the iterative Formula (21); (b) The second step of the iterative Formula (21); (c) The third step of the iterative Formula (21); (d) The fourth step of the iterative Formula (21); (e) The fifth step of the iterative Formula (21); (f) The sixth step of the iterative Formula (21).
In each sub-figure of Figure 7, points p and p_Γ are the test point and the corresponding orthogonal projection point, respectively. In Figure 7a, the vector starting at the point x_n is along the gradient ∇f(x_n), and the length of the vector is |f(x_n)|/‖∇f(x_n)‖. For the initial iterative point x_n from Algorithm 2, the iterative formula y_n = x_n − (f(x_n)/⟨∇f(x_n), ∇f(x_n)⟩) ∇f(x_n) (Step 1 of Algorithm 1) from the steepest descent method iterates once. In Figure 7b, the vector starting at the point y_n is along the gradient ∇F(y_n), and the length of the vector is |F(y_n)|/‖∇F(y_n)‖. For the initial iterative point y_n from Step 1 in Algorithm 1, with F(x) = ((p − x) × ∇f(x))/√⟨∇f(x), ∇f(x)⟩, the iterative formula z_n = y_n − (F(y_n)/⟨∇F(y_n), ∇F(y_n)⟩) ∇F(y_n) (Step 2 of Algorithm 1) iterates once. In Figure 7c, the footpoint q, i.e., the intersection of the tangent line (from the point z_n on the planar implicit curve f(x)) and the perpendicular line (from the test point p), is acquired by Step 3 in Algorithm 1. In the actual iterative process, the point z_n is treated as approximately lying on the planar implicit curve. In Figure 7d, the point u_n comes from the fourth step of Algorithm 1, which aims to obtain a linear orthogonal increment. In Figure 7e, the vector starting at the point u_n is along the gradient ∇F(u_n), and the length of the vector is |F(u_n)|/‖∇F(u_n)‖. For the initial iterative point u_n from Step 4 in Algorithm 1, the iterative formula v_n = u_n − (F(u_n)/⟨∇F(u_n), ∇F(u_n)⟩) ∇F(u_n) (Step 5 of Algorithm 1) iterates once more. In Figure 7f, the iterative point x_{n+1} from the sixth step in Algorithm 1 gives a correction to the iterative point v_n from the fifth step in Algorithm 1. The above six steps are repeated until the iteration exit criteria are met. In the end, the footpoint q, the iterative point x_{n+1} and the orthogonal projection point p_Γ merge into the orthogonal projection point p_Γ. These six steps constitute Algorithm 1 and part of Algorithm 3.

4. Convergence Analysis

In this section, the convergence analysis for the integrated hybrid second order algorithm is presented. Proofs indicate the convergence order of the algorithm is up to two, and Algorithm 3 is independent of the initial value.
Theorem 1.
Given an implicit function f ( x ) that can be parameterized, the convergence order of the iterative Formula (21) is up to two.
Proof. 
Without loss of generality, assume that the parametric representation of the planar implicit curve Γ: f(x) = 0 is c(t) = (f_1(t), f_2(t)). Suppose that the parameter α corresponds to the orthogonal projection point of the test point p = (p_1, p_2) onto the parametric curve c(t) = (f_1(t), f_2(t)).
The first part will derive that the order of convergence of the first step of the iterative Formula (21) is up to two. It is not difficult to write down the corresponding Newton second order parameterized iterative equation, i.e., the first step of the iterative Formula (21):
t_{n+1} = t_n − c(t_n)/c′(t_n). (22)
Taylor expansion around α generates:
c(t_n) = c_0 + c_1 e_n + c_2 e_n^2 + o(e_n^3), (23)
where e_n = t_n − α and c_i = (1/i!) c^{(i)}(α), i = 0, 1, 2. Thus, it is easy to have:
c′(t_n) = c_1 + 2c_2 e_n + o(e_n^2). (24)
From (22)–(24), the error iteration can be expressed as,
e_{n+1} = C_0 e_n^2 + o(e_n^3), (25)
where C_0 = c_2/c_1.
The second part will prove that the order of convergence of the second step of the iterative Formula (21) is two. It is easy to get the corresponding parameterized iterative equation for Newton’s second-order iterative method, essentially the second step of the iterative Formula (21),
t_{n+1} = t_n − F(t_n)/F′(t_n), (26)
where:
F(t) = ⟨p − c(t), c′(t)⟩ = 0. (27)
Using Taylor expansion around α, it is easy to get:
F(t_n) = b_0 + b_1 e_n + b_2 e_n^2 + o(e_n^3), (28)
where e_n = t_n − α and b_i = (1/i!) F^{(i)}(α), i = 0, 1, 2. Thus, it is easy to get:
F′(t_n) = b_1 + 2b_2 e_n + o(e_n^2). (29)
According to Formulas (26)–(29), after Taylor expansion and simplification, the error relationship can be expressed as follows,
e_{n+1} = C_1 e_n^2 + o(e_n^3), (30)
where C_1 = b_2/b_1. Because the fifth step is completely analogous to the second step of the iterative Formula (21), and the outputs of Newton’s iterative method are closely related to the test point p, the order of convergence of the fifth step of the iterative Formula (21) is also two.
The third part will derive that the order of convergence of the third step and fourth step of the iterative Formula (21) is one. According to the first order method for orthogonal projection onto a parametric curve [32,39,40], the footpoint q = (q_1, q_2) of the parameterized iterative equation of the third step of the iterative Formula (21) can be expressed in the following way,
q = c(t_n) + Δt c′(t_n). (31)
From the iterative Equation (31), combined with the fourth step of the iterative Formula (21), it is easy to have:
Δt = ⟨c′(t_n), q − c(t_n)⟩/⟨c′(t_n), c′(t_n)⟩, (32)
where ⟨x, y⟩ denotes the scalar product of vectors x, y ∈ R^2. Let t_n + Δt → t_n, and repeat the procedure (32) until Δt is less than a given tolerance ε. Because the parameter α is the orthogonal projection point of the test point p = (p_1, p_2) onto the parametric curve c(t) = (f_1(t), f_2(t)), it is not difficult to verify,
⟨p − c(α), c′(α)⟩ = 0. (33)
Because the footpoint q is the intersection of the tangent line of the parametric curve c(t) at t = t_n and the perpendicular line p q determined by the test point p, the equation of the tangent line of the parametric curve c(t) at t = t_n is:
x_1 = f_1(t_n) + f_1′(t_n) s, x_2 = f_2(t_n) + f_2′(t_n) s. (34)
At the same time, the vector of the line segment connecting the test point p and the point c(t_n) is:
(y_1, y_2) = (p_1 − x_1, p_2 − x_2). (35)
The vector (35) and the tangent vector c′(t_n) = (f_1′(t_n), f_2′(t_n)) of the tangent line (34) are mutually orthogonal, so the parameter value s_0 of the tangent line (34) is:
s_0 = ⟨p − c(t_n), c′(t_n)⟩/⟨c′(t_n), c′(t_n)⟩. (36)
Substituting (36) into (34) and simplifying, it is not difficult to get the footpoint q = (q_1, q_2),
q_1 = f_1(t_n) + f_1′(t_n) s_0, q_2 = f_2(t_n) + f_2′(t_n) s_0. (37)
Substituting (37) into (32) and simplifying, it is easy to obtain,
Δt = ⟨p − c(t_n), c′(t_n)⟩/⟨c′(t_n), c′(t_n)⟩. (38)
From (33), combined with (38), and using Taylor expansion by the symbolic computation software Maple 18, it is easy to get:
Δt = −((2c_2(c_0 − p) + c_1^2)/c_1^2) e_n + o(e_n^2). (39)
Combining (39) with e_{n+1} = e_n + Δt and simplifying, it is easy to obtain:
e_{n+1} = −(2c_2(c_0 − p)/c_1^2) e_n + o(e_n^2) = C_2 e_n + o(e_n^2), (40)
where the symbol C_2 denotes the coefficient of the first order error e_n on the right-hand side of Formula (40). The result shows that the third step and the fourth step of the iterative Formula (21) together yield first order convergence. According to the iterative Formula (21), combined with the three error iteration relationships (25), (30) and (40), the convergence order of each sub-step is not more than two. Then, the iterative error relationship of the iterative Formula (21) can be expressed as follows:
e_{n+1} = C_0 C_1 C_2 e_n^2 + o(e_n^3). (41)
To sum up, the convergence order of the iterative Formula (21) is up to two. ☐
Theorem 2.
In terms of its convergence, the hybrid second order algorithm (Algorithm 1) is a compromise method between the local and the global method.
Proof. 
The third step and fourth step of the iterative Formula (21) of Algorithm 1 are equivalent to the foot point algorithm for implicit curves in [32]. The work in [14] has explained that the foot point algorithm for the implicit curve proposed in [14] is, in terms of convergence, a compromise method between the local and global method. Then, Algorithm 1 is also a compromise method between the local and global method. Namely, if a test point is close to the foot point of the planar implicit curve, the convergence of Algorithm 1 is independent of the initial iterative value, and if not, the convergence of Algorithm 1 depends on the initial iterative value. The sixth step in Algorithm 1 promotes the robustness. However, the third step, the fourth step and the sixth step in Algorithm 1 still constitute a compromise method between the local and global ones. Certainly, the first step (steepest descent method) of Algorithm 1 can make the iterative point fall on the planar implicit curve and improves its robustness. The second step and the fifth step constitute the classical Newton iterative method to accelerate convergence and improve robustness in some way. The steepest descent method of the first step and Newton’s iterative method of the second step and the fifth step in Algorithm 1 are more robust and efficient, but they cannot change the fact that Algorithm 1 is a compromise method between the local and global ones. To sum up, Algorithm 1 is a compromise method between the local and global ones. ☐
Theorem 3.
The convergence of the integrated hybrid second order algorithm (Algorithm 3) is independent of the initial iterative value.
Proof. 
The integrated hybrid second order algorithm (Algorithm 3) is composed of two sub-algorithms (Algorithm 1 and Algorithm 2). From Theorem 2, Algorithm 1 is a compromise method between the local and global method. Of course, whether the test point p is very far away from or close to the planar implicit curve f(x), if the initial iterative value lies close to the orthogonal projection point p_Γ, Algorithm 1 converges. In any case, Algorithm 2 can move the initial iterative value of Algorithm 1 sufficiently close to the orthogonal projection point p_Γ to ensure the convergence of Algorithm 1. In this way, Algorithm 3 can converge for any initial iterative value. Therefore, the convergence of the integrated hybrid second order algorithm (Algorithm 3) is independent of the initial value. ☐

5. Results of the Comparison

Example 1.
([14]) Assume a planar implicit curve Γ: f(x, y) = (y^5 + x^3 − x^2 + 4/27)(x^2 + 1) = 0. One thousand six hundred test points from the square [−2, 2] × [−2, 2] are taken. The integrated hybrid second order algorithm (Algorithm 3) can orthogonally project all 1600 points onto the planar implicit curve Γ. It satisfies the relationships |f(p_Γ)| < 10^{−10} and ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}.
It consists of two steps to select/sample test points:
(1) Uniformly divide the planar square [−2, 2] × [−2, 2] of the planar implicit curve into m^2 = 1600 sub-regions [a_i, a_{i+1}] × [c_j, c_{j+1}], i, j = 0, 1, 2, …, m − 1, where a = a_0 = −2, a_{i+1} − a_i = (b − a)/m = 1/10, b = a_m = 2, c = c_0 = −2, c_{j+1} − c_j = (d − c)/m = 1/10, d = c_m = 2.
(2) Randomly select a test point in each sub-region and then an initial iterative value in its vicinity.
The same procedure to select/sample test points applies for the other examples below; a short code sketch of the sampling follows.
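As an illustration, the two-step sampling procedure above can be coded as follows; the random generator, its seed and the per-cell uniform sampling are assumptions of this sketch, not details given in the paper.

```cpp
#include <random>
#include <vector>

// Split [a,b] x [c,d] into m x m cells and draw one random test point per cell.
// For Example 1: sample_test_points(-2.0, 2.0, -2.0, 2.0, 40, seed) gives the 1600 test points.
struct Point2 { double x, y; };

std::vector<Point2> sample_test_points(double a, double b, double c, double d, int m, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<Point2> pts;
    pts.reserve(static_cast<std::size_t>(m) * m);
    double hx = (b - a) / m, hy = (d - c) / m;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < m; ++j)
            pts.push_back({a + (i + u(gen)) * hx, c + (j + u(gen)) * hy});
    return pts;
}
```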
One test point p = (−0.1, 1.0) in the first case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (−0.47144354751227009, 0.70879213227958752), and the initial iterative values x_0 are (−0.1,0.8), (−0.1,0.9), (−0.1,1.1), (−0.1,1.2), (−0.2,0.8), (−0.2,0.9), (−0.2,1.1) and (−0.2,1.2), respectively. Each initial iterative value is run 12 times, yielding 12 different iteration times in nanoseconds. In Table 3, the average running times of Algorithm 3 for the eight different initial iterative values are 1,099,243, 582,078, 525,942, 490,537, 392,090, 364,817, 369,739 and 367,654 nanoseconds, respectively. In the end, the overall average running time is 524,013 nanoseconds, while the overall average running time of the circle shrinking algorithm in [14] is 8.9 ms under the same initial iteration condition.
Table 3. Running time for different initial iterative values by Algorithm 3 in Example 1.
The iterative error analysis for the test point p = (−0.1, 1.0) under the same condition is presented in Table 4, with the initial iterative points in the first row. The distance function √⟨x_n − p_Γ, x_n − p_Γ⟩ is used to compute the error values in the rows other than the first one, and the other examples below apply the same distance criterion. The left column in Table 4 denotes the corresponding number of iterations, which is the same for Tables 8–15.
Table 4. The error analysis of the iteration process of Algorithm 3 in Example 1.
Another test point p = ( 0.2 , 1.0 ) in the second case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p Γ = (−0.42011639143389254, 0.63408011508207950), and the initial iterative values x 0 are (0.3,0.9), (0.3,1.2), (0.4,0.9), (0.3,0.7), (0.1,0.8), (0.1,0.6), (0.4,1.1), (0.4,1.3), respectively. Each initial iterative value iterates 10 times, respectively, yielding 10 different iteration times in nanoseconds. In Table 5, the average running times of Algorithm 3 for eight different initial iterative values are 1,152,664, 844,250, 525,540, 1,106,098, 1,280,232, 1,406,429, 516,779 and 752,429 nanoseconds, respectively. In the end, the overall average running time is 948,053 nanoseconds, while the overall average running time of the circle shrinking algorithm in [14] is 12.6 ms under the same initial iteration condition.
Table 5. Running times for different initial iterative values by Algorithm 3 in Example 1.
The third test point p = (0.1, 0.1) in the third case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (0.33334322619432892, 0.099785192603767206), and the initial iterative values x_0 are (0.1,0.2), (0.1,0.3), (0.1,0.4), (0.2,0.2), (0.2,0.3), (0.3,0.2), (0.3,0.3), (0.3,0.4), respectively. Each initial iterative value is run 12 times, yielding 12 different iteration times in nanoseconds. In Table 6, the average running times of Algorithm 3 for the eight different initial iterative values are 183,515, 680,338, 704,694, 192,564, 601,235, 161,127, 713,697 and 1,034,443 nanoseconds, respectively. In the end, the overall average running time is 533,952 nanoseconds, while the overall average running time of the circle shrinking algorithm in [14] is 9.4 ms under the same initial iteration condition.
Table 6. Running times for different initial iterative values by Algorithm 3 in Example 1.
To sum up, Algorithm 3 is faster than the circle shrinking algorithm in [14] (see Figure 8).
Figure 8. Graphic demonstration for Example 1.
Example 2.
Assume a planar implicit curve Γ: f(x, y) = x^6 + 4xy + 2y^18 − 1 = 0. Nine hundred test points from the square [−1.5, 1.5] × [−1.5, 1.5] are taken. Algorithm 3 can correctly orthogonally project all 900 points onto the planar implicit curve Γ. It satisfies the relationships |f(p_Γ)| < 10^{−10} and ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}. One test point p = (−1.5, 0.5) in this case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (−1.2539379406252056281, 0.57568037362837924613), and the initial iterative values x_0 are (−1.4,0.6), (−1.3,0.7), (−1.2,0.6), (−1.6,0.4), (−1.4,0.7), (−1.4,0.3), (−1.3,0.6), (−1.2,0.8), respectively. Each initial iterative value is run 10 times, yielding 10 different iteration times in nanoseconds. In Table 7, the average running times of Algorithm 3 for the eight different initial iterative values are 4,487,449, 4,202,203, 4,555,396, 4,533,326, 4,304,781, 4,163,107, 4,268,792 and 4,378,470 nanoseconds, respectively. In the end, the overall average running time is 4,361,691 nanoseconds (see Figure 9).
Table 7. Running times for different initial iterative values by Algorithm 3 in Example 2.
Figure 9. Graphic demonstration for Example 2.
The iterative error analysis for the test point p = (−1.5,0.5) under the same condition is presented in Table 8 with initial iterative points in the first row.
Table 8. The error analysis of the iteration process of Algorithm 3 in Example 2.
Example 3.
Assume a planar implicit curve Γ: f(x, y) = 12(x − 2)^8 + (x − 2)(y − 3) − (y − 3)^4 − 1 = 0. Three thousand six hundred points from the region [0.0, 4.0] × [3.0, 6.0] are taken. Algorithm 3 can orthogonally project all 3600 points onto the planar implicit curve Γ. It satisfies the relationships |f(p_Γ)| < 10^{−10} and ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}. One test point p = (−5.0, −4.0) in this case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (−0.027593939033081903, −4.6597845115690539), and the initial iterative values x_0 are (−12,−7), (−3,−5), (−5,−4), (−6.6,−9.9), (−2,−7), (−11,−6), (−5.6,−2.3), (−4.3,−5.7), respectively. Each initial iterative value is run 10 times, yielding 10 different iteration times in nanoseconds. In Table 9, the average running times of Algorithm 3 for the eight different initial iterative values are 299,569, 267,569, 290,719, 139,263, 125,962, 149,431, 289,643 and 124,885 nanoseconds, respectively. In the end, the overall average running time is 210,880 nanoseconds (see Figure 10).
Table 9. Running times for different initial iterative values by Algorithm 3 in Example 3.
Figure 10. Graphic demonstration for Example 3.
The iterative error analysis for the test point p = (−5, −4) under the same condition is presented in Table 10 with the initial iterative points in the first row.
Table 10. The error analysis of the iteration process of Algorithm 3 in Example 3.
Example 4.
Assume a planar implicit curve Γ: f(x, y) = x^6 + 2x^5y − 2x^3y^2 + x^4y^3 + 2y^8 − 4 = 0. Two thousand one hundred test points from the region [−2.0, 4.0] × [−2.0, 1.5] are taken. Algorithm 3 can orthogonally project all 2100 points onto the planar implicit curve Γ. It satisfies the relationships |f(p_Γ)| < 10^{−10} and ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}. One test point p = (2.0, −2.0) in this case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (2.1654788271485294, −1.5734131236664724), and the initial iterative values x_0 are (2.2,−2.1), (2.3,−1.9), (2.4,−1.8), (2.1,−2.3), (2.4,−1.6), (2.3,−1), (1.6,−2.5), (2.6,−2.5), respectively. Each initial iterative value is run 10 times, yielding 10 different iteration times in nanoseconds. In Table 11, the average running times of Algorithm 3 for the eight different initial iterative values are 403,539, 442,631, 395,384, 253,156, 241,510, 193,592, 174,340 and 187,362 nanoseconds, respectively. In the end, the overall average running time is 286,439 nanoseconds (see Figure 11).
Table 11. Running times for different initial iterative values by Algorithm 3 in Example 4.
Figure 11. Graphic demonstration for Example 4.
The iterative error analysis for the test point p = (2, −2) under the same condition is presented in Table 12 with the initial iterative points in the first row.
Table 12. The error analysis of the iteration process of Algorithm 3 in Example 4.
Example 5.
Assume a planar implicit curve Γ: f(x, y) = x^15 + 2x^5y − 2x^3y^2 + x^4y^3 − 4y^18 − 4 = 0. Two thousand four hundred test points from the region [0, 3] × [−3, 3] are taken. Algorithm 3 can orthogonally project all 2400 points onto the planar implicit curve Γ. It satisfies the relationships |f(p_Γ)| < 10^{−10} and ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}.
One test point p = (12, −20) in this case is specified. Using Algorithm 3, the corresponding orthogonal projection point is p_Γ = (16.9221067487652, −9.77831982969495), and the initial iterative values x_0 are (12,−20), (3,−5), (5,−4), (66,−99), (14,−21), (11,−6), (56,−23), (13,−7), respectively. Each initial iterative value is run 10 times, yielding 10 different iteration times in nanoseconds. In Table 13, the average running times of Algorithm 3 for the eight different initial iterative values are 285,449, 447,036, 405,726, 451,383, 228,491, 208,624, 410,489 and 224,141 nanoseconds, respectively. In the end, the overall average running time is 332,667 nanoseconds (see Figure 12).
Table 13. Running times for different initial iterative values by Algorithm 3 in Example 5.
Figure 12. Graphic demonstration for Example 5.
The iterative error analysis for the test point p = (12, −20) under the same condition is presented in Table 14 with the initial iterative points in the first row.
Table 14. The error analysis of the iteration process of Algorithm 3 in Example 5.
Example 6.
Assume a planar implicit curve Γ: f(x, y) = x^6 + 2y^4 − 4 = 0. One spatial test point p = (2.0, 1.5, 5) in this case is specified; orthogonally projecting it onto the plane x y gives the planar test point p = (2.0, 1.5). Using Algorithm 3, the corresponding orthogonal projection point on the plane x y is p_Γ = (1.1436111944138613, 0.96895628133918197), and it satisfies the two relationships |f(x_{n+1})| < 1.2 × 10^{−14} and |⟨p − x_{n+1}, t⟩| < 1.2 × 10^{−15}. In the iterative error Table 15, the six points (1,1), (1.5,1.5), (−1,1), (1,−1), (1.5,1), (1,1.5) in the first row are the initial iterative points x_0 of Algorithm 3. In Figure 13, the red, green and blue points are the spatial test point, the planar test point and their common corresponding orthogonal projection point, respectively. Assume the surface z = f(x, y) with two free variables x and y. The yellow curve is the planar implicit curve f(x, y) = 0.
Table 15. The error analysis of the iteration process of Algorithm 3 in Example 6.
Figure 13. Graphic demonstration for Example 6.
Remark 4.
In the 22 tables, all computations were done by using g++ in the Fedora Linux 8 environment. The iterative termination criteria for Algorithm 1 and Algorithm 2 are ε_1 = 10^{−7} and ε_2 = 10^{−15}, respectively. Examples 1–6 were computed on a personal computer with an Intel i7-4700 3.2-GHz CPU and 4.0 GB of memory.
In Examples 2–6, since the degree of every planar implicit curve is more than five, it is difficult to compute the intersection, required by the circle shrinking algorithm in [14], between the planar implicit curve and the line segment determined by the test point p and the point p+; for this reason, the running time comparison with the algorithm in [14] was not done for these examples. For the same kind of reason, the running time comparison with the circle double-and-bisect algorithm in [36] has not been done, because it is difficult to solve the intersection between the circle and the planar implicit curve required by that algorithm. In addition, many methods (Newton’s method, the geometrically-motivated method [31,32], the osculating circle algorithm [33], the Bézier clipping method [25,26,27], etc.) cannot guarantee complete convergence for Examples 2–5, so the running time comparison test for the methods in [25,26,27,31,32,33] has not been done either. From Table 2 in [36], the circle shrinking algorithm in [14] is faster than the existing methods, while Algorithm 3 is faster than the circle shrinking algorithm in [14] in Example 1. Then, Algorithm 3 is faster than the existing methods. Furthermore, Algorithm 3 is more robust and efficient than the existing methods.
Besides, it is not difficult to find that if the test point p is close to the planar implicit curve, the initial iterative point x_0 is close to the test point p, the planar implicit curve has a lower degree and fewer terms, and the iteration precision is lower, then Algorithm 3 will use less total average running time. Otherwise, Algorithm 3 will use more time.
Remark 5.
Algorithm 3 essentially makes an orthogonal projection of a test point onto a planar implicit curve Γ: f(x) = 0. For the situation with multiple orthogonal projection points, the basic idea of the authors’ approach is as follows (a code sketch follows the list):
(1) Divide a planar region [a, b] × [c, d] of the planar implicit curve into m^2 sub-regions [a_i, a_{i+1}] × [c_j, c_{j+1}], i, j = 0, 1, 2, …, m − 1, where a = a_0, a_{i+1} − a_i = (b − a)/m, b = a_m, c = c_0, c_{j+1} − c_j = (d − c)/m, d = c_m.
(2) Randomly select an initial iterative value in each sub-region.
(3) Using Algorithm 3 with each initial iterative value, do the iterations, respectively. Assume that the corresponding orthogonal projection points are p_Γ^{ij}, i, j = 0, 1, 2, …, m − 1, respectively.
(4) Compute the local minimum distances d_{ij}, i, j = 0, 1, 2, …, m − 1, where d_{ij} = ‖p − p_Γ^{ij}‖.
(5) Compute the global minimum distance d = ‖p − p_Γ‖ = min{d_{ij}}, i, j = 0, 1, 2, …, m − 1.
To find as many solutions as possible, a larger value of m is taken.
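The following C++ sketch mirrors the five steps above. It assumes the illustrative algorithm3() helper from the earlier sketch and, for reproducibility, seeds each sub-region at its centre instead of at a random location.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

void algorithm3(double px, double py, double x0, double y0, double &gx, double &gy);  // earlier sketch

// Steps (1)-(5): one Algorithm 3 run per sub-region of [a,b] x [c,d]; the smallest d_ij is returned.
double global_min_distance(double px, double py,
                           double a, double b, double c, double d, int m) {
    double best = std::numeric_limits<double>::infinity();
    double hx = (b - a) / m, hy = (d - c) / m;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < m; ++j) {
            double x0 = a + (i + 0.5) * hx, y0 = c + (j + 0.5) * hy;   // seed for sub-region (i, j)
            double gx, gy;
            algorithm3(px, py, x0, y0, gx, gy);                        // projection point p_Gamma^{ij}
            best = std::min(best, std::hypot(px - gx, py - gy));       // local distance d_{ij}
        }
    return best;                                                       // global minimum distance d
}
```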
Remark 6.
In Example 1, for the test points (−0.1,1.0), (0.2,1.0), (0.1,0.1), (0.45,0.5), by using Algorithm 3, the corresponding orthogonal projection points p_Γ are (−0.47144354751227009, 0.70879213227958752), (−0.42011639143389254, 0.63408011508207950), (0.33334322619432892, 0.099785192603767206), (0.34352305539212918, 0.401230229163152532), respectively (see Figure 14 and Table 16). In addition to the six test examples, many other examples have also been tested. According to these results, if the test point p is close to the planar implicit curve f(x), then for different initial iterative values x_0, which are also close to the corresponding orthogonal projection point p_Γ, Algorithm 3 converges to the corresponding orthogonal projection point p_Γ; namely, the test point p and its corresponding orthogonal projection point p_Γ satisfy the inequality relationships:
|f(p_Γ)| < 10^{−10}, ‖(p − p_Γ) × ∇f(p_Γ)‖ < 10^{−10}. (42)
Figure 14. Graphic demonstration for the singular point case of Algorithm 3.
Table 16. Distance for the singular point case of Algorithm 3.
Thus, it illustrates that the convergence of Algorithm 3 is independent of the initial value and Algorithm 3 is efficient. In sum, the algorithm can meet the top two of the ten challenges proposed by Professor Les A. Piegl [41] in terms of robustness and efficiency.
Remark 7.
From the authors’ six test examples, Algorithm 3 is robust and efficient. If the test point p is very far away from the planar implicit curve and the degree of the planar implicit curve is very high, Algorithm 3 also converges; however, the inequality relationships (42) may not be satisfied simultaneously. In addition, if the planar implicit curve contains singular points, Algorithm 3 only works well for a test point p in a suitable position. Namely, for any initial iterative point x_0, the test point p can be orthogonally projected onto the planar implicit curve, but with a distance ‖p − p_Γ‖ larger than the minimum distance ‖p − p_s‖ between the test point and the singular point p_s. For example, for the test points (1.0,0.01), (0.6,0.1), (0.5,−0.15), (0.8,−0.1), Algorithm 3 gives the corresponding orthogonal projection points p_Γ as (0.66370473801453017, 0.092784537693334545), (0.66704812931370775, 0.097528910436113817), (0.663704738014530, 0.13435089298485379), (0.66418591136724639, 0.090702201378858334), respectively. However, the actual corresponding orthogonal projection point of the four test points is the singular point (0.66666666666666667, 0.0) (see Figure 14 and Table 16).
Remark 8.
This remark is added to numerically validate the convergence order of two, thanks to the reviewers’ insightful comments, which corrected the previous wrong calculation of the convergence order. The iterative error ratios for the test point p = (−0.1, 1.0) in Example 1 are presented in Table 17 with the initial iterative points in the first row. The formula ln(√⟨x_{n+1} − p_Γ, x_{n+1} − p_Γ⟩/√⟨x_n − p_Γ, x_n − p_Γ⟩) is used to compute the error ratio for each iteration in the rows other than the first one, which is the same for Table 18, Table 19, Table 20, Table 21 and Table 22. From the six tables, combined with the order of convergence formula ρ ≈ ln(‖x_{n+1} − p_Γ‖/‖x_n − p_Γ‖)/ln(‖x_n − p_Γ‖/‖x_{n−1} − p_Γ‖), it is not difficult to find that the order of convergence for each example is approximately between one and two, which verifies Theorem 1. The convergence formula ρ comes from [42], i.e., ρ ≈ ln(‖x_{n+1} − α‖/‖x_n − α‖)/ln(‖x_n − α‖/‖x_{n−1} − α‖).
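The error-ratio computation of this remark can be reproduced with a few lines of C++. The container of iterates and the limit point p_Γ are assumed to come from an instrumented run of Algorithm 3; the structure and function names are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Prints rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) with e_n = ||x_n - p_Gamma|| for every iterate.
void print_convergence_orders(const std::vector<Pt> &iterates, Pt pGamma) {
    std::vector<double> e;
    for (const Pt &p : iterates) e.push_back(std::hypot(p.x - pGamma.x, p.y - pGamma.y));
    for (std::size_t n = 1; n + 1 < e.size(); ++n) {
        double rho = std::log(e[n + 1] / e[n]) / std::log(e[n] / e[n - 1]);
        std::printf("n = %zu  rho = %.3f\n", n, rho);
    }
}
```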
Table 17. The error ratios for each iteration in Example 1 of Algorithm 3.
Table 18. The error ratios for each iteration in Example 2 of Algorithm 3.
Table 19. The error ratios for each iteration in Example 3 of Algorithm 3.
Table 20. The error ratios for each iteration in Example 4 of Algorithm 3.
Table 21. The error ratios for each iteration in Example 5 of Algorithm 3.
Table 22. The error ratios for each iteration in Example 6 of Algorithm 3.

6. Conclusions

This paper investigates the problem of projecting a point onto a planar implicit curve. The integrated hybrid second order algorithm is proposed, which is composed of two sub-algorithms (the hybrid second order algorithm and the initial iterative value estimation algorithm). For any test point p and any planar implicit curve, with or without singular points, and whether the test point p is close to or very far away from the planar implicit curve, the integrated hybrid second order algorithm converges. It is proven that the convergence of Algorithm 3 is independent of the initial value. Convergence analysis of the integrated hybrid second order algorithm demonstrates that its convergence order is up to two. Numerical examples illustrate that the algorithm is robust and efficient.

7. Future Work

Future work is to construct a brand-new algorithm that, for any initial iterative point, any test point in any position of the plane, and any planar implicit curve (including curves containing singular points and curves of arbitrarily high degree), meets three requirements: (1) it converges, and the orthogonal projection point simultaneously satisfies the three relationships of Formula (11); (2) it handles singularities very effectively; (3) it takes less time than the current Algorithm 3. Of course, finding this kind of algorithm will be very challenging.
Another potential topic for future research is to develop a more efficient method to compute the minimum distance between a point and a spatial implicit curve or a spatial implicit surface. The new method must satisfy the same three requirements regarding convergence, effectiveness in handling singularities and efficiency.

Author Contributions

All authors contributed equally and worked together to develop the present manuscript.

Funding

This research was funded by [National Natural Science Foundation of China] grant number [71772106], [Scientific and Technology Foundation Funded Project of Guizhou Province] grant number [[2014]2093], [The Feature Key Laboratory for Regular Institutions of Higher Education of Guizhou Province] grant number [[2016]003], [Training Center for Network Security and Big Data Application of Guizhou Minzu University] grant number [20161113006], [Shandong Provincial Natural Science Foundation of China] grant number [ZR2016GM24], [Scientific and Technology Key Foundation of Taiyuan Institute of Technology] grant number [2016LZ02], [Fund of National Social Science] grant number [14XMZ001] and [Fund of the Chinese Ministry of Education] grant number [15JZD034].

Acknowledgments

We take the opportunity to thank the anonymous reviewers for their thoughtful and meaningful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gomes, A.J.; Morgado, J.F.; Pereira, E.S. A BSP-based algorithm for dimensionally nonhomogeneous planar implicit curves with topological guarantees. ACM Trans. Graph. 2009, 28, 1–24. [Google Scholar] [CrossRef]
  2. Taubin, G. Distance approximations for rasterizing implicit curves. ACM Trans. Graph. 1994, 13, 342. [Google Scholar]
  3. Gourmel, O.; Barthe, L.; Cani, M.P.; Wyvill, B.; Bernhardt, A.; Paulin, M.; Grasberger, H. A gradient-based implicit blend. ACM Trans. Graph. 2013, 32, 12. [Google Scholar] [CrossRef]
  4. Li, Q.; Tian, J. 2D piecewise algebraic splines for implicit modeling. ACM Trans. Graph. 2009, 28, 13. [Google Scholar] [CrossRef]
  5. Manocha, D.; Demmel, J. Algorithms for intersecting parametric and algebraic curves I: Simple intersections. ACM Trans. Graph. 1994, 13, 73–100. [Google Scholar]
  6. Krishnan, S.; Manocha, D. An efficient surface intersection algorithm based on lower-dimensional formulation. ACM Trans. Graph. 1997, 16, 74–106. [Google Scholar] [CrossRef]
  7. Shene, C.-K.; John, K.J. On the lower degree intersections of two natural quadrics. ACM Trans. Graph. 1994, 13, 400–424. [Google Scholar] [CrossRef]
  8. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Des. 2012, 29, 265–279. [Google Scholar]
  9. Rueda, S.L.; Sendra, J.; Sendra, J.R. Bounding and estimating the Hausdorff distance between real space algebraic curves. Comput. Aided Geom. Des. 2014, 31, 182–198. [Google Scholar]
  10. Goldman, R. Curvature formulas for implicit curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 632–658. [Google Scholar]
  11. Sederberg, T.W.; Zheng, J.; Klimaszewski, K.; Dokken, T. Approximate implicitization using monoid curves and surfaces. Graph. Mod. Image Proc. 1999, 61, 177–198. [Google Scholar]
  12. Blažková, E.; Šír, Z. Identifying and approximating monotonous segments of algebraic curves using support function representation. Comput. Aided Geom. Des. 2014, 31, 358–372. [Google Scholar]
  13. Anderson, I.J.; Cox, M.G.; Forbes, A.B.; Mason, J.C.; Turner, D.A. An Efficient and Robust Algorithm for Solving the Foot Point Problem. In Proceedings of the International Conference on Mathematical Methods for Curves and Surfaces II Lillehammer, Lillehammer, Norway, 3–8 July 1997; pp. 9–16. [Google Scholar]
  14. Aigner, M.; Jüttler, B. Robust computation of foot points on implicitly defined curves. In Mathematical Methods for Curves and Surfaces: Tromsø; Nashboro Press: Brentwood, TN, USA, 2004; pp. 1–10. [Google Scholar]
  15. Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; Vetterling, W.T. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed.; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
  16. Sullivan, S.; Sandford, L.; Ponce, J. Using geometric distance fits for 3-D object modeling and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 1183–1196. [Google Scholar]
  17. Morgan, A.P. Polynomial continuation and its relationship to the symbolic reduction of polynomial systems. In Symbolic and Numerical Computation for Artificial Intelligence; Academic Press: Cambridge, MA, USA, 1992; pp. 23–45. [Google Scholar]
  18. Watson, L.T.; Billups, S.C.; Morgan, A.P. Algorithm 652: HOMPACK: A suite of codes for globally convergent homotopy algorithms. ACM Trans. Math. Softw. 1987, 13, 281–310. [Google Scholar]
  19. Horn, B.K.P. Relative orientation revisited. J. Opt. Soc. Am. A 1991, 8, 1630–1638. [Google Scholar]
  20. Manocha, D.; Krishnan, S. Solving algebraic systems using matrix computations. ACM SIGSAM Bull. 1996, 30, 4–21. [Google Scholar]
  21. Chionh, E.-W. Base Points, Resultants, and the Implicit Representation of Rational Surfaces. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 1990. [Google Scholar]
  22. De Montaudouin, Y.; Tiller, W. The Cayley method in computer aided geometric design. Comput. Aided Geom. Des. 1984, 1, 309–326. [Google Scholar] [CrossRef]
  23. Albert, A.A. Modern Higher Algebra; D.C. Heath and Company: New York, NY, USA, 1933. [Google Scholar]
  24. Sederberg, T.W.; Anderson, D.C.; Goldman, R.N. Implicit representation of parametric curves and surfaces. Comput. Vis. Graph. Image Proc. 1984, 28, 72–84. [Google Scholar]
  25. Nishita, T.; Sederberg, T.W.; Kakimoto, M. Ray tracing trimmed rational surface patches. ACM SIGGRAPH Comput. Graph. 1990, 24, 337–345. [Google Scholar] [CrossRef]
  26. Elber, G.; Kim, M.-S. Geometric Constraint Solver Using Multivariate Rational Spline Functions. In Proceedings of the 6th ACM Symposium on Solid Modeling and Applications, Ann Arbor, MI, USA, 4–8 June 2001; pp. 1–10. [Google Scholar]
  27. Sherbrooke, E.C.; Patrikalakis, N.M. Computation of the solutions of nonlinear polynomial systems. Comput. Aided Geom. Des. 1993, 10, 379–405. [Google Scholar] [CrossRef]
  28. Park, C.-H.; Elber, G.; Kim, K.-J.; Kim, G.Y.; Seong, J.K. A hybrid parallel solver for systems of multivariate polynomials using CPUs and GPUs. Comput. Aided Des. 2011, 43, 1360–1369. [Google Scholar] [CrossRef]
  29. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput. Aided Des. 2011, 43, 1870–1878. [Google Scholar]
  30. Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Des. 2017, 90, 37–47. [Google Scholar] [CrossRef]
  31. Hartmann, E. The normal form of a planar curve and its application to curve design. In Mathematical Methods for Curves and Surfaces II; Vanderbilt University Press: Nashville, TN, USA, 1997; pp. 237–244. [Google Scholar]
  32. Hartmann, E. On the curvature of curves and surfaces defined by normal forms. Comput. Aided Geom. Des. 1999, 16, 355–376. [Google Scholar] [CrossRef]
  33. Redding, N.J. Implicit polynomials, orthogonal distance regression, and the closest point on a curve. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 191–199. [Google Scholar]
  34. Hu, S.-M.; Wallner, J. A second order algorithm for orthogonal projection onto curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 251–260. [Google Scholar] [CrossRef]
  35. Li, X.; Wang, L.; Wu, Z.; Hou, L.; Liang, J.; Li, Q. Convergence analysis on a second order algorithm for orthogonal projection onto curves. Symmetry 2017, 9, 210. [Google Scholar] [CrossRef]
  36. Hu, M.; Zhou, Y.; Li, X. Robust and accurate computation of geometric distance for Lipschitz continuous implicit curves. Vis. Comput. 2017, 33, 937–947. [Google Scholar] [CrossRef]
  37. Chen, X.-D.; Yong, J.-H.; Wang, G.; Paul, J.C.; Xu, G. Computing the minimum distance between a point and a NURBS curve. Comput. Aided Des. 2008, 40, 1051–1054. [Google Scholar] [CrossRef]
  38. Chen, X.-D.; Xu, G.; Yong, J.-H.; Wang, G.; Paul, J.C. Computing the minimum distance between a point and a clamped B-spline surface. Graph. Mod. 2009, 71, 107–112. [Google Scholar] [CrossRef]
  39. Hoschek, J.; Lasser, D.; Schumaker, L.L. Fundamentals of Computer Aided Geometric Design; A. K. Peters, Ltd.: Natick, MA, USA, 1993. [Google Scholar]
  40. Hu, S.; Sun, J.; Jin, T.; Wang, G. Computing the parameter of points on NURBS curves and surfaces via moving affine frame method. J. Softw. 2000, 11, 49–53. (In Chinese) [Google Scholar]
  41. Piegl, L.A. Ten challenges in computer-aided design. Comput. Aided Des. 2005, 37, 461–470. [Google Scholar] [CrossRef]
  42. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
