A New Objective Function for the Recovery of Gielis Curves

Abstract: The superformula generates curves called Gielis curves, which depend on a small number of input parameters. Recovering the parameters of a curve that fits a set of points is a non-trivial task, and methods to accomplish it are still being developed. These curves can represent a great variety of forms, such as living organisms, objects and geometric shapes. In this work we propose a method that uses a genetic algorithm to minimize a combination of three objective functions: the Euclidean distances from the sample points to the curve, from the curve to the sample points, and the curve length. Curves generated with the parameters obtained by this method fit real curves better than the state of the art, according to visual and numeric comparisons.


Introduction
The importance of detecting and representing shapes is recognized by several research communities, such as engineering [1,2] and computer science [3]. In [2], superquadric shapes (such as cuboids and lozenges) were studied numerically. Gielis curves can describe shapes found in nature and shapes made by man, including geometric figures [4]. This makes them applicable in areas such as botany, and they have even been used for road sign recognition [5]. In other works [6,7], an infinite plasmonic cylinder (whose cross-section is described by the Gielis superformula) is optimized.
Duda and Hart defined pattern recognition as "the field concerned with machine recognition of meaningful regularities in noisy or complex environments" [8]; this includes shape recovery. One of the most common ways to represent shapes is with superquadrics [9,10]. Superellipses represent two-dimensional shapes, but they have the disadvantage of limited symmetry: shapes represented by superellipses necessarily have two symmetry axes, i.e., four sections. As a result, many natural shapes and objects cannot be represented with them.
To overcome these inconveniences, Johan Gielis proposed the superformula [4], which generates the curves called Gielis curves. The superformula provides a greater degree of freedom, being able to represent natural organisms, objects and geometric figures using a small number of input parameters. In exchange for this greater degree of freedom, shape recovery becomes more difficult.
Gielis curves can be classified as self-intersecting and non-self-intersecting. Fougerolle et al. [11] and Gielis et al. [12] proposed methods for the recovery of Gielis curves from a set of sample points. Experiments on fitting Gielis curves with simulated annealing and particle swarm global optimization methods were carried out by Sudhanshu K. Mishra [13].
We propose a new objective function based on the method by Fougerolle et al. [11], focusing on non-self-intersecting Gielis curves without noise, translation or rotation, minimizing a function that combines the distance from the sample points to the curve, the distance from the curve to the sample points, and the curve's total length. In addition to visual evaluation, we propose a metric for numeric comparison of the obtained results.
The rest of the paper is organized as follows: Section 2 presents Gielis curves, which are computed with the superformula, a generalization of the superellipse [4]. Section 3 presents the background of the problem. Section 4 presents the proposed method. Experiments and results are presented in Section 5, and conclusions and future work in Section 6.

Gielis Curves
As mentioned before, Gielis curves were introduced to represent shapes with variable symmetry, such as those of natural organisms and objects. Gielis curves are a generalization of superellipses, which are defined by the formula

|x/a|^n + |y/b|^n = 1, (1)

where x is the axis of abscissas, y is the axis of ordinates, a ∈ R+ is the maximum value of x, b ∈ R+ is the maximum value of y and n ∈ R+ − {0} defines the shape of the curve. Standard ellipses are generated with n = 2. For n < 2, curves are inscribed in that ellipse, i.e., they fit inside it, and the curve is called a hypoellipse. For n > 2, curves are circumscribed about that ellipse, i.e., the ellipse fits inside the curve, and the curve is called a hyperellipse. In the limit case where n tends to infinity, a rectangle is obtained [14]. Figure 1 shows examples of superellipses, compared with the circle generated by Equation (1) using n = 2, a = b = 1: the first curve (a hyperellipse) circumscribes the circle, the second curve is the circle itself and the third curve (a hypoellipse) is inscribed in the circle; the values of a, b and n from Equation (1) are shown for each curve.

Specific rotational symmetries are introduced by working in polar coordinates ρ = f(θ), where ρ is the distance to the origin or pole and θ is the angle formed with the polar axis (equivalent to the x axis in the Cartesian system), and by adding the argument mθ/4 to the angle. In addition, exponents n1, n2 and n3 are introduced, replacing the exponent n used in Equation (1). This leads to the function in Equation (2), called the superformula [4]:

ρ(θ) = ( |cos(mθ/4)/a|^n2 + |sin(mθ/4)/b|^n3 )^(−1/n1), (2)

where m ∈ R+ − {0} is the value of rotational symmetry, a ∈ R+ is the maximum length along the polar axis, b ∈ R+ is the maximum length along the perpendicular to the polar axis, and n1, n2, n3 ∈ R+ − {0} determine the shape of the Gielis curve.
The argument m indicates the number of points of the curve that lie on the ellipse with semi-axes a and b centered at the origin of coordinates. As with superellipses, for n2 = n3 < 2 the Gielis curve is inscribed in that ellipse, and for n2 = n3 > 2 the Gielis curve is circumscribed about it. The original superellipses, given by Equation (1), can also be generated by Equation (2).
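The superformula of Equation (2) is straightforward to evaluate numerically. The following sketch (an illustration under the definitions above, not the authors' implementation; function names are ours) samples a Gielis curve at equal intervals of θ over [0, 2qπ]:

```python
import numpy as np

def superformula(theta, a, b, m, n1, n2, n3):
    """Radius rho(theta) of a Gielis curve, Equation (2)."""
    term = (np.abs(np.cos(m * theta / 4.0) / a) ** n2
            + np.abs(np.sin(m * theta / 4.0) / b) ** n3)
    return term ** (-1.0 / n1)

def sample_gielis(a, b, p, q, n1, n2, n3, num_points=1000):
    """Sample the curve at equal intervals of theta over [0, 2*q*pi],
    the interval after which the curve repeats for m = p/q."""
    m = p / q
    theta = np.linspace(0.0, 2.0 * np.pi * q, num_points)
    return theta, superformula(theta, a, b, m, n1, n2, n3)
```

With a = b = 1, m = 4 and n1 = n2 = n3 = 2, the formula reduces to the unit circle, a quick sanity check on the implementation.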
Complex shapes that look very different may share the property of being representable by the superformula. These shapes can be organisms, objects or geometric figures. Figure 2 shows examples of Gielis curves generated by Equation (2) that cannot be represented using Equation (1).
The parameter m can be viewed as p/q, where p ∈ N − {0} is the number of sectors of the Gielis curve (its symmetry) and q ∈ N − {0} is the maximum number of self-intersections of the Gielis curve, with p and q relatively prime [11]. The generated Gielis curve repeats every 2π for integer values of m and every 2qπ for rational values of m, i.e., only data generated by the function of Equation (2) within the interval [0, 2qπ] needs to be analyzed.
Unfortunately, methods for the recovery of superellipses are not directly applicable to Gielis curves [11]. For this reason, effective methods to recover Gielis curves are still being developed. This recovery can be viewed as an optimization problem in which, given the sample points, we minimize some objective functions under certain restrictions.
Optimization problems can be solved with deterministic and stochastic algorithms. Deterministic algorithms follow a rigorous process and their execution is reproducible, at the risk of falling into local minima when the problem structure is non-convex. To overcome this drawback, stochastic algorithms always include some degree of randomness [15]. Genetic algorithms are inspired by the evolution of populations and provide a natural, intrinsic way to explore the solution space [16,17]. Each individual in the population represents a problem solution. Selection and recombination are genetic operators applied to the current population to generate a new, promising population. To avoid premature stagnation, mutation operators inject new genetic information into the population. The development of genetic strategies that balance exploitation and exploration of the solution space has led to the successful application of modern genetic algorithms to several highly complex problems [18,19]. The next section presents the application of this type of algorithm to the search for solutions.

Background
In this section we present evolutionary algorithms and the method proposed in [11] for the recovery of Gielis curves from a set of points. This method is based on the algorithm proposed by Bokhabrine et al. [20].
Evolutionary algorithms are stochastic search methods inspired by biological evolution, and they have been applied successfully to several pattern recognition problems [21-23]. Each individual of the population represents a solution, and each individual has an associated fitness that measures its adaptation to the environment, i.e., the quality of the solution. Individuals are composed of a set of variables called genes. The operators applied in evolutionary algorithms are based on mechanisms of biological evolution, such as selection, mating or crossover, and mutation. The selection process determines which individuals will be used to create new generations, i.e., to perform crossover. The crossover operator is applied between the selected individuals, the parents; it produces descendant individuals obtained by combining the genes of the parents. The mutation operator produces random changes in the individuals, aiming to achieve greater exploration of the solution space and avoid local convergence, which is the main drawback of deterministic algorithms. The mutation rate must be chosen carefully, since a low value can cause the algorithm to converge to local minima, while a high value can cause it to behave like a random search [11].
Fougerolle et al. [11] proposed a method for the recovery of Gielis curves using a genetic algorithm, as follows:
• Each gene represents a parameter of the superformula, i.e., each gene is represented by a real number, except p and q, which are restricted to integer values.
• Crossover is performed as follows: given two parent solutions, two child solutions are obtained. For each gene, a random value α between 0 and 1 is drawn. If α ≤ 0.5, the first child takes the gene from the first parent (in blue in Figure 3) and the second child takes it from the second parent (in green); if α > 0.5, the assignment is reversed. Each child thus inherits each gene from one of the parents with probability 0.5. Since p and q must be relatively prime, when they are taken from different parents the fulfillment of this condition is verified; if it is not fulfilled, both genes are taken from the same parent so that the condition holds. Figure 3 illustrates the crossover method: Figure 3a shows an example in which crossover produces a child whose p and q are not relatively prime (4 and 2, respectively), and Figure 3b shows the correction applied so that the condition is fulfilled.
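The crossover step above can be sketched as follows (a minimal illustration assuming dictionary-based individuals; the gene names and helper structure are ours, not from [11]):

```python
import math
import random

GENES = ("a", "b", "p", "q", "n1", "n2", "n3")

def crossover(parent1, parent2, rng=random):
    """Uniform crossover: for each gene, draw alpha in [0, 1]; if alpha <= 0.5
    the first child takes the gene from the first parent, otherwise from the
    second (the other child takes the remaining gene)."""
    child1, child2 = {}, {}
    for g in GENES:
        if rng.random() <= 0.5:
            child1[g], child2[g] = parent1[g], parent2[g]
        else:
            child1[g], child2[g] = parent2[g], parent1[g]
    # p and q must be relatively prime; if not, take both from one parent
    for child in (child1, child2):
        if math.gcd(child["p"], child["q"]) != 1:
            donor = parent1 if rng.random() <= 0.5 else parent2
            child["p"], child["q"] = donor["p"], donor["q"]
    return child1, child2
```

The repair step assumes both parents already satisfy the coprimality condition, so copying p and q jointly from either parent always yields a valid child.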
• To perform mutation on the continuous variables (a, b, n1, n2, n3), a normal distribution is used:

g'_i = N(g_i, k · s_i), (3)

where N is the normal distribution, g_i is gene i, s_i is the length of the interval of the variable represented by gene g_i and k is a user-defined parameter, with suggested value 0.1. That is, mutation is performed with a normal distribution whose standard deviation is a tenth of the interval length of the variable.
• The discrete variables (p and q in this case) indicate the number of sectors of the Gielis curve and the maximum number of self-intersections, respectively. When performing mutation, small variations of these variables make little sense, since they produce big changes in the adaptation to the sample points. This can be observed in Figure 4, where both Gielis curves have the same parameters except p, yet they have completely different shapes. For this reason, it is suggested to simply generate random values, with the condition that p and q must be relatively prime.
• The selection process keeps the population size constant: the individuals with the best fitness values remain. To face local and premature convergence issues, Fougerolle et al. [11] suggested measuring the standard deviation of the population after each iteration; if it is less than a user-defined threshold σ_min, the population is re-initialized randomly. This is computed with normalized values, using the normalization defined in Equation (4):

v = (c − c_min) / (c_max − c_min), (4)

where c ∈ F is the value of the variable to be normalized, F = {a, b, p, q, n1, n2, n3} is the set of parameters of the superformula, c_min and c_max are the lower and upper limits of the interval of the variable, and v is the normalized value.
• If the standard deviation of all variables is less than the defined threshold, the population is re-initialized randomly.
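The Gaussian mutation of Equation (3) and the normalization of Equation (4) can be sketched as follows (an illustration; the interval values are those used later in the experiments, the clamping to the interval and the variable names are our assumptions):

```python
import random

# Parameter intervals (as used in the experiments of this paper)
INTERVALS = {"a": (0.01, 100.0), "b": (0.01, 100.0),
             "n1": (0.01, 1000.0), "n2": (0.01, 1000.0), "n3": (0.01, 1000.0)}

def mutate_continuous(individual, k=0.1, mutation_rate=0.05, rng=random):
    """Equation (3): replace gene g_i by a draw from N(g_i, k * s_i), where
    s_i is the interval length of the variable; the result is clamped so it
    stays within the variable's interval (our assumption)."""
    mutated = dict(individual)
    for gene, (lo, hi) in INTERVALS.items():
        if rng.random() < mutation_rate:
            s = hi - lo
            mutated[gene] = min(hi, max(lo, rng.gauss(individual[gene], k * s)))
    return mutated

def normalize(c, c_min, c_max):
    """Equation (4): map a parameter value to [0, 1] so that standard
    deviations of different variables become comparable."""
    return (c - c_min) / (c_max - c_min)
```

With k = 0.1, each mutation perturbs a gene by a normal step whose standard deviation is a tenth of that gene's interval, as suggested in [11].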
The behavior of the population is as follows: as the generation number increases, the average error of the population decreases; although there can be small perturbations due to the mutation operator, the crossover operator is dominant. Once the population converges (the standard deviation falls below the defined threshold), the best solution of the population is stored if it is better than the current "global best", and a random re-initialization of the population is performed. That is, the best solution so far is always stored, independently of the re-initializations performed. This procedure explores the solution space without losing the best solution obtained so far.
Tests were performed to determine the impact of the mutation and crossover operators using two strategies. The first strategy was to modify the mutation parameter k, the mutation rate and the standard-deviation threshold for population re-initialization, and to increase the number of iterations for greater exploration; with this strategy, the standard deviation of the population remains high and the algorithm does not converge. The second strategy consists in increasing the population size by 10, 50 and 100 times. The results are similar, so the conclusion is that re-initializing a small population is more effective than searching around local minima or handling large populations [11].
To determine the closeness of a point to a given curve, Fougerolle et al. [11] suggested using the Shortest Euclidean Distance (SED) between the sample points and the curve. Figure 5 shows an example of the SED. To compute the SED, sample points of the curve are obtained; then, for each sample point, the point of the curve at minimum distance is selected. This is given by Equation (5):

SED(Q, F) = min_{i ∈ {1, ..., r}} d(Q, R_i), (5)

where Q = (ρ, θ) is a sample point in polar coordinates (ρ being the distance to the origin and θ the angle formed with the polar axis), F = [a, b, p, q, n1, n2, n3] is the set of parameters of the superformula, R_i = (t, ω) is each of the sample points of the curve generated with F, in polar coordinates (t being the distance to the origin and ω the angle formed with the polar axis), r is the number of sample points of the Gielis curve generated with F, and d is the Euclidean distance between two points. Using the SED, an objective function to be minimized is defined in Equation (6):

E1(F) = Σ_{j=1}^{e} SED(P_j, F)², (6)

where P_j is each of the sample points and e is the number of sample points. The function given by Equation (6) is not good enough to obtain satisfactory results [11]. An example can be seen in Figure 6, where the generated Gielis curve has segments that approach the original curve, but also segments that self-intersect and do not correspond to the original curve.
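A minimal sketch of the SED of Equation (5), assuming the generated curve is represented by a dense array of polar samples (function and variable names are ours):

```python
import numpy as np

def sed(sample_point, curve_points):
    """Equation (5): shortest Euclidean distance from one sample point
    Q = (rho, theta) to a curve discretized as an array of (t, omega) rows,
    all in polar coordinates."""
    rho, theta = sample_point
    qx, qy = rho * np.cos(theta), rho * np.sin(theta)
    t, w = curve_points[:, 0], curve_points[:, 1]
    return float(np.min(np.hypot(t * np.cos(w) - qx, t * np.sin(w) - qy)))
```

The objective of Equation (6) then sums the squared SED over all sample points; the minimum over a dense discretization makes this the costly step that the method of Section 4 tries to avoid.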
Fougerolle et al. [11] suggested also minimizing the length of the generated Gielis curve, given by Equation (7):

L(F) = ∫_0^{2qπ} sqrt( ρ(θ)² + ρ'(θ)² ) dθ, (7)

where [0, 2qπ] is the interval of ρ(θ), ρ(θ) is the Gielis curve and ρ'(θ) is its derivative with respect to θ. In this work, the approximate length of the Gielis curve is calculated easily by dividing the Gielis curve into small sections and computing the distance between the start and end points of each section. The calculation uses polar coordinates and sums the computed distances to obtain the total length according to Equation (8):

L(F) ≈ Σ_{i=1}^{o−1} d( (ρ_i, θ_i), (ρ_{i+1}, θ_{i+1}) ), (8)

where (ρ_i, θ_i) and (ρ_{i+1}, θ_{i+1}) are consecutive computed points and o is the number of points used for computing the distances. Then, the function of Equation (8) is combined with the function of Equation (6) using an R-function. An R-function, proposed by V.L. Rvachev [24], is a real function distinguished because one of its properties is fully determined by that same property of its arguments; for example, its sign may be fully determined by the signs of its arguments [25,26]. Table 1 shows the possible signs of the function f(x, y) = x + y + sqrt(x² + y²) as a function of the signs of its arguments. This function is the analytical equivalent of the boolean union, considering negative values as false and positive values as true [25]. On the contrary, f1(x, y) = x + y is not an R-function, since its sign depends on the magnitudes of its arguments and not just on their signs.
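The chord-sum approximation of Equation (8) and the R-function union can be sketched as follows (a simple illustration; the helper names are ours):

```python
import numpy as np

def approx_length(rho, theta):
    """Equation (8): approximate curve length as the sum of distances
    between consecutive sampled points (rho_i, theta_i) of the curve."""
    x, y = rho * np.cos(theta), rho * np.sin(theta)
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))

def r_union(x, y):
    """R-function f(x, y) = x + y + sqrt(x^2 + y^2): its sign is the
    boolean union of the signs of x and y (negative = false)."""
    return x + y + (x * x + y * y) ** 0.5
```

For instance, r_union is positive whenever at least one argument is positive, matching Table 1, and approx_length of a densely sampled unit circle approaches 2π.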
Once Equations (6) and (8) are combined using the mentioned R-function, the objective function of Equation (9) is obtained [11]:

F1(F) = E(F) + L(F) + sqrt( E(F)² + L(F)² ), (9)

where E is the SED-based function of Equation (6) and L is the curve length of Equations (7) and (8). Even with this objective function there are conflictive cases, as can be observed in Figure 7, in which the original curve is not self-intersecting but the recovered Gielis curve is. Aiming to overcome these drawbacks, this work proposes a variation of the objective function given by Equation (9), explained in the next section. We also propose a numerical evaluation for comparing different methods, which complements the visual comparison and is explained in Section 5.

Proposal
This work aims to overcome the limitations of the objective function in Equation (9) introduced in the previous section.
The SED seeks points of the generated Gielis curve that are close to the sample points. We therefore propose a similar metric with the same goal but lower computational cost. It consists in computing the Euclidean distance d(P_j, S_j) between the sample point P_j = (ρ_j, θ_j) and the point S_j = (τ_j, θ_j) of the generated Gielis curve at the same angle θ_j, instead of computing the SED. The closeness of the point to the original curve is still evaluated, but the iterative computation of the SED is avoided, since the suggested distance is computed directly. Figure 8 shows the difference between the SED and d(P_j, S_j): the SED can point in any direction, while d(P_j, S_j) always lies along the line through P_j and the pole. Replacing the SED by this distance in Equation (6) yields

E2(F) = Σ_{j=1}^{e} d(P_j, S_j)², (10)

where P is the set of sample points, S is the set of sample points of the generated Gielis curve, e is the number of sample points and d is the Euclidean distance.

The length of the Gielis curve is not related to the sample points, as observed in Equations (7) and (8), whose input parameters are solely the superformula parameters. Since we need a metric that penalizes the segments of the generated Gielis curve that are far from the sample points, we suggest a new metric instead of the length. It is computed as follows:
• Obtain a finite number of points of the generated Gielis curve, at equal intervals of θ, by sampling. An example is shown in Figure 9, with the points in red.
• For each point R_i = (r_i, ω_i), find the closest sample point and compute its Euclidean distance. This distance is shown in green in Figure 9.
The new distance, called d2, is given by Equation (11):

d2(R_i, P) = min_{P_j ∈ P} d(R_i, P_j), (11)

where R_i = (r_i, ω_i) is a sample point of the generated Gielis curve, P is the set of sample points and d is the Euclidean distance between two points. Figure 9 shows an example of d2; we observe that d2 penalizes points of the generated Gielis curve that are distant from the sample points. Figure 9. Distance d2 given by Equation (11) is shown in green. The sample points are in blue and the points corresponding to the generated curve are in red.
The function of Equation (12) is built from the sum of squares of d2:

E3(F) = Σ_{i=1}^{r} d2(R_i, P)², (12)

where R is the set of sample points of the generated Gielis curve and r is the cardinality of R.
To combine the functions of Equations (10) and (12), we use the same R-function as in Equation (9), obtaining the following objective function to be minimized:

F2(F) = E2(F) + E3(F) + sqrt( E2(F)² + E3(F)² ), (13)

where E2 and E3 are the functions of Equations (10) and (12), and S and R are sets of sample points of the Gielis curve generated with F. Figure 10 shows a case in which this objective function gives a better result than F1: the method that uses F1 generates a Gielis curve that self-intersects near the polar axis, while the curve obtained with F2 does not self-intersect and therefore adjusts better to the original curve.
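Putting Equations (10)-(13) together, F2 can be sketched as follows (an illustrative implementation, not the authors' code; for simplicity the generated curve is evaluated at the sample angles, and the candidate curve is passed as a function of θ):

```python
import numpy as np

def f2(sample_rho, sample_theta, rho_fn):
    """Sketch of F2 (Equation (13)): R-function union of
    E2 = sum of squared direct distances d(P_j, S_j) (Equation (10)) and
    E3 = sum of squared nearest-sample distances d2 (Equation (12))."""
    gen_rho = rho_fn(sample_theta)
    # E2: P_j and S_j share the angle theta_j, so d(P_j, S_j) = |rho_j - tau_j|
    e2 = float(np.sum((sample_rho - gen_rho) ** 2))
    # E3: distance from each generated point to its nearest sample point
    px, py = sample_rho * np.cos(sample_theta), sample_rho * np.sin(sample_theta)
    gx, gy = gen_rho * np.cos(sample_theta), gen_rho * np.sin(sample_theta)
    d2 = np.min(np.hypot(gx[:, None] - px[None, :], gy[:, None] - py[None, :]),
                axis=1)
    e3 = float(np.sum(d2 ** 2))
    return e2 + e3 + np.hypot(e2, e3)  # R-union of E2 and E3
```

A candidate that reproduces the sample points exactly drives both terms, and hence F2, to zero.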
After the tests performed using Equation (13), drawbacks of this method are also observed. If the length is not minimized, there are cases in which the solution spins around the original curve. Figure 11 shows examples of this drawback: the segments of the generated Gielis curves are not distant from the original curves, but spin around them.
The tests shown in Figure 11 demonstrate the importance of minimizing the length of the Gielis curve, in order to avoid the repetition of segments in the obtained Gielis curves. We therefore include the length of the Gielis curve in the function given by Equation (13), obtaining the following objective function:

F3(F) = F2(F) + L(F) + sqrt( F2(F)² + L(F)² ), (14)

i.e., the R-function combination of the fitting function F2 of Equation (13) and the approximate length L of Equation (8).
The function of Equation (14) overcomes the drawbacks mentioned for the previous methods and represents the main contribution of this work. Figure 12 shows the recovery of the same Gielis curves using the objective function F3, which avoids the repetition of segments in the obtained Gielis curves. In the next section we describe the tests performed with the proposed objective functions and the results obtained. Figure 10. At the left is the result of minimizing function F1 given by Equation (9) and at the right is the result of minimizing function F2 given by Equation (13). Figure 11. Examples of conflicting cases using F2, given by Equation (13). In blue are the original curves and in orange are the solution curves. Figure 12. The same curves as in Figure 11 but recovered using F3 given by Equation (14). In blue are the original curves and in orange are the solution curves.
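As a sketch of the full proposal, F3 (Equation (14)) can be assembled from the pieces above. We assume here that F3 combines F2's fitting term with the approximate length through the same R-function; the implementation details (sampling density, function names) are ours:

```python
import numpy as np

def rho_superformula(theta, a, b, m, n1, n2, n3):
    """Gielis superformula, Equation (2)."""
    return (np.abs(np.cos(m * theta / 4.0) / a) ** n2
            + np.abs(np.sin(m * theta / 4.0) / b) ** n3) ** (-1.0 / n1)

def f3(params, sample_rho, sample_theta, num_curve_points=2000):
    """Sketch of F3 (Equation (14)): fitting term of F2 combined with the
    approximate curve length via the R-function x + y + sqrt(x^2 + y^2)."""
    a, b, p, q, n1, n2, n3 = params
    m = p / q
    # E2: direct radial distances at the sample angles (Equation (10))
    gen_rho = rho_superformula(sample_theta, a, b, m, n1, n2, n3)
    e2 = float(np.sum((sample_rho - gen_rho) ** 2))
    # E3: generated points over [0, 2*q*pi] vs nearest sample point (Eq. (12))
    t = np.linspace(0.0, 2.0 * np.pi * q, num_curve_points)
    r = rho_superformula(t, a, b, m, n1, n2, n3)
    gx, gy = r * np.cos(t), r * np.sin(t)
    px, py = sample_rho * np.cos(sample_theta), sample_rho * np.sin(sample_theta)
    d2 = np.min(np.hypot(gx[:, None] - px[None, :], gy[:, None] - py[None, :]),
                axis=1)
    e3 = float(np.sum(d2 ** 2))
    # approximate length of the candidate curve (Equation (8))
    length = float(np.sum(np.hypot(np.diff(gx), np.diff(gy))))
    fit = e2 + e3 + np.hypot(e2, e3)             # R-union of E2 and E3 (Eq. (13))
    return fit + length + np.hypot(fit, length)  # R-union with the length
```

For a perfect candidate, the fitting term vanishes and F3 reduces to twice the curve length, so shorter curves are still preferred among equally good fits, which is what suppresses the spinning solutions of Figure 11.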

Experimental Tests
In this section we present the metrics used to compare the aforementioned methods and the results obtained, in order to determine which method provides the best adaptation to the sample points. As input data, we used points obtained by sampling Gielis curves with known parameters at equal intervals of θ. For each θ value, the corresponding ρ is computed, obtaining the desired number of sample points. Next we describe the metrics used, the algorithm parameters and the results obtained.

Evaluation Metrics
Once the sample points to be used as input data are obtained, the algorithm is executed with each method, five times per method. We take the best solution among the executions, so three Gielis curves are obtained, one per method.
To evaluate and compare the quality of the solutions, an appropriate metric for the degree of similarity between two curves is needed. This metric is calculated between the original curve and each of the recovered Gielis curves, allowing us to compare the quality of the solutions obtained with each method. The metric is applied as follows:
• Since the tests were performed using Gielis curves with known parameters, an amount h of sample points on the original curve is obtained, at equal intervals of θ. These points are obtained by dividing the interval [0, 2qπ] into h equal parts. Figure 13a shows an example where h = 4; since q = 1, the values of θ are {π/2, π, 3π/2, 2π}. The original curve is in blue and the obtained points are in black.
• For each solution (one per method), an amount h of sample points is obtained, using the same values of θ as for the original curve. Figure 13b shows an example, corresponding to the Gielis curve obtained using objective function F3 of Equation (14); the Gielis curve is in red and the sample points are in black.
• Considering an h-dimensional space, finite representations of the original curve and of the solutions obtained with each method are built from their polar distances (ρ_1, ρ_2, ..., ρ_h). We denote by A_0 the original curve, by A_1 the Gielis curve from the method proposed by Fougerolle et al., by A_2 the curve from the method that uses objective function F2 of Equation (13) and by A_3 the curve from the method that uses our contribution, the objective function F3 of Equation (14). In the example of Figure 13, A_0 = {ρ_1, ρ_2, ρ_3, ρ_4} and A_3 = {ρ'_1, ρ'_2, ρ'_3, ρ'_4}.
• Finally, the Euclidean distance between A_0 and the h-dimensional point corresponding to each solution is calculated. The distance between A_0 and A_1 is called M1; between A_0 and A_2, M2; and between A_0 and A_3, M3.
The lower the distance, the better the quality of the solution. Continuing with the example of Figure 13, Figure 14 shows M3, which is equal to the square root of the sum of the squared lengths of the segments in green.
The generalization of the metric to h dimensions is given by Equation (15):

M(A_0, A) = sqrt( Σ_{i=1}^{h} (ρ'_i − ρ_i)² ), (15)

where A_0 is the set of polar distances of the original curve, A is the set of polar distances of the generated Gielis curve, h is the number of sample points used to calculate the metric, ρ'_i is each of the polar distances of the generated Gielis curve and ρ_i is each of the polar distances of the original curve. Although it compares the similarity between the original curve and the recovered Gielis curves, this metric does not penalize the cases in which the Gielis curve spins around the original curve. This is because the computation is performed in the interval corresponding to the original curve, [0, 2q_orig π], where q_orig is the value of q of the original curve. Thus, if q_sol, the parameter q of the recovered Gielis curve, is bigger than q_orig, segments lying in the interval [2q_orig π, 2q_sol π] are not penalized. Figure 15a shows in blue the original curve and in red the Gielis curve recovered using Equation (13); Figure 15b shows in blue the same original curve and in red the Gielis curve recovered using Equation (14). Visually, the second one is a better solution. On the contrary, according to the metric, the Gielis curve recovered using Equation (13) is the better solution, since M2 is less than M3 (M2 = 34.252844 and M3 = 50.560416). This is because the Gielis curve generated in Figure 15a has a value q_sol of 5, while the original curve has a value q_orig of 1; hence M2 only takes 1/5 of the outline of the Gielis curve generated using Equation (13) into account. In the case of the Gielis curve generated in Figure 15b, q_sol has the same value as q_orig, so M3 takes the whole solution curve into account.
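The metric of Equation (15) is a plain Euclidean distance between the vectors of polar distances; a minimal sketch (function name is ours):

```python
import numpy as np

def metric_m(rho_original, rho_solution):
    """Equation (15): Euclidean distance between the h-dimensional vectors
    of polar distances sampled at the same angles."""
    a0 = np.asarray(rho_original, dtype=float)
    a = np.asarray(rho_solution, dtype=float)
    if a0.shape != a.shape:
        raise ValueError("both curves must be sampled at the same h angles")
    return float(np.sqrt(np.sum((a - a0) ** 2)))
```

Identical samplings yield a distance of zero; the metric grows with any radial deviation at the shared angles, but, as noted above, it is blind to segments outside [0, 2q_orig π].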
For this reason, in addition to the numeric metric presented, we still perform a visual comparison to determine the quality of the solutions. The intervals of the parameters of the superformula are a, b ∈ [0.01, 100]; n1, n2, n3 ∈ [0.01, 1000]; p, q ∈ [1, 11].

Parameters Used
We observed the behavior of the population using F3 with these same parameters. Figure 16 shows this behavior: the global best is in red and the best elements of each generation are in blue. We observe peaks in blue due to the re-initialization of the population, and we also observe in red how re-initialization favors an improvement in the fitness of the global best. After observing the behavior of the population of solutions, we decided to keep the same parameters.
For the performed tests, the number e of sample points is 1000. Since this work also proposes a metric for curve comparison, the number r of points used for computing the norms is 500,000, in order to keep the distance between points small enough not to skip segments of the curve. Figure 16. Example of the behavior of the population using F3. The global best is in red and the best elements from each generation are in blue.

Results
The Gielis curves in Figure 2 are considered for these experiments. Table A1 (see Appendix A) shows these Gielis curves and the results obtained with each method. The first column shows the row number; the second shows the parameters of the original Gielis curve from which the sample points are obtained; and the third, fourth and fifth show the parameters recovered using F1, F2 and F3, respectively. The parameters that offer the best solution according to the proposed metric are shown in bold. Table A2 (see Appendix A) compares the three methods; the smallest metric values are shown in bold for each Gielis curve. The metric M3, corresponding to the objective F3, offers better solutions in 60% of the cases, according to the applied metric. Figure 17 shows the percentage of best solutions provided by each method, according to the applied metric. The last column of Table A2 (see Appendix A) shows which objective function(s) provide(s) the best solution(s) according to visual comparison. We observe that in 60% of the cases the solutions provided by F2 and F3 are of similar quality, better than F1, and in 22.9% of the cases F3 is better than F2 and F1. These latter cases are those in which the solutions spin around the original curves, i.e., q_sol is bigger than q_orig. This gives a total of 82.9% of the cases in which the solution provided by F3 is visually satisfactory. Figure 18 shows some representative examples. Each row corresponds to an original curve. The first column shows the solution obtained using the method by Fougerolle et al., the second column shows the solution obtained with F2, and the third column shows the solution obtained with F3. Rows 1, 3 and 4 show examples in which the solutions provided by F2 and F3 are similar, and row 2 shows an example in which the solution provided by F2 spins around the original curve while the solution provided by F3 is better.
It should also be noted that the solutions provided by F3 fail to capture the corners of the shapes. This is because the length of the generated curve is included in the objective function. Figure 19 shows examples of exceptional cases. The first row corresponds to Gielis curve 33 from Table A2 (see Appendix A): a case in which neither F2 nor F3 offers a solution of good visual quality, while F1 provides a good general adjustment but has intersections that do not adjust to the original curve; despite the metric values, the better solution is provided by F2. The second row corresponds to Gielis curve 16 from Table A2: a case in which all the tested methods provide solutions of good visual quality; according to the metric, F3 provides the best solution. The third row corresponds to Gielis curve 20 from Table A2: a case in which none of the methods offers a visually satisfying solution, although F1 appears visually as the best solution, which the metric confirms. The fourth row corresponds to Gielis curve 22 from Table A2: a case similar to the previous one, where F1 appears visually better, but according to the metric F3 offers the best solution.

Conclusions and Future Work
New objective functions for the recovery of Gielis curves from a set of sample points are proposed, based on a genetic algorithm from the literature that minimizes an objective function combining the Euclidean distances from the sample points to the curve and the length of the recovered Gielis curve. The first proposed method minimizes an objective function that combines the Euclidean distances from the sample points to the curve and from the curve to the sample points. The second proposed method minimizes an objective function that, in addition to the mentioned distances, includes the length of the recovered Gielis curve.
A visual comparison of the solutions obtained by these methods was performed. In 60% of the cases, the solutions provided by the two proposed methods have similar quality, both better than the ones found in the literature. In 22.9% of the cases, the second proposed method offers better solutions than both the first proposed method and the literature. This totals 82.9% of the cases in which the second proposed method offers better solutions than the literature.
For an objective comparison, a numeric metric of similarity between two curves has been proposed. It consists of sampling points from both curves, building vectors with them and computing the Euclidean distance between these vectors; a lower distance implies greater similarity between the curves. The comparison between the solutions of each method and the original curve is performed using this metric. According to the metric, the method proposed in the literature offers the best solution in only 2.9% of the cases, the first proposed method in 37.1% of the cases and the second proposed method in 60% of the cases. Although the new objective function proposed in this paper cannot faithfully represent simple curves such as squares or pentagons, it was shown to be more effective on more complex curves than the method of Fougerolle et al. [11].
As future work, we propose searching for other objective functions that perform well using partial data, i.e., not using all the sample points as input, and/or noisy data, i.e., with small displacements of the sample points, as done in [12]. We also propose including rotation and translation parameters of the Gielis curves with respect to the origin of coordinates, in order to analyze the behavior of the objective functions with a greater number of parameters to recover. The proposed objective function could also be useful for other metaheuristic algorithms, such as simulated annealing and particle swarm optimization. In addition, the proposed objective function could be complemented to improve corner fitting.