# Neural-Network-Based Curve Fitting Using Totally Positive Rational Bases


## Abstract


## 1. Introduction

## 2. Shape-Preserving and Rational Bases

**Proposition 1.**

## 3. Curve Fitting with Neural Networks

Algorithm 1: The AdaMax algorithm [13] adapted to our context.
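Since the training in Section 3 relies on this optimizer, a minimal sketch of the AdaMax update of [13] may help. The function below follows Algorithm 2 of that paper, with the hyperparameter defaults reported in Table 1; the `adamax_step` name and the quadratic toy objective are illustrative, not the paper's implementation.

```python
import numpy as np

def adamax_step(theta, grad, m, u, t, alpha=0.0001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One AdaMax update (Algorithm 2 in [13]).

    m: exponential moving average of the gradient (first moment).
    u: exponentially weighted infinity norm of past gradients.
    t: 1-based iteration counter, used for the bias correction of m.
    """
    m = beta1 * m + (1 - beta1) * grad
    u = np.maximum(beta2 * u, np.abs(grad) + eps)  # eps guards the division below
    theta = theta - (alpha / (1 - beta1 ** t)) * m / u
    return theta, m, u

# Illustrative use: minimise f(x) = ||x - c||^2, whose gradient is 2 (x - c).
c = np.array([1.0, -2.0])
x = np.zeros(2)
m, u = np.zeros(2), np.zeros(2)
for t in range(1, 5001):
    x, m, u = adamax_step(x, 2 * (x - c), m, u, t, alpha=0.01)
```

In the paper's setting, `theta` collects the trainable parameters (the weights and control points of Figure 3) and `grad` is the gradient of the mean absolute error with respect to them.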

## 4. Experimental Results

**Remark 1.**

### 4.1. Circle Curve

### 4.2. Cycloid Curve

### 4.3. Archimedean Spiral Curve

### 4.4. Comparison of Least-Squares Fitting and the Neural Network ${\mathcal{N}}_{w,P}$

The least-squares fitting curves were computed, firstly, by using the Matlab command `SVD` and, secondly, by using the Matlab command `mldivide`.
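The two least-squares variants compared in this subsection can be sketched in NumPy; `np.linalg.lstsq` stands in for Matlab's `mldivide` (which uses a QR factorisation for overdetermined systems), and the second routine solves through the SVD pseudoinverse. The collocation matrix `A` and data vector `b` are placeholders for the quantities built from the basis functions and the sampled points.

```python
import numpy as np

def ls_mldivide(A, b):
    """Least-squares solve, playing the role of Matlab's A \\ b."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

def ls_svd(A, b):
    """Least-squares solve through the SVD pseudoinverse, x = A^+ b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (U.T @ b / s)

# Placeholder system: 100 data points, 6 basis functions (full column rank).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))
b = rng.standard_normal(100)
x1, x2 = ls_mldivide(A, b), ls_svd(A, b)
```

For a full-column-rank system both routines return the same minimiser; they differ in cost and in numerical behaviour on ill-conditioned collocation matrices, which is what the comparison in this subsection examines.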

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Iglesias, A.; Gálvez, A.; Avila, A. Discrete Bézier Curve Fitting with Artificial Immune Systems. In Intelligent Computer Graphics 2012; Springer: Berlin/Heidelberg, Germany, 2013; pp. 59–75.
2. Hoffmann, M.; Várady, L. Free-form Surfaces for scattered data by neural networks. J. Geom. Graph. 1998, 2, 1–6.
3. Delgado, J.; Peña, J.M. A Comparison of Different Progressive Iteration Approximation Methods. In Mathematical Methods for Curves and Surfaces; Dæhlen, M., Floater, M., Lyche, T., Merrien, J.L., Mørken, K., Schumaker, L.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 136–152.
4. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219.
5. Peña, J. Shape preserving representations for trigonometric polynomial curves. Comput. Aided Geom. Des. 1997, 14, 5–11.
6. Carnicer, J.; Peña, J. Totally positive bases for shape preserving curve design and optimality of B-splines. Comput. Aided Geom. Des. 1994, 11, 633–654.
7. Carnicer, J.; Peña, J. Shape preserving representations and optimality of the Bernstein basis. Adv. Comput. Math. 1993, 1, 173–196.
8. Mainar, E.; Peña, J.; Rubio, B. Evaluation and subdivision algorithms for general classes of totally positive rational bases. Comput. Aided Geom. Des. 2020, 81, 101900.
9. Farin, G. Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, 5th ed.; Academic Press Professional, Inc.: Cambridge, MA, USA, 2002.
10. Iglesias, A.; Gálvez, A.; Collantes, M. Global-support rational curve method for data approximation with bat algorithm. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Bayonne, France, 14–17 September 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 191–205.
11. Mehnen, J.; Weinert, K. Discrete NURBS-Surface Approximation using an Evolutionary Strategy. Reihe Comput. Intell. 2001, 87.
12. Van To, T.; Kositviwat, T. Using Rational B-Spline Neural Networks for Curve Approximation. In Proceedings of the 7th WSEAS International Conference on Mathematical Methods and Computational Techniques in Electrical Engineering (MMACTE'05); World Scientific and Engineering Academy and Society (WSEAS): Stevens Point, WI, USA, 7–9 November 2009; pp. 42–50.
13. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
14. Manni, C.; Pelosi, F.; Lucia Sampoli, M. Generalized B-splines as a tool in isogeometric analysis. Comput. Methods Appl. Mech. Eng. 2011, 200, 867–881.
15. Sánchez-Reyes, J. Harmonic rational Bézier curves, p-Bézier curves and trigonometric polynomials. Comput. Aided Geom. Des. 1998, 15, 909–923.
16. Mohri, M.; Rostamizadeh, A.; Talwalkar, A. Foundations of Machine Learning; The MIT Press: Cambridge, MA, USA, 2012.
17. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv 2015, arXiv:1603.04467.
18. Carnicer, J.; Mainar, E.; Peña, J. Representing circles with five control points. Comput. Aided Geom. Des. 2003, 20, 501–511.
19. Carnicer, J.; Mainar, E.; Peña, J. A totally positive basis for circle approximations. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2019, 113.
20. Díaz Pérez, L.; Rubio Serrano, B.; Albajez García, J.; Yague Fabra, J.; Mainar Maza, E.; Torralba Gracia, M. Trajectory Definition with High Relative Accuracy (HRA) by Parametric Representation of Curves in Nano-Positioning Systems. Micromachines 2019, 10, 597.
21. Marco, A.; Martínez, J.J. Polynomial least squares fitting in the Bernstein basis. Linear Algebra Appl. 2010, 433, 1254–1264.
22. Mainar, E.; Peña, J.; Rubio, B. Accurate least squares fitting with a general class of shape preserving bases. In Proceedings of the Fifteenth International Conference Zaragoza-Pau on Mathematics and its Applications, Jaca, Spain, 10–12 September 2019; pp. 183–192.
23. Volkov, V.V.; Erokhin, V.I.; Kakaev, V.V.; Onufrei, A.Y. Generalizations of Tikhonov’s regularized method of least squares to non-Euclidean vector norms. Comput. Math. Math. Phys. 2017, 57, 1416–1426.
24. Rao, S. Regularized Least Square: Tikhonov Regularization Test for Hilbert Matrix. MATLAB Central File Exchange, 2020. Available online: https://es.mathworks.com/matlabcentral/fileexchange/58736-regularized-least-square-tikhonov-regularization-test-for-hilbert-matrix (accessed on 25 November 2020).
25. Fitter, H.N.; Pandey, A.B.; Patel, D.D.; Mistry, J.M. A Review on Approaches for Handling Bezier Curves in CAD for Manufacturing. Procedia Eng. 2014, 97, 1155–1166.
26. Calin, I.; Öchsner, A.; Vlase, S.; Marin, M. Improved rigidity of composite circular plates through radial ribs. Proc. Inst. Mech. Eng. Part L J. Mater. Des. Appl. 2018, 233.
27. Groza, G.; Pop, N. Approximate solution of multipoint boundary value problems for linear differential equations by polynomial functions. J. Differ. Equations Appl. 2008, 14, 1289–1309.

**Figure 1.** Initial curve (solid line) and the curve obtained (dotted line) after changing the weights and/or control points. Left: changing the fourth weight; center: changing the fourth control point; and right: changing the fourth weight and the fourth control point.

**Figure 2.** (**Left**): Rational basis (2) using $f\left(t\right)=t$, $g\left(t\right)=1-t$, $t\in [0,1]$. Weights $w=[1,2,3,2]$ (black line) and weights $w=[1,2,3,8]$ (blue dotted line). (**Right**): Rational basis (2) using $f\left(t\right)=\sin\left((\Delta +t)/2\right)$, $g\left(t\right)=\sin\left((\Delta -t)/2\right)$, $t\in I=[-\Delta ,\Delta ]$, $0<\Delta <\pi /2$. Weights $w=[1,2,3,2]$ (black line) and weights $w=[1,8,3,2]$ (blue dotted line).
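To make the caption concrete, here is a sketch of how such a rational basis can be evaluated. The exact form of basis (2) is not reproduced in this excerpt, so the rational Bernstein-like formula below, $u_i(t) = w_i \binom{n}{i} f(t)^i g(t)^{n-i} / \sum_{j=0}^{n} w_j \binom{n}{j} f(t)^j g(t)^{n-j}$, is an assumption; it reduces to the rational Bernstein basis for $f(t)=t$, $g(t)=1-t$.

```python
import numpy as np
from math import comb

def rational_basis(ts, w, f, g):
    """Evaluate the n+1 rational basis functions at the parameters ts.

    Assumed form: u_i(t) proportional to w_i C(n,i) f(t)^i g(t)^(n-i),
    normalised so that the functions sum to one (a partition of unity
    whenever the weights w are positive).
    """
    n = len(w) - 1
    F, G = f(ts), g(ts)
    B = np.stack([comb(n, i) * F**i * G**(n - i) for i in range(n + 1)], axis=-1)
    R = B * np.asarray(w, dtype=float)
    return R / R.sum(axis=-1, keepdims=True)

# Left panel of Figure 2: polynomial case with weights w = [1, 2, 3, 2].
ts = np.linspace(0.0, 1.0, 101)
U = rational_basis(ts, [1, 2, 3, 2], lambda t: t, lambda t: 1 - t)
```

Raising the last weight from 2 to 8, as in the figure, pulls the last basis function up at the expense of the others while the functions still sum to one.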

**Figure 3.** From top to bottom: the input layer has the parameter $t\in \mathbb{R}$ as input; the hidden layer has width $n+1$ and its parameters are the weights; the output layer computes the approximation of the target curve and its parameters are the control points.
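The architecture in Figure 3 can be sketched as a single forward pass: the hidden layer evaluates the $n+1$ rational basis functions (parametrised by the weights $w$) and the output layer forms their linear combination with the control points $P$. The rational Bernstein-like formula used here is an assumption consistent with Figure 2, not the paper's exact expression (2).

```python
import numpy as np
from math import comb

def curve_network(ts, w, P, f=lambda t: t, g=lambda t: 1 - t):
    """Forward pass of the network N_{w,P} of Figure 3.

    Hidden layer: rational basis values (trainable parameters: the weights w).
    Output layer: linear combination with the control points P, shape (n+1, d).
    """
    n = len(w) - 1
    F, G = f(ts), g(ts)
    B = np.stack([comb(n, i) * F**i * G**(n - i) for i in range(n + 1)], axis=-1)
    R = B * np.asarray(w, dtype=float)
    U = R / R.sum(axis=-1, keepdims=True)   # hidden-layer activations
    return U @ np.asarray(P, dtype=float)   # one curve point per parameter t

# Illustrative planar control polygon (degree 3, d = 2) with unit weights.
ts = np.linspace(0.0, 1.0, 50)
P = [[0, 0], [1, 2], [3, 2], [4, 0]]
curve = curve_network(ts, [1, 1, 1, 1], P)
```

Training then amounts to adjusting $w$ and $P$ so that the forward pass matches the sampled data points, e.g. with the AdaMax iteration of Algorithm 1.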

**Figure 4.** Evolution of the fitting curve ${\mathcal{N}}_{w,P}\left(t\right)$. Set of data points from the target curve (dotted) and the fitting curve (line). From top to bottom and left to right: increments $d=0$, $d=250$, $d=500$, $d=1000$, $d=1500$, $d=2000$ and $d=3000$.

**Figure 5.** Fitting curve of degree 5 obtained using the functions $f\left(t\right)=t$ and $g\left(t\right)=1-t$, $t\in [0,1]$, and its corresponding control polygon.

**Figure 6.** (**Left**): The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the radial error value of the fitting curve obtained using the functions $f\left(t\right)=t$ and $g\left(t\right)=1-t$, $t\in [0,1]$. (**Right**): The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the curvature error value of the fitting curve obtained using the functions $f\left(t\right)=t$ and $g\left(t\right)=1-t$, $t\in [0,1]$.

**Figure 7.** Fitting curve of degree 5 using $f\left(t\right)=\sin\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sin\left((\Delta -t)/2\right)$, $t\in [-\Delta ,\Delta ]$, and its control polygon.

**Figure 8.** (**Left**): The x-axis represents the parameters t in $[-\Delta ,\Delta ]$ and the y-axis represents the radial error value of the fitting curve obtained using the trigonometric basis. (**Right**): The x-axis represents the parameters t in $[-\Delta ,\Delta ]$ and the y-axis represents the curvature error value of the fitting curve obtained using the trigonometric basis.

**Figure 9.** For different values of n, the history of the loss function, i.e., the mean absolute error, over 3000 training iterations while the fitting curves converge. (**Left**): Loss values of the fitting curve using $f\left(t\right)=t$, $g\left(t\right)=1-t$, $t\in [0,1]$. (**Right**): Loss values of the fitting curve using $f\left(t\right)=\sin\left((\Delta +t)/2\right)$, $g\left(t\right)=\sin\left((\Delta -t)/2\right)$, $t\in [-\Delta ,\Delta ]$. The x-axis represents the iteration of the training algorithm and the y-axis represents the mean absolute error value.

**Figure 10.** (**Left**): Set of data points on the cycloid (dotted), fitting curve obtained using the functions $f\left(t\right)=t$, $g\left(t\right)=1-t$, $t\in [0,1]$ (blue) and fitting curve obtained using the functions $f\left(t\right)={t}^{2}$, $g\left(t\right)=1-{t}^{2}$, $t\in [0,1]$ (green). (**Right**): Fitting error comparison. The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the fitting error value of the fitting curves obtained using the functions $f\left(t\right)=t$, $g\left(t\right)=1-t$, $t\in [0,1]$ (green) and the functions $f\left(t\right)={t}^{2}$, $g\left(t\right)=1-{t}^{2}$, $t\in [0,1]$ (blue).

**Figure 11.** (**Left**): Set of data points on the cycloid (dotted), fitting curve obtained using the trigonometric functions $f\left(t\right)=\sin\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sin\left((\Delta -t)/2\right)$ and fitting curve obtained using the hyperbolic functions $f\left(t\right)=\sinh\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sinh\left((\Delta -t)/2\right)$. (**Right**): Fitting error comparison. The x-axis represents the parameters t in $[-\Delta ,\Delta ]$ and the y-axis represents the curvature error value of the fitting curves obtained using the trigonometric and hyperbolic bases.

**Figure 12.** (**Left**): Set of data points on the Archimedean spiral (dotted) and the fitting curve of degree 11 obtained using the functions $f\left(t\right)=t$ and $g\left(t\right)=1-t$, $t\in [0,1]$. (**Right**): The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the fitting error value of the fitting curve obtained using the functions $f\left(t\right)=t$ and $g\left(t\right)=1-t$.

**Figure 13.** (**Left**): Set of data points on the Archimedean spiral (dotted) and the fitting curve of degree 11 with the functions $f\left(t\right)=\sinh\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sinh\left((\Delta -t)/2\right)$. (**Right**): The x-axis represents the parameters t in $[-\Delta ,\Delta ]$ and the y-axis represents the fitting error value of the fitting curve obtained using the hyperbolic basis.

**Figure 14.** The history of the loss values, i.e., the mean absolute error, over 3000 iterations of the training algorithm while the fitting curves converge. The blue line corresponds to the fitting curve of degree 11 obtained using $f\left(t\right)=t$ and $g\left(t\right)=1-t$, $t\in [0,1]$, and the orange line corresponds to the fitting curve of degree 11 obtained using $f\left(t\right)=\sinh\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sinh\left((\Delta -t)/2\right)$. The x-axis represents the iteration of the training algorithm and the y-axis represents the mean absolute error value. The values are the mean of 50 repetitions.

**Figure 15.** (**Left**): Set of data points on the circle (dotted), fitting curve of degree 5 obtained with the neural network, fitting curve of degree 5 obtained with the least-squares method using the Matlab command `mldivide` (LS1), and fitting curve of degree 5 obtained with the least-squares method using the Matlab command `SVD` (LS2). (**Right**): The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the radial error value of the fitting curves obtained with the different methods.

**Figure 16.** (**Left**): Set of points on the Archimedean spiral curve, fitting curve of degree 11 obtained with the neural network, fitting curve of degree 11 obtained with the least-squares method using the Matlab command `mldivide` (LS1) and fitting curve of degree 11 obtained with the least-squares method using the Matlab command `SVD` (LS2). (**Right**): The x-axis represents the parameters t in $[0,1]$ and the y-axis represents the fitting error value of the fitting curves obtained with the different methods.

**Figure 17.** Noisy set of data points obtained from the baroque image (points in blue), fitting curve obtained by training the proposed neural network (points in green) and fitting curve obtained using the regularized least-squares method (pink). Baroque motif image source: Freepik.com.

**Table 1.** Loss values of the mean absolute error (12) for different fitting curves of degree n with $f\left(t\right)=t$, $g\left(t\right)=1-t$, $t\in [0,1]$ (Basis 1); $f\left(t\right)={t}^{2}$ and $g\left(t\right)=1-{t}^{2}$, $t\in [0,1]$ (Basis 2); $f\left(t\right)=\sin\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sin\left((\Delta -t)/2\right)$, $\Delta <\pi /2$, $t\in [-\Delta ,\Delta ]$ (Basis 3); and, finally, $f\left(t\right)=\sinh\left((\Delta +t)/2\right)$ and $g\left(t\right)=\sinh\left((\Delta -t)/2\right)$, $\Delta <\pi /2$, $t\in [-\Delta ,\Delta ]$ (Basis 4). They were all trained with 4000 iterations, $\alpha =0.0001$, ${\beta }_{1}=0.9$, ${\beta }_{2}=0.999$, $\epsilon ={10}^{-7}$. The process was repeated 5 times, with the loss values provided being the best values reached.

| n | Basis 1 | Basis 2 | Basis 3 | Basis 4 |
|---|---|---|---|---|
| **Circle** | | | | |
| 3 | 3.3946 × $10^{-2}$ | 3.7129 × $10^{-2}$ | 7.0468 × $10^{-2}$ | 3.6438 × $10^{-2}$ |
| 4 | 2.1757 × $10^{-3}$ | 1.5338 × $10^{-2}$ | 3.1678 × $10^{-3}$ | 2.5582 × $10^{-3}$ |
| 5 | 1.7333 × $10^{-4}$ | 9.2269 × $10^{-3}$ | 2.8083 × $10^{-4}$ | 2.2488 × $10^{-3}$ |
| **Cycloid** | | | | |
| 8 | 1.0849 × $10^{-3}$ | 3.6855 × $10^{-4}$ | 3.6017 × $10^{-4}$ | 3.1674 × $10^{-4}$ |
| 9 | 4.6163 × $10^{-4}$ | 3.6855 × $10^{-4}$ | 3.6017 × $10^{-4}$ | 2.4914 × $10^{-4}$ |
| 10 | 3.3944 × $10^{-4}$ | 3.6855 × $10^{-4}$ | 3.6017 × $10^{-4}$ | 2.4914 × $10^{-4}$ |
| **Archimedean spiral** | | | | |
| 11 | 1.5982 × $10^{-3}$ | 1.0474 × $10^{-2}$ | 2.2349 × $10^{-2}$ | 7.8109 × $10^{-4}$ |
| 12 | 1.5982 × $10^{-3}$ | 7.8916 × $10^{-3}$ | 5.7801 × $10^{-3}$ | 7.8109 × $10^{-4}$ |
| 13 | 1.4106 × $10^{-3}$ | 5.2853 × $10^{-3}$ | 5.7801 × $10^{-3}$ | 7.8109 × $10^{-4}$ |

**Table 2.** Execution time of the proposed algorithm, measured in seconds, for different numbers of units and iterations. The values provided are the mean of 5 repetitions with a set of data points of size 100.

| $n+1$ | 1 Iteration | 25 Iterations | 50 Iterations | 100 Iterations | 3000 Iterations |
|---|---|---|---|---|---|
| 5 | 0.1259 | 1.4284 | 2.8381 | 5.7259 | 189.5386 |
| 10 | 0.0989 | 2.0781 | 4.1325 | 10.2672 | 268.8726 |
| 15 | 0.1244 | 2.7142 | 5.3781 | 10.9886 | 347.6139 |
| 50 | 0.6479 | 8.2589 | 13.3576 | 27.4398 | 850.6713 |
| 100 | 1.1624 | 14.3999 | 32.6576 | 65.3298 | 1521.3971 |

**Table 3.** For different numbers of weights and control points, the execution times of the least-squares method using the Matlab commands `mldivide` and `SVD`, and the execution time of Algorithm 1 for 3000 iterations. The values were measured in seconds with a set of data points of size 100 and are the mean of 5 repetitions.

| $n+1$ | `mldivide` | `SVD` | Algorithm 1 (3000 Iterations) |
|---|---|---|---|
| 5 | 63.3014 | 65.6015 | 189.5386 |
| 10 | 167.0402 | 237.0386 | 268.8726 |
| 15 | 309.5811 | 457.8987 | 347.6139 |
| 20 | 478.1141 | 774.3472 | 429.2819 |
| 30 | 961.2860 | 978.6830 | 568.5647 |
| 50 | 2552.4064 | 4097.9278 | 850.6713 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Gonzalez-Diaz, R.; Mainar, E.; Paluzo-Hidalgo, E.; Rubio, B.
Neural-Network-Based Curve Fitting Using Totally Positive Rational Bases. *Mathematics* **2020**, *8*, 2197.
https://doi.org/10.3390/math8122197
