# Adjustment of Planned Surveying and Geodetic Networks Using Second-Order Nonlinear Programming Methods

Department of Engineering Geodesy, Saint-Petersburg Mining University, 199106 Saint-Petersburg, Russia

Author to whom correspondence should be addressed.

Academic Editor: Demos Tsahalis

Received: 8 November 2021 / Revised: 29 November 2021 / Accepted: 1 December 2021 / Published: 3 December 2021

(This article belongs to the Section Computational Engineering)

Due to the huge amount of redundant measurement data, the problem arises of finding a single integral solution that satisfies numerous possible accuracy requirements. Mathematical processing of such measurements by traditional geodetic methods can take significant time and still not provide the required accuracy. This article discusses the application of nonlinear programming methods in the computational processing of geodetic data. Thanks to the development of computer technology, a modern surveyor can solve newly emerging production problems using nonlinear programming methods, carrying out preliminary computational experiments that allow the effectiveness of a particular method for a specific problem to be evaluated. A comparison of the efficiency and performance of various nonlinear programming methods in the course of trilateration network equalization on a plane is presented. An algorithm of a modified second-order Newton's method is proposed, based on the use of the matrix of second partial derivatives together with the Powell method and the Davis–Sven–Kempy (DSK) method in the computational process. The new method simplifies the computational process and relieves the user of the need to calculate high-accuracy preliminary values of the determined parameters, since it expands the region of convergence of the solution.

Over the past thirty years, surveying and geodetic equipment has made a great leap forward. Such a rapid development of technology allowed surveyors to receive and process an enormous amount of data about objects. The use of devices, such as tacheometers, laser trackers and scanning laser systems, as well as satellites in surveying and geodetic practice, made it possible to increase the speed and accuracy of the data obtained. The use of modern surveying and geodetic methods in the construction of buildings is especially important; this is noted in works [1,2], as well as when determining deformations [3].

An important element in the solution of any surveying and geodetic problem is the office processing of measurement results (rejection of gross errors, equalization, assessment of the accuracy of the solutions obtained). Redundancy of measurements increases the accuracy and plausibility of the obtained solutions; however, as the amount of information grows, the complexity of data processing also grows, as noted in articles [4,5]. Solving such problems requires high-performance computers and special software; this thesis is confirmed in the works of L.A. Goldobina [6], N.S. Kopylova [7], V.F. Kovyazina [8], A.M. Rybkina [9,10] et al. [11,12]. The development of computer technology makes it possible to automate the solution of many engineering problems, as well as to carry out computational experiments by modeling; in the works of authors such as P.A. Demenkova [13,14], N.V. Vasilyeva [15], E.V. Katuntsova [16] and A.A. Kochnevoy [17] et al. [18,19,20], special software products were used to solve engineering problems. The redundancy of measurements allows the surveyor to choose the optimal solution, taking into account the limiting criteria. However, when the data array is huge and the power of the computer does not allow quick processing and obtaining of the result, the solution process can be optimized by various methods. In this regard, the topic of optimizing solutions for various industrial surveying and geodetic problems is very relevant; this idea finds confirmation in articles [21,22,23].

In the mathematical aspect, optimization should be understood as a sequence of actions whose implementation contributes to obtaining a solution or refining an existing one. Optimization methods have long been used in geodesy and surveying; the famous mathematician and geodesist C.F. Gauss is the author of many works on this topic. There are many groups of optimization methods that can be applied in geodesy and surveying. It should be noted that problems associated with solving nonlinear equations differ from linear problems in that there is no single, standard solution method. Depending on the restrictive conditions and the type of objective function, a different set of solutions can be obtained, from which the best shall be chosen. Therefore, studying the applicability of various methods to problems of a certain group is the best way to choose an optimization method for a specific problem. The article discusses methods of nonlinear programming. For a number of reasons, these methods are best suited for implementation in the surveying and geodetic computational process, namely:

- (1) nonlinear programming methods allow the nonlinear and linear conditions that limit the objective function to be taken into account;
- (2) these methods allow large systems of equations to be solved using algorithms that are most suitable for implementation on modern computers;
- (3) some nonlinear programming methods (such as the second-order Newton's method) make it possible to solve nonlinear equations without linearizing the original parametric equations;
- (4) nonlinear programming methods make it possible to obtain a solution not only with the objective function of the least squares method, which is classical in geodesy and surveying, but also in other ways, in accordance with the selected criterion function.

The third point is especially important, since there are many problems in geodesy and surveying where the desired parameters are determined by solving nonlinear systems of equations, for example: calculating coordinate transformation (transition) keys, equalizing surveying and geodetic networks and building terrain models. The fourth point makes it possible to experiment and choose optimization criteria other than the least squares method.

From all of the above, it can be concluded that it is advisable not only to apply nonlinear programming methods in surveying and geodetic computations, but also to improve their algorithms for geodesy and surveying in the future. Among the methods of nonlinear programming, two main groups can be distinguished—these are methods based on the use of derivatives of various orders and methods that calculate the extremum point without using derivatives (direct search methods). In this work, the methods of the first group are considered in detail, since their use provides a number of advantages:

- (1) a large number of previously developed methods with clearly formulated algorithms that are easy to implement on a computer;
- (2) the ability to use several methods at different stages of solving one problem, in order to obtain the best result.

It should be said that the methods of this group have serious downsides. The main one is the preliminary preparation of the problem for solution: derivatives of different orders must be calculated at each iteration, for which an algorithm is drawn up in advance according to which the derivatives of a specific objective function will be computed. This takes particularly long if the function is not specified analytically, since the derivatives then have to be calculated numerically. It is also necessary to take into account that the objective function shall be continuous, otherwise the problem has no solution. These downsides are reflected by G.G. Shevchenko in her works [24,25], where she proposes using direct search methods (which do not use derivatives in the iterative process) when solving surveying and geodetic optimization problems.

The article analyzes the possibility of applying the second-order Newton's method to surveying and geodetic optimization problems, in particular, to equalizing a trilateration network on a plane. Today, the design and equalization of surveying and geodetic constructions using new methods is a very relevant topic; this is confirmed in works [24,25,26]. Newton's method was chosen because it has the following advantages:

- (1) the method has a quadratic convergence rate of the iterative process, in contrast to first-order (gradient) methods, which have a linear convergence rate;
- (2) for any quadratic objective function with a positive definite matrix of second partial derivatives (Hessian matrix), the method gives an exact solution in one iteration;
- (3) low sensitivity to the choice of preliminary values of the determined parameters, in comparison with gradient methods.

The second-order Newton’s method was used to equalize the planned trilateration network.

The second-order Newton’s method is included in the group of nonlinear programming methods of the second order [26,27]. More generally, the second-order Newton’s method is an iterative method that applies a quadratic approximation to the original nonlinear objective function at each iteration. To evaluate the convergence of the method, a necessary condition is the threefold differentiability of the studied function. The existence of the second derivative at the extremum point provides a high rate of convergence of the method, in comparison with the first-order methods [28,29]. The method was studied in detail in the work of N.N. Eliseeva [30] and in the works [31,32,33], and was also applied by the authors of the article in [34]. However, the possibility of using the method in surveying and geodetic practice, when solving production problems, has almost not been studied. There were a number of objective reasons for this, which will be discussed below.

To derive the main formula of the second-order Newton's method, it is necessary to expand the original objective function in a Taylor series (1):

$$f(x)\approx f(x^*)+f'(x^*)\cdot (x-x^*)+\frac{1}{2}f''(x^*)\cdot (x-x^*)^2,$$

where $f'(x^*)$ is the first-order derivative of the function $f(x)$, $x^*$ is the minimum point of the function, and $f''(x^*)$ is the matrix of second derivatives of the objective function $f(x)$ at the point $x^*$.

The second-order Newton's method is based on the quadratic approximation of the function; therefore, the first three terms of the Taylor series are taken into account to derive the iterative formula [27,35]. Having obtained the value $x^*$, it is possible to calculate the next approximation $x_{k+1}$ to the extremum point. Replacing $x^*$ with $x_k$ and $x$ with $x_{k+1}$ in Formula (1), and denoting $\Delta x_k = x_{k+1}-x_k$, one obtains Formula (2):

$$f(x_{k+1})\approx f(x_k)+f'(x_k)\cdot \Delta x_k+\frac{1}{2}f''(x_k)\cdot \Delta x_k^2.$$

To determine the extremum in the direction $\Delta x_k$, the function $f(x_{k+1})$ is differentiated with respect to each component of $\Delta x_k$ and the resulting expression is equated to zero (3):

$$f'(x_k)+f''(x_k)\cdot \Delta x_k=0.$$

Expressing the variable $x_{k+1}$ from Formula (3), the main formula of the second-order Newton's method is obtained, according to which the iterative process (4) is constructed:

$$x_{k+1}=x_k-\frac{f'(x_k)}{f''(x_k)},$$

where $f'(x_k)$ and $f''(x_k)$ are the first and second derivatives of the function $f(x)$ at the point $x_k$ in approximation $k$.
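The one-dimensional iteration (4) can be sketched as follows; the quartic test function, starting value and tolerance are illustrative assumptions, not taken from the article.

```python
# Sketch of the one-dimensional Newton iteration (4): x_{k+1} = x_k - f'(x_k)/f''(x_k).

def newton_1d(f1, f2, x0, eps=1e-6, max_iter=50):
    """Minimize a function given its first (f1) and second (f2) derivatives."""
    x = x0
    for _ in range(max_iter):
        step = f1(x) / f2(x)
        x -= step
        if abs(step) <= eps:    # stopping criterion on the change of the parameter
            break
    return x

# hypothetical test: f(x) = (x - 2)^4 + 1, minimum at x = 2
xmin = newton_1d(lambda x: 4 * (x - 2) ** 3,
                 lambda x: 12 * (x - 2) ** 2,
                 x0=5.0)
```

Note that for this non-quadratic function the method needs several iterations, in agreement with the remark below that one iteration suffices only for a quadratic objective.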

Formula (4) describes the iterative process for a function of one variable. Writing expression (4) in matrix form, one obtains the iterative formula of the method for the multidimensional case (functions of several variables) (5):

$$X_{k+1}=X_k-H_k^{-1}\cdot \nabla f_k,$$

where $\nabla f_k$ is the column vector of first derivatives (the gradient) of the objective function in approximation $k$; $H_k$ is the matrix of second partial derivatives (the Hessian matrix) of the objective function, of dimension $n\times n$, in approximation $k$; $X_k$ and $X_{k+1}$ are the column vectors of the determined parameters in approximations $k$ and $k+1$ [33].

A distinctive feature of the classical second-order Newton's method is that it is not necessary to determine the iteration step in the iterative process. The rate of convergence, as well as the direction of the search, depends on the Hessian matrix (6):

$$H(x_1,\dots ,x_n)=\begin{pmatrix}\dfrac{\partial^2 f}{\partial x_1\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n}\\ \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n}\\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n\,\partial x_n}\end{pmatrix},\qquad f=f(x_1,\dots ,x_n).$$

The main downside of the second-order Newton’s method is the calculation of the Hessian matrix [27]; therefore, this method is little used in practice, since the calculation of the Hessian matrix at each iteration is a rather complicated computational process. However, the introduction of personal computers made it possible to automate the process of calculating the Hessian matrix. The calculation of partial derivatives can be implemented by a numerical method using one of the programming languages. Due to this, the problem of calculating partial derivatives for any objective function can be fully automated.

At each iteration, it is necessary to determine the definiteness of the Hessian matrix. The matrix of second partial derivatives shall be positive definite at each iteration, $H(f)>0$; only if this condition is met will the search direction lead to a decrease in the objective function $f(x)$. At iterations where the Hessian matrix is negative definite, $H(f)<0$, the search direction must be changed. In this work, it is proposed to use gradient methods to determine the direction of decrease of the objective function at iterations where the matrix of second derivatives is not positive definite [36].
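A minimal sketch of this direction-selection rule, assuming the Sylvester criterion is evaluated through leading principal minors and NumPy is available; the test matrices and gradient are hypothetical.

```python
import numpy as np

def is_positive_definite(H):
    """Sylvester criterion: all leading principal minors are positive."""
    n = H.shape[0]
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, n + 1))

def search_direction(H, grad):
    """Newton direction -H^{-1}*grad if H > 0, otherwise the antigradient."""
    if is_positive_definite(H):
        return -np.linalg.solve(H, grad)   # Newton direction, as in formula (5)
    return -grad                           # fallback to a gradient (steepest-descent) step

H_good = np.array([[2.0, 0.0], [0.0, 3.0]])   # positive definite
H_bad = np.array([[-1.0, 0.0], [0.0, 2.0]])   # indefinite
g = np.array([1.0, 1.0])
d1 = search_direction(H_good, g)   # Newton direction
d2 = search_direction(H_bad, g)    # falls back to -g
```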

The main advantages of the second-order Newton’s method:

- (1) if the function is quadratic and the preliminary values of the determined parameters are close to the true ones, one iteration is required to find the minimum of the objective function $f(x)$;
- (2) the use of the second partial derivatives in the iterative process increases the convergence rate and the accuracy of the results;
- (3) the method is less sensitive to the choice of initial parameter values than first-order methods.

If the objective function $f(x)$ is not quadratic, then $k$ iterations are required to reach the extremum point until the condition for stopping the iterative process is satisfied [37].

The second-order Newton's method also has a number of disadvantages that must be taken into account when implementing it:

- (1) when the first and second-order derivatives are calculated numerically (by finite differences), the accuracy and speed of the method decrease, due both to the approximate calculations and to the inaccurate approximation of the original objective function; this effect is especially perceptible around the minimum point, where the first-order derivatives become rather small quantities;
- (2) when the objective function is not quadratic, the iterative process can loop;
- (3) as mentioned above, the positive definiteness of the Hessian matrix must be checked at each iteration (by the Sylvester criterion), since this is the main condition for the convergence of the method;
- (4) setting the initial parameter values is difficult when the function is poorly defined (lack of initial data);
- (5) the second partial derivatives of the minimized function must be calculated.

As stated above, the second-order Newton's method was not used in geodesy and surveying because, at the time, complete automation of the computational process was impossible.

The convergence rate of Newton’s method in the vicinity of a strictly local minimum point is very high (quadratic). The method will not work if the Hessian matrix is degenerate (the determinant of the matrix is zero), and this method may also diverge [38].

The high rate of convergence of Newton's method can be explained by the fact that the quadratic trinomial (2), constructed taking into account information about both the first and the second derivatives of the objective function, approximates a convex, twice differentiable nonlinear function with high accuracy in a sufficiently small neighborhood of the minimum point.

The process of finding the optimal solution using nonlinear programming methods has an iterative nature, which means that, with an increase in the number of iterations, the probability of arriving at the correct solution increases. An important element of the correct operation of all iterative methods is the criterion (rule) for stopping the computational process. It is this criterion that sets the accuracy (from the point of view of mathematics, not geodesy) of achieving a solution, as well as the effectiveness of the method and the amount of computation.

The following stopping criteria are most common in optimization theory:

- By the absolute value of the difference between the next and previous values of the determined parameter (7):$$\left|x_{k+1}-x_k\right|\le \epsilon .$$
- By the absolute value of the difference between the values of the objective function at the next and previous iterations (8):$$\left|f(x_{k+1})-f(x_k)\right|\le \epsilon .$$
- By the absolute value of the derivative of the objective function at the current iteration (9):$$\left|\frac{\partial f(x)}{\partial x}\right|\le \epsilon .$$

Using only one of the criteria can lead to a "false" solution; therefore, it is recommended to take several stopping criteria into account in the software algorithm. In all three criteria, the values are compared with a known small number $\epsilon$, which the user sets based on practical experience in solving such problems or calculates using formulas.
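The recommendation to combine criteria (7)–(9) can be sketched as follows for the one-dimensional case; the sample values fed to the function are hypothetical.

```python
# Sketch: combined stopping rule for the criteria (7)-(9).
# x_prev/x_next and f_prev/f_next are the parameter and objective values at two
# successive iterations; grad_next is the derivative at the current iteration.

def should_stop(x_prev, x_next, f_prev, f_next, grad_next, eps=1e-3):
    by_argument = abs(x_next - x_prev) <= eps    # criterion (7)
    by_function = abs(f_next - f_prev) <= eps    # criterion (8)
    by_gradient = abs(grad_next) <= eps          # criterion (9)
    # require all three, so that a single "false" trigger does not stop the search
    return by_argument and by_function and by_gradient

stop = should_stop(1.0005, 1.0009, 2.00001, 2.00000, 0.0004)   # all three satisfied
```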

In this work, the authors analyze the data obtained using methods of the first and second-order, which in the iterative process use derivatives of various orders. Therefore, the authors consider it necessary to note in the work the methods allowing the calculation of derivatives of various orders.

One way to calculate derivatives is numerical differentiation. Mathematicians turn to it when the function is given in a table or when direct differentiation is difficult, for example, in the case of a complex analytical form of the function; the derivative is then interpolated. The first-order derivative can be calculated by Formula (10):

$$\frac{\partial f(x_1,\dots ,x_n)}{\partial x_n}=\frac{f(x_1,\dots ,x_n+h)-f(x_1,\dots ,x_n-h)}{2\cdot h},$$

where $\frac{\partial f(x_1,\dots ,x_n)}{\partial x_n}$ is the first derivative of the objective function $f(x_1,\dots ,x_n)$ with respect to the parameter $x_n$, and $h$ is a small increment of the objective function argument.

The increment value $h$ affects both the accuracy of the resulting derivative value and the amount of computation. If $h$ is chosen too small, the round-off error of computer arithmetic can become comparable to, or greater than, the increment itself. Formula (10), called the central difference scheme, reduces the error in calculating the derivative and, according to [33], is the best way to calculate the first-order derivative.

By analogy with the difference scheme for the first derivative, one can obtain a formula for calculating the second-order derivative of the objective function (11):

$$\frac{\partial^2 f(x_1,\dots ,x_n)}{\partial x_n^2}=\frac{f(x_1,\dots ,x_n+h)-2f(x_1,\dots ,x_n)+f(x_1,\dots ,x_n-h)}{h^2},$$

where $\frac{\partial^2 f(x_1,\dots ,x_n)}{\partial x_n^2}$ is the second-order derivative of the objective function $f(x_1,\dots ,x_n)$ with respect to the parameter $x_n$.
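The central-difference schemes (10) and (11) can be sketched as follows; the test function, evaluation point and increments are illustrative assumptions.

```python
# Central-difference approximations of the partial derivatives of f with
# respect to its i-th argument; h is the small increment of the argument.

def first_derivative(f, x, i, h=1e-5):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)            # scheme (10)

def second_derivative(f, x, i, h=1e-4):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - 2 * f(x) + f(xm)) / h ** 2  # scheme (11)

# hypothetical check on f(x, y) = x^2 + 3y: df/dx = 2x, d2f/dx2 = 2 at (1.5, 0)
f = lambda v: v[0] ** 2 + 3 * v[1]
d1 = first_derivative(f, [1.5, 0.0], i=0)
d2 = second_derivative(f, [1.5, 0.0], i=0)
```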

In the article, the second-order Newton's method is compared with the conjugate gradient method (a first-order method). This method was chosen because it is the most common in geodetic practice and was used by such well-known surveyors as A.V. Zubov [39,40], V.A. Kougia [37,41], B.T. Mazurov [42], S.G. Shnitko [43] and other Russian [1,44,45] and foreign [46,47] specialists. Its main advantage is that the algorithm does not use second derivatives.

The conjugate gradient method is a continuation of the development of the steepest descent method; it combines two concepts: the gradient of the objective function and conjugate directions of vectors. The main iterative formula of the method is written as (12):

$$x_{k+1}=x_k-\lambda_k\cdot P_k,$$

where $P_k$ is the vector of the conjugate search direction and $\lambda_k$ is the length of the movement step at each iteration.

At the zero iteration, the vector is taken equal to the gradient, $P_0=\nabla f(x_1,\dots ,x_n)$. In subsequent iterations, the vector $P_k$ is calculated by Formula (13):

$$P_k=\nabla f(x_1,\dots ,x_n)_k+\beta_k\cdot P_{k-1},$$

where $\beta_k$ is the weighting factor used to determine the conjugate directions.

The weighting factor ${\beta}_{k}$ can be determined using the Fletcher–Reeves Formula (14):

$${\beta}_{k}=\frac{{\left|\nabla f{({x}_{1},\dots ,{x}_{n})}_{k}\right|}^{2}}{{\left|\nabla f{({x}_{1},\dots ,{x}_{n})}_{k-1}\right|}^{2}}.$$

According to the presented formulas, the new conjugate direction is obtained by adding the previous direction of movement, multiplied by the coefficient $\beta_k$, to the gradient at the current point. Thus, the conjugate gradient method constructs the search direction toward the optimal value using information about the search obtained at the previous stages of the descent.

It is worth noting that works [27,36] point out the benefit of restarting the algorithmic procedure every $n+1$ steps (where $n$ is the number of parameters to be determined). The restart is necessary in order to "erase" the last search direction and start the search algorithm anew in the direction of steepest descent.

As noted above, the value of step ${\lambda}_{k}$ affects the performance of the method. The step size in the conjugate gradient method is selected from the condition of the minimum objective function in the direction of motion, that is, as a result of solving the problem of one-dimensional optimization in the direction of the antigradient.
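The scheme of formulas (12)–(14), the restart every $n+1$ steps and the one-dimensional choice of $\lambda_k$ can be sketched as follows. The three-point parabolic line search and the quadratic test function are illustrative assumptions; the article does not prescribe a specific line-search technique.

```python
import numpy as np

def line_step(phi):
    """Step length from a parabola through phi(0), phi(1), phi(2).

    Exact for a quadratic objective along the direction; a general-purpose
    implementation would need a safeguard for non-convex sections.
    """
    p0, p1, p2 = phi(0.0), phi(1.0), phi(2.0)
    denom = 2.0 * (p0 - 2.0 * p1 + p2)
    return (3.0 * p0 - 4.0 * p1 + p2) / denom if denom != 0 else 1e-3

def conjugate_gradient(f, grad, x0, eps=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    n = x.size
    g = grad(x)
    p = g.copy()                                   # P_0 = grad f, used with the minus sign in (12)
    for k in range(max_iter):
        lam = line_step(lambda t: f(x - t * p))    # lambda_k from the 1-D search
        x = x - lam * p                            # iterative formula (12)
        g_new = grad(x)
        if np.linalg.norm(g_new) <= eps:
            break
        if (k + 1) % (n + 1) == 0:
            p = g_new.copy()                       # restart: "erase" the accumulated direction
        else:
            beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves factor (14)
            p = g_new + beta * p                   # conjugate direction (13)
        g = g_new
    return x

# hypothetical quadratic test: f = x^2 + 10*y^2, minimum at the origin
xmin = conjugate_gradient(lambda v: v[0] ** 2 + 10 * v[1] ** 2,
                          lambda v: np.array([2 * v[0], 20 * v[1]]),
                          [3.0, 1.0])
```

For this two-parameter quadratic, the method reaches the minimum in two iterations, in line with the theory of conjugate directions.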

As mentioned above, the second-order Newton’s method was applied to equalize the trilateration network; the network configuration is shown in Figure 1.

The purpose of solving the problem is to calculate the plane coordinates of points 1–5. To determine the coordinates, an objective function was compiled with the condition of minimizing the weighted sum of squared corrections (15):

$$f(z)=\sum _{i=1}^{n}\left[p_i\cdot (S_{c_i}-S_{m_i})^2\right],$$

where $n$ is the number of measured distances between points, $p_i$ are the weights of the measured sides, $S_{c_i}$ are the calculated distances, $S_{m_i}$ are the measured distances, and $z$ is the vector of objective function arguments.

The elements $S_{c_i}$ are calculated by Formula (16):

$$S_c=\sqrt{(X_E-X_S)^2+(Y_E-Y_S)^2},$$

where $X_E,Y_E$ are the coordinates of the end point of the side and $X_S,Y_S$ are the coordinates of the starting point of the side.
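A sketch of evaluating the objective function (15), with the calculated distances obtained from the coordinates by formula (16) and weights $p_i = 1$. The three-point configuration and "measured" lengths below are hypothetical, not the network from Figure 1.

```python
import math

def calculated_distance(start, end):
    """Formula (16): distance between the starting and end points of a side."""
    return math.hypot(end[0] - start[0], end[1] - start[1])

def objective_lsq(sides, coords, measured, weights=None):
    """Formula (15): weighted sum of squared corrections S_c - S_m."""
    total = 0.0
    for k, (i, j) in enumerate(sides):
        p = 1.0 if weights is None else weights[k]
        total += p * (calculated_distance(coords[i], coords[j]) - measured[k]) ** 2
    return total

coords = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (6.0, 0.0)}
sides = [("A", "B"), ("B", "C"), ("A", "C")]
measured = [5.002, 4.999, 6.001]           # hypothetical measured lengths, m
f_value = objective_lsq(sides, coords, measured)
```

In an actual adjustment, the coordinates of the determined points would be the arguments varied by the optimization method until $f(z)$ reaches its minimum.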

Traditionally, surveying and geodetic networks are adjusted using a parametric method. The essence of this method is:

- (1) drawing up parametric communication equations;
- (2) linearization of these equations by expansion into a Taylor series, taking into account only first-order derivatives;
- (3) solution of the obtained systems of equations based on the least squares method.

As can be seen, the traditional method of equalization does not allow the use of objective functions other than the least squares method. Using nonlinear programming methods, such a possibility appears. Therefore, the network equalization (Figure 1) was also performed using the objective function, which is the minimum of the sum of the modules of the distance corrections. This objective function is expressed by the Formula (17):

$$f(z)={\displaystyle \sum _{i=1}^{n}\left[{p}_{i}\cdot \left|{S}_{{c}_{i}}-{S}_{{m}_{i}}\right|\right]}.$$
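The alternative objective (17) replaces the squares of the corrections with their absolute values; a minimal standalone sketch, with hypothetical residuals $S_{c_i}-S_{m_i}$ passed in directly.

```python
# Formula (17): weighted sum of the absolute values of the corrections.

def objective_labs(residuals, weights=None):
    if weights is None:
        weights = [1.0] * len(residuals)
    return sum(p * abs(r) for p, r in zip(weights, residuals))

f_l1 = objective_labs([-0.002, 0.001, -0.001])   # unit weights
```

Compared with the squared objective (15), this criterion penalizes large residuals less strongly, which is why it can lead the search to a different solution.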

The coordinates of the starting points are presented in Table 1.

The lengths of lines are presented in Table 2. Due to the uniformity of the measurements, the weights ${p}_{i}$ of all measured sides were taken as equal to one.

The search for the minimum of the objective function (15) by the second-order Newton's method (5) was carried out in the MathCAD 15 environment. A part of the program is shown in Figure 2.

The preliminary values of the coordinates of the points, as well as the data obtained during the iterative process when using two methods, are presented in Table 3.

The values of the preliminary coordinates were taken specifically far from the minimum point of the objective function (15). This was done to test the main advantage of the second-order Newton’s method, namely, the small dependence of the convergence of the method on the preliminary values of the determined parameters, in comparison with gradient methods. The preliminary values of the parameters are presented in Table 4. Table 4 also presents the data obtained during the study.

The use of nonlinear programming methods makes it possible to perform equalization, not only using the objective function of the least squares method (15) but also using the objective function of the least modulus method (17). The data obtained using the new objective function are presented in Table 5.

As can be seen from Table 3, the coordinates of the determined points of the network were obtained in three approximations using the second-order Newton's method. The criterion for stopping the search process was the value $\epsilon$, which was taken equal to 0.001 m. Figure 3 shows a simplified visualization of the iterative process performed by the two methods. After analyzing Table 3 and Figure 3, we can conclude that Newton's method is more efficient than the gradient method.

When using Newton’s method, the iterative process is not built according to a linear law, as in the gradient method. The use of second-order partial derivatives allows us to talk about a quadratic approximation of the objective function, which in turn reduces the number of iterations. From Figure 3, it can be seen that, when approaching the minimum point of function (15), the size of the iteration step in the gradient method decreases, which in turn increases the number of calculations.

According to the data presented in Table 3, the values of the point coordinates calculated by the two methods differ. This can be explained by the fact that the linearization in Newton's method uses the second partial derivatives of the function, which describe its curvature and allow the smallest value along the curve to be found, whereas the gradient method requires only the values of the first derivatives (geometrically, a tangent), so the minimum is sought along a straight line rather than along the curve itself.

As mentioned above, the main advantage of Newton's method is its weaker dependence on the choice of preliminary values of the sought parameters, in comparison with gradient methods. The coordinates of the network points were calculated using preliminary values deliberately set far from the minimum point. Table 4 shows the data obtained during the iterative process. The number of iterations increased for both methods, but for Newton's method it is an order of magnitude smaller than for the gradient method. The large discrepancy (more than 100 m) between the preliminary values and the obtained coordinates still affected the accuracy of the latter, since the value of the objective function increased.

The use of nonlinear programming methods made it possible to calculate the coordinates of points not only with the objective function (15), but also with the objective function of the method of least modules (17). The data obtained during the iterative process are presented in Table 5. It is worth noting that, despite the closeness of the preliminary values to the minimum point, the number of iterations increased for both methods compared to the variants in which objective function (15) was used. The function values also increased. Analyzing the data in Table 5, we can say that the gradient method most likely found a local minimum, since the value of the objective function is quite large.

Today it is necessary to develop an algorithm that allows the user to obtain a correct answer with high accuracy in a short time, regardless of the preliminary values of the determined parameters. The second-order Newton's method has such resources: owing to the use of the second partial derivatives of the objective function, the speed of solving the problem is higher and fewer approximations are needed than with first-order methods.

However, in the course of the computational experiment, it was found that this method does not give the correct solution for all preliminary values of the parameters; sometimes the method simply does not work. This is primarily because the Hessian matrix indicates the direction of decrease of the function only if it is positive definite. Therefore, the user needs to prepare the problem for solution, namely, to choose preliminary values that do not make the Hessian matrix negative definite. If this condition is not met, the method may diverge and loses its main advantage, the speed of the solution. Using only direct search methods expands the range of admissible preliminary values, since these methods impose no restrictions on the sign of derivatives (derivatives are not used in the iterative process); however, more conditions for calculating values of the objective function must be set, which complicates the search process and increases the search time.

The authors of the article propose a software algorithm based on the second-order Newton's method combined with direct search methods, in particular the Powell method and the Davis–Sven–Kempy (DSK) method. This combination preserves the strengths of the second-order Newton's method while reducing its dependence on the preliminary values of the determined parameters; ideally, the number of iterations should not depend on those preliminary values at all. The main reason for combining the second-order Newton's method with direct search methods is to increase the speed of the optimization process. A combination of the DSK method and the Powell method was used to create the modified second-order Newton's method.

The essence of the algorithm based on a combination of the DSK method and the Powell method is as follows [48]:

- The objective function $F({x}^{1},{x}^{2},\dots ,{x}^{n})$, depending on the parameters ${x}^{1},{x}^{2},\dots ,{x}^{n}$ to be determined, is set.
- The preliminary value of the parameter ${x}^{1}$ and the increment step $\mathsf{\Delta}{x}_{1}$ are set.
- The increment $\mathsf{\Delta}{x}_{1}$ is added to and subtracted from the first parameter ${x}^{1}$ only; the remaining parameters are also given preliminary values, but they stay unchanged.
- The values of the objective function $F({x}^{1}-\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n})$ and $F({x}^{1}+\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n})$ are calculated with the changed parameter.
- The new value of the determined parameter is calculated by Formula (18):$${x}^{1*}={x}^{1}+\frac{\mathsf{\Delta}{x}_{1}(F({x}^{1}-\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n})-F({x}^{1}+\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n}))}{2\cdot (F({x}^{1}-\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n})-2F({x}^{1},{x}^{2},\dots ,{x}^{n})+F({x}^{1}+\mathsf{\Delta}{x}_{1},{x}^{2},\dots ,{x}^{n}))}.$$
- The next parameter ${x}^{2}$ is changed and the new value of the function is calculated, with ${x}^{1*}$ substituted into the objective function instead of ${x}^{1}$.
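One pass of the coordinate-wise step can be sketched as follows (illustrative code, not the authors' implementation); the denominator is the standard three-point parabola curvature $F(x-\mathsf{\Delta})-2F(x)+F(x+\mathsf{\Delta})$:

```python
import numpy as np

def quadratic_step(f, x, i, dx):
    """Move the i-th parameter to the vertex of the parabola fitted
    through f at x_i - dx, x_i and x_i + dx (the idea of Formula (18))."""
    xm = x.copy(); xm[i] -= dx
    xp = x.copy(); xp[i] += dx
    fm, f0, fp = f(xm), f(x), f(xp)
    denom = 2.0 * (fm - 2.0 * f0 + fp)   # three-point curvature estimate
    if abs(denom) < 1e-15:               # degenerate parabola: no move
        return x.copy()
    x_new = x.copy()
    x_new[i] += dx * (fm - fp) / denom
    return x_new
```

For a quadratic objective the step is exact: sweeping the coordinates once lands directly on the minimum, which is why two sweeps usually suffice to refine rough preliminary values.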

In general, this method may require a fairly large number of iterations to find the optimal solution. Its main advantage, however, is that its region of convergence is much larger than that of the classical second-order Newton's method.

The authors of the article have developed the following algorithm to minimize the main disadvantages of Newton's method:

- Step 1: The user creates an objective function $F({x}^{1},{x}^{2},\dots ,{x}^{n})$ and chooses the criterion under which its minimum is sought (the method of least squares or the method of least modules); the least squares method is recommended for geodetic tasks;
- Step 2: Any preliminary values of the determined parameters are set (it is recommended to use either values close to the true ones or to set all parameters to zero);
- Step 3: Using quadratic approximation, namely the Powell–DSK method, the preliminary values are refined in two approximations according to Formula (18);
- Step 4: The Hessian matrix is formed and checked for positive definiteness; if the condition is met, the refined preliminary values are passed to the next step, otherwise Step 3 is performed again;
- Step 5: The refined preliminary values are used in the second-order Newton's method; the matrix of first derivatives and the matrix of second derivatives are formed;
- Step 6: The iterative process is performed according to Formula (5) until the stopping criterion (Formula (7)), chosen by the user, is met;
- Step 7: The accuracy of the obtained parameter values is evaluated using the inverse weight matrix.
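Steps 1 through 6 can be sketched as follows. This is an illustrative implementation with finite-difference derivatives; the step sizes, tolerances and the test function are assumptions, and the accuracy assessment of Step 7 is omitted:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def is_positive_definite(H):
    """Step 4 check: Cholesky succeeds only for positive definite matrices."""
    try:
        np.linalg.cholesky(H)
        return True
    except np.linalg.LinAlgError:
        return False

def dsk_powell_sweep(f, x, dx):
    """Step 3: one coordinate-wise quadratic-approximation sweep (Formula (18))."""
    x = x.copy()
    for i in range(len(x)):
        xm = x.copy(); xm[i] -= dx
        xp = x.copy(); xp[i] += dx
        fm, f0, fp = f(xm), f(x), f(xp)
        denom = 2.0 * (fm - 2.0 * f0 + fp)
        if abs(denom) > 1e-15:
            x[i] += dx * (fm - fp) / denom
    return x

def modified_newton(f, x0, dx=0.5, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    # Steps 3-4: refine preliminary values until the Hessian is positive definite.
    for _ in range(50):
        x = dsk_powell_sweep(f, dsk_powell_sweep(f, x, dx), dx)
        if is_positive_definite(hessian(f, x)):
            break
    # Steps 5-6: classical second-order Newton iterations.
    for _ in range(max_iter):
        g = grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hessian(f, x), g)
    return x
```

The sweep in Steps 3 and 4 plays the role of a globalization device: it moves rough preliminary values into a region where the Hessian is positive definite, after which the plain Newton iteration takes over.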

When performing the equalization of surveying and geodetic measurements using nonlinear programming methods, difficulties arise in the accuracy assessment: the iterative process of the first- and second-order methods does not require compiling the matrix of normal equations of the unknowns, so the inverse weight matrix $Q$ cannot be obtained from it directly. The assessment of the accuracy of results obtained with nonlinear programming methods received attention in the works of G.V. Makarov [49], V.I. Mitskevich [35,50,51,52] and Ch.N. Zheltko [41]. V.I. Mitskevich notes that a fragment of the inverse weight matrix can be obtained by the generalized method of G.V. Makarov in [49,53].

The procedure for evaluating the accuracy of the coordinates of the points of a mine surveying and geodetic network obtained by nonlinear programming methods is given in [54,55,56,57,58,59]. The accuracy estimation algorithm using nonlinear programming methods is described in detail in [60]. V.I. Mitskevich asserts in [50,52,60] that, when the least-squares objective function is optimized by the second-order Newton's method, the inverse weight matrix can be composed from the Hessian matrix according to Formula (19):

$$Q=2{H}^{-1}=2\cdot {\left(\begin{array}{cccc}\frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{1}\partial {x}^{1}}& \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{1}\partial {x}^{2}}& \cdots & \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{1}\partial {x}^{n}}\\ \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{2}\partial {x}^{1}}& \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{2}\partial {x}^{2}}& \cdots & \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{2}\partial {x}^{n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{n}\partial {x}^{1}}& \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{n}\partial {x}^{2}}& \cdots & \frac{{\partial}^{2}f({x}^{1},\dots ,{x}^{n})}{\partial {x}^{n}\partial {x}^{n}}\end{array}\right)}^{-1}.$$

In this article, the inverse weight matrix used to assess the accuracy of the parameter values obtained by the second-order Newton's method is compiled according to Formula (19).
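Given a Hessian of the least-squares objective at the minimum, the accuracy assessment by Formula (19) reduces to a matrix inversion, with the RMS errors of the parameters following as $m_i=\mu\sqrt{Q_{ii}}$. In this sketch the Hessian values and the RMS error of unit weight are invented for illustration:

```python
import numpy as np

# Hypothetical Hessian of the objective function at the minimum
# (symmetric and positive definite; the numbers are illustrative only).
H = np.array([[8.0, 1.5],
              [1.5, 6.0]])

Q = 2.0 * np.linalg.inv(H)        # inverse weight matrix, Formula (19)

mu = 0.5                          # assumed RMS error of unit weight, mm
rms = mu * np.sqrt(np.diag(Q))    # RMS errors of the two parameters, mm
```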

The modified Newton’s method was applied with preliminary values of the coordinates of the determined points for which the classical second-order Newton’s method diverges and therefore does not work. The data obtained during the iterative process are presented in Table 6, together with the accuracy characteristics of the obtained coordinate values, namely the root-mean-square errors of the coordinates of the determined points and the root-mean-square error of the unit weight.

The modified second-order Newton method should also be compared with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. In contrast to the classical second-order Newton method, quasi-Newton methods do not compute the Hessian matrix directly, that is, there is no need to find second-order partial derivatives; instead, the Hessian is approximated from the previous iterations. The BFGS method is one of the most effective quasi-Newton methods, and its advantage is the simplicity of its software implementation. Its main disadvantage is the larger number of iterations needed to find the minimum of the objective function. This drawback is mitigated by the performance of modern computers: for simple optimization problems, the increase in the number of approximations is not noticeable to the user. The data obtained during the iterative process are presented in Table 7.
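The idea of building the inverse-Hessian approximation from successive gradients can be sketched with a compact BFGS implementation (illustrative code with an Armijo backtracking line search; it is not the implementation used for Table 7):

```python
import numpy as np

def bfgs(f, grad_f, x0, tol=1e-8, max_iter=200):
    """Minimize f with the BFGS inverse-Hessian update: no second
    derivatives are computed, only gradients of successive iterates."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    B_inv = np.eye(n)                    # initial inverse-Hessian estimate
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -B_inv @ g                   # quasi-Newton direction
        t = 1.0                          # Armijo backtracking line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad_f(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                   # curvature condition for the update
            rho = 1.0 / sy
            I = np.eye(n)
            B_inv = (I - rho * np.outer(s, y)) @ B_inv @ (I - rho * np.outer(y, s)) \
                    + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

Because `B_inv` stays positive definite as long as the curvature condition holds, every direction `p` is a descent direction, which is what makes the method robust without ever forming the true Hessian.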

It should be noted that methods using higher-order derivatives in the search process have a wide range of applications in geodesy and surveying. To test the feasibility of implementing the second-order Newton’s method in surveying and geodetic production, a trilateration network was equalized. In the course of solving this problem, the main advantages of the method were confirmed, namely a high convergence rate (compared with methods using first-order derivatives) and the possibility of starting the iterative process from rough preliminary values of the parameters. Its main disadvantage was also confirmed: a computationally demanding process (formation of the Hessian matrix and control of its definiteness). To reduce the influence of this disadvantage, a software algorithm was created based on the second-order Newton’s method and on direct search methods, in particular the Powell method and the Davis–Sven–Kempy (DSK) method. This algorithm enhances the positive aspects of the second-order Newton’s method by reducing the dependence on the preliminary values of the determined parameters and by increasing the speed of the optimization process. The prospect of further research is to expand the range of problems solved by the modified second-order Newton’s method and to study its efficiency and productivity in new conditions.

Conceptualization, investigation, methodology, and software, D.B. Formal analysis, writing—review and editing, M.M. All authors have read and agreed to the published version of the manuscript.

The research was carried out at the expense of a subsidy for the implementation of the state task in the field of scientific activity for 2021 № FSRW-2020-0014.

Data sharing is not applicable to this article.

The authors declare no conflict of interest.

- Abu, D.I. Mathematical Processing and Analysis of the Accuracy of Ground Spatial Geodetic Networks by Methods of Nonlinear Programming and Linear Algebra. Ph.D. Thesis, Polotsk State University, Novopolotsk, Belarus, 1998; 142p. [Google Scholar]
- Nikonov, A.; Kosarev, N.; Solnyshkova, O.; Makarikhina, I. Geodetic base for the construction of ground-based facilities in a tropical climate. In Proceedings of the E3S Web of Conferences; EDP Sciences: Les Ulis, France, 2019; Volume 91, p. 7019. [Google Scholar] [CrossRef]
- Liu, B.; Wei, Y.; Zhi, S.; Zhao, W.; Lin, J. Optimization of location of robotic total station in 3D deformation monitoring of multiple points. In Information Technology in Geo-Engineering; Springer Series in Geomechanics and Geoengineering; Springer: Cham, Switzerland, 2018; pp. 730–737. [Google Scholar] [CrossRef]
- Liu, G.H. Recovering 3D shape and motion from image sequences using affine approximation. In Proceedings of the 2009 Second International Conference on Information and Computing Science, Manchester, UK, 21–22 May 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 2, pp. 349–352. [Google Scholar] [CrossRef]
- Suzuki, T. Position and attitude estimation by multiple GNSS receivers for 3D mapping. In Proceedings of the 29th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS + 2016), Portland, OR, USA, 12–16 September 2016; Volume 2, pp. 1455–1464. [Google Scholar]
- Vasilieva, N.V.; Boykov, A.V.; Erokhina, O.O.; Trifonov, A.Y. Automated digitization of radial charts. J. Min. Inst. **2021**, 247, 82–87. [Google Scholar] [CrossRef]
- Kopylova, N.S.; Starikov, I.P. Methods of displaying geospatial information using cartographic web technologies for the arctic region and the continental shelf. Geod. Kartogr. **2021**, 971, 15–22. [Google Scholar] [CrossRef]
- Kovyazin, V.F.; Lepikhina, O.Y.; Zimin, V.P. Prediction of Cadastral Value of Land in a Single-Industry Town by the Regression Model; Bulletin of the Tomsk Polytechnic University; Kovyazin, V.F., Ed.; Tomsk Polytechnic University, Geo Assets Engineering: Tomsk Oblast, Russia, 2017; Volume 328, pp. 6–13. [Google Scholar]
- Rybkina, A.M.; Demidova, P.M.; Kiselev, V.A. Analysis of the application of deterministic interpolation methods for land cadastral valuation of low-rise residential development of localities. Int. J. Appl. Eng. Res. **2017**, 12, 10834–10840. [Google Scholar]
- Rybkina, A.M.; Demidova, P.M.; Kiselev, V. A working-out of the geostatistical model of mass cadastral valuation of urban lands evidence from the city Vsevolozhsk. Int. J. Appl. Eng. Res. **2016**, 11, 11631–11638. [Google Scholar]
- Kuzin, A.A.; Kovshov, S.V. Accuracy evaluation of terrain digital models for landslide slopes based on aerial laser scanning results. Ecol. Environ. Conserv. **2017**, 23, 908–914. [Google Scholar]
- Pravdina, E.A.; Lepikhina, O.J. Laser scanner data capture time management. ARPN J. Eng. Appl. Sci. **2017**, 12, 1649–1661. [Google Scholar]
- Demenkov, P.A.; Goldobina, L.A.; Trushko, O.V. Geotechnical barrier options with changed geometric parameters. Int. J. GEOMATE **2020**, 19, 58–65. [Google Scholar] [CrossRef]
- Demenkov, P.A.; Goldobina, L.A.; Trushko, V.L. The implementation of building information modeling technologies in the training of bachelors and masters at Saint Petersburg mining University. ARPN J. Eng. Appl. Sci. **2020**, 15, 803–813. [Google Scholar]
- Goldobina, L.A.; Demenkov, P.A.; Trushko, O.V. Ensuring the safety of construction and installation works during the construction of buildings and structures. J. Min. Inst. **2019**, 239, 583–595. [Google Scholar] [CrossRef]
- Katuntsov, E.; Kosarev, O. Correlation processing of radar signals with multilevel quantization. Res. J. Appl. Sci. **2016**, 11, 624–627. [Google Scholar]
- Kochneva, A.A.; Kazantsev, A.I. Justification of quality estimation method of creation of digital elevation models according to the data of airborne laser scanning when designing the motor ways. J. Ind. Pollut. Control. **2017**, 33, 1000–1006. [Google Scholar]
- Karavaichenko, M.G.; Gazaleev, L.I. Numerical modeling of a double-walled spherical reservoir. J. Min. Inst. **2020**, 245, 561–568. [Google Scholar] [CrossRef]
- Gusev, V.N.; Maliukhina, E.M.; Volokhov, E.M.; Tyulenev, M.A.; Gubin, M.Y. Assessment of development of water conducting fractures zone in the massif over crown of arch of tunneling (construction) climate. Int. J. Civ. Eng. Technol. **2019**, 10, 635–643. [Google Scholar]
- Ivanik, S.A.; Ilyukhin, D.A. Hydrometallurgical technology for gold recovery from refractory gold-bearing raw materials and the solution to problems of subsequent dehydration processes. J. Ind. Pollut. Control. **2017**, 33, 891–897. [Google Scholar]
- Pan, G.; Zhou, Y.; Guo, W. Global optimization algorithm in 3D datum transformation of industrial measurement. Geomat. Inf. Sci. Wuhan Univ. **2013**, 39, 85–89. [Google Scholar] [CrossRef]
- Kozak, P.M.; Lapchuk, V.P.; Kozak, L.V.; Ivchenko, V.M. Optimization of video camera disposition for the maximum calculation precision of coordinates of natural and artificial atmospheric objects in stereo observations. Kinemat. Phys. Celest. Bodies **2018**, 34, 313–326. [Google Scholar] [CrossRef]
- Easa, S.M. Survey review space resection in photogrammetry using collinearity condition without linearization. Surv. Rev. **2010**, 42, 40–49. [Google Scholar] [CrossRef]
- Shevchenko, G.G. About adjustment of spatial geodetic networks by the search method. Geod. Cartogr. **2019**, 80, 10–20. [Google Scholar] [CrossRef]
- Shevchenko, G.G.; Bryn, M.Y. Adjustments of Correlated Values by Search Method. In IOP Conference Series Materials Science and Engineering; CATPID-2019; IOP Publishing: Kislovodsk, Russia, 2019; Volume 698, p. 44019. [Google Scholar] [CrossRef]
- Maksimov, Y.A.; Fillipovskaya, E.A. Algorithms for Solving Nonlinear Programming Problems; M. MEPhI: Moscow, Russia, 1982; 52p. [Google Scholar]
- Himmelblau, D.M. Applied Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1972; 532p. [Google Scholar]
- Yan, J.; Tiberius, C.; Bellusci, G.; Janssen, G. Feasibility of Gauss-Newton method for indoor positioning. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 6–8 May 2008; pp. 660–670. [Google Scholar]
- Vasil’ev, A.S.; Goncharov, A.A. Special strategy of treatment of difficulty-profile conical screw surfaces of single-screw compressors working bodies. J. Min. Inst. **2019**, 235, 60–64. [Google Scholar] [CrossRef]
- Dennis, D. Numerical Methods of Unconstrained Optimization and Solution of Nonlinear Equations; Applied Mathematics: Philadelphia, PA, USA, 1988; 440p. [Google Scholar]
- Kantorovich, L.V. On the Newton’s Method; Works of the V.A. Steklov Mathematic Institute: Moscow, Russia, 1949; Volume 28, pp. 104–144. [Google Scholar]
- Shnitko, S.G. Algorithms for Equalization and Estimating the Accuracy of Geodetic Networks by Nonlinear Methods; PSU Bulletin; State College: Harrisburg, PA, USA, 2012; Volume 8, pp. 133–135. [Google Scholar]
- Ortega, D. Iterative Methods for Solving Nonlinear Systems of Equations with Many Unknowns; Mir: Moscow, Russia, 1975; 558p. [Google Scholar]
- Bykasov, D.A.; Zubov, A.V. Application of Newton’s Method of the Second-Order in Solving Surveying and Geodetic Problems; MineSurveying Bulletin: Moscow, Russia, 2020; Volume 5, pp. 22–26. [Google Scholar]
- Stroner, M.; Michal, O. Enhanced maximal precision increment method for network measurement optimization. In Advances and Trends in Geodesy, Cartography and Geoinformatics; CRC Press: Boca Raton, FL, USA, 2018; pp. 101–106. [Google Scholar] [CrossRef]
- Sviridenko, A.B. A priori correction in Newton’s optimization methods. Comput. Res. Model. **2015**, 7, 835–863. [Google Scholar] [CrossRef]
- Gubaydullina, R.; Kornilov, Y.N. The application of similarity theory elements in geodesy. In Topical Issues of Rational Use of Natural Resources; CRC Press: Boca Raton, FL, USA, 2019; Volume 1, pp. 183–188. [Google Scholar] [CrossRef]
- Nemirovsky, A.S.; Yudin, D.B. Information Complexity and Efficiency of Optimization Methods; John Wiley and Sons: Hoboken, NJ, USA, 1976; 105p. [Google Scholar]
- Zubov, A.V.; Pavlov, N.S. Assessment of the Stability of Support and Deformation Surveying and Geodetic Networks; Surveyor Bulletin: Moscow, Russia, 2013; Volume 2, pp. 21–23. [Google Scholar]
- Zubov, A.V.; Pavlov, N.S. The use of the gradient method in solving geodetic problems. In Proceedings of the Interuniversity Scientific-Practical Conference, A.F. Mozhaysky Military Space Academy, Saint Petersburg, Russia, 20 September 2013; pp. 90–93. [Google Scholar]
- Zheltko, C.N. Search Method of Equalization and Estimation of the Accuracy of Unknowns in the Least Squares Method: Monograph; FSBEI HE “KubSTU”: Krasnodar, Russia, 2016; 103p. [Google Scholar]
- Mazurov, B.T. Mathematical Modeling in the Study of Geodynamics; Sibprint Agency: Novosibirsk, Russia, 2019; 360p. [Google Scholar]
- Shu, C.; Li, F.; Wang, S. Improving algorithm to compute geodetic coordinates. In Proceedings of the 2008 International Workshop on Education Technology and Training & 2008 International Workshop on Geoscience and Remote Sensing, Shanghai, China, 21–22 December 2008; IEEE: Piscataway, NJ, USA, 2008; Volume 2, pp. 340–343. [Google Scholar]
- Baran, P.I. Investigation of the accuracy of solving geodetic problems by methods of mathematical programming. Eng. Geod. **1987**, 30, 5–8. [Google Scholar]
- Budo, A.Y. Comparative Analysis of the Equalization Results Obtained Using Two Polar Methods when Processing Planned Geodetic Networks; PSU Bulletin; State College: Harrisburg, PA, USA, 2010; Volume 12, pp. 115–122. [Google Scholar]
- Sholomitskii, A.; Lagutina, E. Calculation of the accuracy of special geodetic and mine surveying networks. ISTC Earth Sci. **2019**, 272, 10. [Google Scholar] [CrossRef]
- Men’Shikov, S.N.; Dzhaljabov, A.A.; Vasiliev, G.G.; Leonovich, I.A.; Ermilov, O.M. Spatial models developed using laser scanning at gas condensate fields in the northern construction-climatic zone. J. Min. Inst. **2019**, 238, 430–437. [Google Scholar] [CrossRef]
- Kougia, V.A.; Kanashin, N.V. Determination of the connection elements between three-dimensional coordinate systems by the gradient method. In News of Higher Educational Institutions; Geodesy and Aerial Photography: Moscow, Russia, 2008; Volume 2, pp. 22–28. [Google Scholar]
- Makarov, G.V.; Khudyakov, G.I. Use of affine coordinate conversion at the local geodetic surveys with applying of GPS-receivers. J. Min. Inst. **2013**, 204, 15–18. [Google Scholar]
- Mitskevich, V.I.; Abu, D.I. Estimation of the accuracy of spatial serifs by nonlinear programming methods. Geod. Cartogr. **1994**, 1, 22–24. [Google Scholar]
- Mitskevich, V.I. Mathematical Processing of Geodetic Networks by Nonlinear Programming Methods; PSU: Novopolotsk, Russia, 1997; 64p. [Google Scholar]
- Mitskevich, V.I.; Yaltyhov, V.V. Equalization and assessment of the accuracy of geodetic serifs under various criteria of optimality. Geod. Cartogr. **1994**, 7, 14–16. [Google Scholar]
- Shevchenko, G.G. Development of technology for geodetic monitoring of buildings and structures by the method of free stationing using the search method of nonlinear programming. Ph.D. Thesis, Emperor Alexander I St. Petersburg State Transport University, Saint Petersburg, Russia, 2020; 212p. [Google Scholar]
- Eliseeva, N.N. Application of search methods in solving nonlinear optimization problems. In Collection of Materials of the XIV International Scientific-Practical Conference Dedicated to the 25th Anniversary of the Constitution of the Republic of Belarus “Modernization of the Economic Mechanism through the Prism of Economic, Legal, Social and Engineering Approaches”; BSTU: Minsk, Belarus, 2019; pp. 364–369. [Google Scholar]
- Zelenkov, G.A.; Khakimova, A.B. Approach to the development of algorithms for Newton’s optimization methods, software implementation and comparison of efficiency. Comput. Res. Model. **2013**, 5, 367–377. [Google Scholar] [CrossRef]
- Mikheev, S.E. Convergence of Newton’s method on various classes of functions. Comput. Technol. **2005**, 10, 72–86. [Google Scholar]
- Chen, C.; Bian, S.; Li, S. An Optimized Method to Transform the Cartesian to Geodetic Coordinates on a Triaxial Ellipsoid; Chen, C., Ed.; Studia Geophysica et Geodaetica: Prague, Czech Republic, 2019; Volume 63, pp. 367–389. [Google Scholar]
- Kazantsev, A.I.; Kochneva, A.A. Ground of the geodesic control method of deformations of the land surface when protecting the buildings and structures under the conditions of urban infill. Ecol. Environ. Conserv. **2017**, 23, 876–882. [Google Scholar]
- Kuzin, A.A.; Valkov, V.A.; Kazantsev, A.I. Calibration of digital non-metric cameras for measuring works. J. Phys. Conf. Ser. **2018**, 1118, 012022. [Google Scholar] [CrossRef]
- Mitskevich, V.I.; Yaltyhov, V.V. Peculiarities of equalization of geodetic networks by the method of least modules. Geod. Cartogr. **1997**, 5, 23–24. [Google Scholar]

| Item | $\mathit{X}$, m | $\mathit{Y}$, m |
|---|---|---|
| A | 645.112 | 426.229 |
| B | 1028.568 | 857.277 |
| C | 740.339 | 1333.496 |

| No. | Line Name | Length, m |
|---|---|---|
| 1 | C–2 | 492.886 |
| 2 | B–2 | 448.178 |
| 3 | A–2 | 445.726 |
| 4 | A–3 | 512.201 |
| 5 | 3–2 | 504.961 |
| 6 | 2–4 | 733.414 |
| 7 | 2–1 | 523.911 |
| 8 | 1–C | 534.601 |
| 9 | 3–1 | 654.977 |
| 10 | 3–4 | 482.249 |
| 11 | 4–1 | 456.648 |
| 12 | 5–A | 617.706 |
| 13 | 5–3 | 322.978 |
| 14 | 5–4 | 700.240 |

| Item | Preliminary $\mathit{X}$, m | Preliminary $\mathit{Y}$, m | Second-Order Newton $\mathit{X}$, m | Second-Order Newton $\mathit{Y}$, m | Conjugate Gradient $\mathit{X}$, m | Conjugate Gradient $\mathit{Y}$, m |
|---|---|---|---|---|---|---|
| 1 | 210.000 | 1235.000 | 213.736 | 1241.368 | 213.763 | 1241.430 |
| 2 | 575.000 | 860.000 | 580.501 | 867.247 | 580.515 | 867.261 |
| 3 | 150.000 | 580.000 | 159.346 | 588.653 | 159.363 | 588.730 |
| 4 | −135.000 | 950.000 | −146.870 | 961.206 | −146.851 | 961.283 |
| 5 | 40.000 | 285.000 | 43.240 | 287.266 | 43.042 | 287.478 |
| NoI ^{1} | | | 3 | | 389 | |
| OF ^{2} | 859.468 | | $1.318\times {10}^{-7}$ | | $2.769\times {10}^{-2}$ | |
| CT ^{3} | | | 25.5 s | | 48.8 s | |

| Item | Preliminary $\mathit{X}$, m | Preliminary $\mathit{Y}$, m | Second-Order Newton $\mathit{X}$, m | Second-Order Newton $\mathit{Y}$, m | Conjugate Gradient $\mathit{X}$, m | Conjugate Gradient $\mathit{Y}$, m |
|---|---|---|---|---|---|---|
| 1 | 10.000 | 10.000 | 213.737 | 1241.370 | 213.517 | 1242.222 |
| 2 | 10.000 | 10.000 | 580.501 | 867.248 | 580.685 | 867.828 |
| 3 | 10.000 | 10.000 | 159.347 | 588.656 | 159.880 | 589.763 |
| 4 | 10.000 | 10.000 | −146.869 | 961.209 | −146.365 | 962.073 |
| 5 | 10.000 | 10.000 | 43.237 | 287.272 | 43.290 | 289.190 |
| NoI ^{1} | | | 11 | | 586 | |
| OF ^{2} | 859.468 | | $2.245\times {10}^{-5}$ | | 2.413 | |
| CT ^{3} | | | 42.5 s | | 59.8 s | |

| Item | Preliminary $\mathit{X}$, m | Preliminary $\mathit{Y}$, m | Second-Order Newton $\mathit{X}$, m | Second-Order Newton $\mathit{Y}$, m | Conjugate Gradient $\mathit{X}$, m | Conjugate Gradient $\mathit{Y}$, m |
|---|---|---|---|---|---|---|
| 1 | 210.000 | 1235.000 | 213.736 | 1241.367 | 213.762 | 1241.379 |
| 2 | 575.000 | 860.000 | 580.500 | 867.246 | 580.515 | 867.246 |
| 3 | 150.000 | 580.000 | 159.346 | 588.652 | 159.352 | 589.665 |
| 4 | −135.000 | 950.000 | −146.869 | 961.205 | −146.406 | 961.591 |
| 5 | 40.000 | 285.000 | 43.231 | 287.275 | 43.300 | 289.085 |
| NoI ^{1} | | | 103 | | 1233 | |
| OF ^{2} | 859.468 | | $1.047\times {10}^{-3}$ | | 1.025 | |
| CT ^{3} | | | 49.1 s | | 100.1 s | |

| Item | Preliminary $\mathit{X}$, m | Preliminary $\mathit{Y}$, m | Calculated $\mathit{X}$, m | Calculated $\mathit{Y}$, m |
|---|---|---|---|---|
| 1 | 0.000 | 0.000 | 213.736 | 1241.368 |
| 2 | 0.000 | 0.000 | 580.501 | 867.247 |
| 3 | 0.000 | 0.000 | 159.346 | 589.653 |
| 4 | 0.000 | 0.000 | −146.870 | 961.206 |
| 5 | 0.000 | 0.000 | 43.240 | 287.266 |
| NoI ^{1} | | | 28 | |
| OF ^{2} | | | $5.730\times {10}^{-4}$ | |
| CT ^{3} | | | 98.7 s | |
| RS ^{4} | ${m}_{{x}_{1}}/{m}_{{y}_{1}}$, mm | | $1.047\times {10}^{-3}$ / $5.180\times {10}^{-4}$ | |
| | ${m}_{{x}_{2}}/{m}_{{y}_{2}}$, mm | | $4.080\times {10}^{-4}$ / $3.980\times {10}^{-4}$ | |
| | ${m}_{{x}_{3}}/{m}_{{y}_{3}}$, mm | | $5.090\times {10}^{-4}$ / $5.300\times {10}^{-4}$ | |
| | ${m}_{{x}_{4}}/{m}_{{y}_{4}}$, mm | | $5.400\times {10}^{-4}$ / $6.870\times {10}^{-4}$ | |
| | ${m}_{{x}_{5}}/{m}_{{y}_{5}}$, mm | | $6.56\times {10}^{-4}$ / $7.84\times {10}^{-4}$ | |

| Item | Preliminary $\mathit{X}$, m | Preliminary $\mathit{Y}$, m | Second-Order Newton $\mathit{X}$, m | Second-Order Newton $\mathit{Y}$, m | BFGS $\mathit{X}$, m | BFGS $\mathit{Y}$, m |
|---|---|---|---|---|---|---|
| 1 | 0.000 | 0.000 | 213.736 | 1241.368 | 288.806 | 1070.175 |
| 2 | 0.000 | 0.000 | 580.501 | 867.247 | 652.085 | 777.799 |
| 3 | 0.000 | 0.000 | 159.346 | 589.653 | 201.616 | 380.299 |
| 4 | 0.000 | 0.000 | −146.870 | 961.206 | −63.580 | 810.951 |
| 5 | 0.000 | 0.000 | 43.240 | 287.266 | 71.944 | 39.043 |
| NoI ^{1} | | | 28 | | 144 | |
| OF ^{2} | 859.468 | | $5.730\times {10}^{-4}$ | | $44.863\times {10}^{3}$ | |
| CT ^{3} | | | 98.7 s | | 122.1 s | |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).