Article

An Accelerated Diagonally Structured CG Algorithm for Nonlinear Least Squares and Inverse Kinematics

by Rabiu Bashir Yunus 1,2,*, Anis Ben Ghorbal 3, Nooraini Zainuddin 1 and Sulaiman Mohammed Ibrahim 4,5
1 Department of Applied Science, Faculty of Science, Management & Computing, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
2 Department of Mathematics, Faculty of Computing and Mathematical Sciences, Aliko Dangote University of Science and Technology, Wudil 713101, Kano, Nigeria
3 Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
4 School of Quantitative Sciences, Universiti Utara Malaysia (UUM), Sintok 06010, Kedah, Malaysia
5 Faculty of Education and Arts, Sohar University, Sohar 311, Oman
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2766; https://doi.org/10.3390/math13172766
Submission received: 9 July 2025 / Revised: 8 August 2025 / Accepted: 22 August 2025 / Published: 28 August 2025
(This article belongs to the Special Issue Optimization Algorithms, Distributed Computing and Intelligence)

Abstract

Nonlinear least squares (NLS) models are extensively used as optimization frameworks in various scientific and engineering disciplines. This work proposes a novel structured conjugate gradient (SCG) method that incorporates a structured diagonal approximation for the second-order term of the Hessian, particularly designed for solving NLS problems. In addition, an acceleration scheme for the SCG method is proposed and analyzed. The global convergence properties of the proposed method are rigorously established under specific assumptions. Numerical experiments were conducted on large-scale NLS benchmark problems to evaluate the performance of the method. The outcome of these experiments indicates that the proposed method outperforms other approaches using the established performance metrics. Moreover, the developed approach is utilized to address the inverse kinematics challenge in controlling the motion of a robotic system with four degrees of freedom (4DOF).
MSC:
90C06; 90C52; 90C53; 90C56; 65K05

1. Introduction

Nonlinear least squares (NLS) problems can be theoretically described as follows:
$$\min_{x \in \mathbb{R}^{n}} f(x),$$
where the definition of the function f ( x ) is given by
$$f(x) = \frac{1}{2}\|u(x)\|^{2} = \frac{1}{2}\sum_{i=1}^{m}\big(u_i(x)\big)^{2}, \qquad x \in \mathbb{R}^{n}.$$
NLS methods are broadly classified into two categories: first-order and second-order methods. First-order methods such as the Steepest Descent (SD), Gauss–Newton (GN), and Levenberg–Marquardt (LM) depend solely on the first derivative or gradient of the objective function for their computations. In contrast, second-order methods incorporate the Hessian matrix of the objective function or employ models and approximations of the Hessian. Second-order methods include techniques such as Newton’s method, the Quasi-Newton (QN) method, and Trust-region (TR) methods [1]. A diagram illustrating this classification is presented in Figure 1.
NLS has recently gained recognition for its outstanding performance in various applications in computational science and engineering, including robotic motion control [2,3], image restoration [4,5,6], parameter identification [7], data fitting [8,9], and meteorological applications [10]. Moreover, it is a special kind of unconstrained optimization problem. Therefore, a general unconstrained minimization technique can be employed to generate a sequence of iterates $\{x_k\}$ via the following:
$$x_{k+1} = x_k + \alpha_k d_k,$$
where $d_k$ is the search direction and the step sizes $\alpha_k > 0$, $k \ge 0$, are obtained using a line search scheme. Among inexact line searches, the Wolfe line search finds a value of $\alpha_k$ that reduces the cost function along $d_k$ by satisfying the following conditions [11]:
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta\alpha_k g_k^T d_k,$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k.$$
In contrast, the strong Wolfe line search finds the value of α k that satisfies a further requirement in addition to the condition in Equation (4).
$$\big|g(x_k + \alpha_k d_k)^T d_k\big| \le \sigma\,|g_k^T d_k|,$$
where $\delta, \sigma \in (0, 1)$ and $d_k$ is the search direction generated using the CG method by the following formula:
$$d_{k+1} = \begin{cases} -g_{k+1}, & \text{if } k = 0, \\ -g_{k+1} + \beta_k d_k, & \text{if } k \ge 1, \end{cases}$$
where $\beta_k$ is a scalar, commonly termed the CG parameter, and $g(x) = \nabla f(x)$ denotes the gradient of the function $f$. The selection of $\beta_k$ significantly affects the computational efficacy of CG methods, prompting extensive research into effective parameter choices; see, for example, [12,13,14,15,16,17,18]. Furthermore, the direction $d_k$ described in Equation (7) is generally required to satisfy the sufficient descent condition, which stipulates that there must be a constant $c > 0$ such that
$$g_k^T d_k \le -c\|g_k\|^{2}.$$
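To make these line search conditions concrete, the following minimal Python sketch (our own illustration, not the authors' implementation; the helper name and default parameters are assumptions) checks whether a trial step size satisfies the sufficient decrease condition (4) and the strong curvature condition (6):

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.9):
    """Check the sufficient decrease condition (4) and the strong curvature
    condition (6) for a trial step size alpha along the direction d."""
    slope = grad(x) @ d                      # g_k^T d_k, negative for a descent direction
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f(x) + delta * alpha * slope
    strong_curvature = abs(grad(x_new) @ d) <= sigma * abs(slope)
    return sufficient_decrease and strong_curvature

# Quick check on f(x) = 0.5*||x||^2 with the steepest-descent direction
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x0 = np.array([1.0, -2.0])
print(satisfies_strong_wolfe(f, grad, x0, -grad(x0), alpha=0.5))   # True
```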
The Hestenes–Stiefel (HS) method [19], the Polak–Ribiere–Polyak (PRP) method [20,21], and the Liu–Storey (LS) method [22] share similar structures in their numerators, which yield some common computational benefits [23]. In addition, their conjugate parameters are defined as follows:
$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}; \qquad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^{2}}; \qquad \beta_k^{LS} = \frac{g_{k+1}^T y_k}{-g_k^T d_k},$$
where $g_{k+1} = \nabla f(x_{k+1})$ and $y_k = g_{k+1} - g_k$.
Due to the distinctive structure of NLS, the objective function, its gradient vector, and the Hessian matrix can be expressed as follows:
$$\nabla f(x) := \sum_{i=1}^{m}\nabla u_i(x)\, u_i(x) = V(x)^T u(x),$$
$$\nabla^{2} f(x) := \underbrace{\sum_{i=1}^{m}\nabla u_i(x)\nabla u_i(x)^T}_{\text{1st term}} + \underbrace{\sum_{i=1}^{m} u_i(x)\nabla^{2} u_i(x)}_{\text{2nd term}} = V(x)^T V(x) + S(x),$$
where $V(x)$ denotes the Jacobian matrix of $u(x)$ and $S(x) = \sum_{i=1}^{m} u_i(x)\,\nabla^{2} u_i(x)$, representing the second term in Equation (9). The GN and LM methods ignore the second-order components of the Hessian matrix associated with the objective function (1). As a result, they are generally effective in solving small residual problems, but they can struggle with larger residual problems [24,25]. To address this limitation, Brown and Dennis [26] introduced the Structured Quasi-Newton (SQN) method, which takes advantage of the structure of the Hessian of the objective function in Equation (1) by combining elements of both the GN and Quasi-Newton methods. Numerically, the SQN algorithm has shown superior performance compared to GN and LM [27]. However, a significant challenge remains: the search direction generated by SQN is not guaranteed to be a descent direction for $f$, which complicates global convergence and limits its applicability to large-scale problems. The Quasi-Newton approach to non-negative image restoration is effectively presented in the work of Hanke et al. [28], while Hochbruck and Honig [29] provide insights into the convergence of a regularizing LM scheme for nonlinear ill-posed problems. Additionally, the recent work by Pes and Rodriguez [30] introduces a doubly relaxed minimal-norm GN method for underdetermined NLS problems.
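As a concrete illustration of these structured quantities (our own sketch, not the authors' code), the following Python snippet assembles $f(x)$, the gradient $V(x)^T u(x)$, and the Gauss–Newton part $V(x)^T V(x)$ of the Hessian for a user-supplied residual $u$ and Jacobian $V$; the linear residual used in the example is purely illustrative:

```python
import numpy as np

def nls_parts(u, V, x):
    """Return f(x) = 0.5*||u(x)||^2, the gradient V(x)^T u(x), and the
    Gauss-Newton term V(x)^T V(x); the second-order term S(x) is omitted,
    as in Gauss-Newton / Levenberg-Marquardt type methods."""
    r = u(x)                     # residual vector, shape (m,)
    J = V(x)                     # Jacobian of u at x, shape (m, n)
    return 0.5 * r @ r, J.T @ r, J.T @ J

# Illustrative linear residual u(x) = A x - b (small data-fitting example)
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
f_val, grad, gn_hess = nls_parts(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(f_val, grad, gn_hess)
```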
Recent research efforts in NLS have focused on approximating the second-order term of the Hessian while preserving the efficiency and robustness of Newton-type methods. Despite their effectiveness, these approaches often require substantial matrix storage, particularly for large-scale problems [2,31]. To overcome these limitations, structured matrix-free techniques have emerged as a more efficient alternative for solving Equation (1). For instance, ref. [27] suggested a matrix-free, structured approach to large-scale NLS problems through CG search directions. Furthermore, ref. [32] introduced an approximation strategy that takes advantage of the diagonal components of the first and second terms of the Hessian matrix, incorporating a safeguarding mechanism to ensure adequate descent. More recently, ref. [33] developed a scaled variant of the SCG method based on a structured gradient relation, proving its global convergence under standard assumptions.
In another development, ref. [34] developed a structured two-term spectral CG method derived from the HS formulation, showcasing strong numerical performance. Yet, their approach requires additional safeguarding strategies, particularly when the parameter is nonpositive during certain iterations. Furthermore, ref. [35] introduced a structured adaptive technique for NLS problems based on spectral data and Barzilai–Borwein (BB) parameters. While CG methods that incorporate secant conditions may not guarantee a descent direction from a theoretical standpoint, they have demonstrated strong practical performance in tackling large-scale unconstrained optimization problems [36]. Hence, designing a CG method that efficiently integrates second-order information from the objective function is crucial to ensuring a descent search direction and enhancing overall performance.
In this paper, we build on the concept of the accelerated gradient method and the diagonal estimate of the objective function’s Hessian matrix. We introduce a new formula for β k by leveraging the structure of the Hessian in NLS problems. This research makes the following main contributions:
  • This work proposes a novel SCG method that incorporates a structured diagonal approximation of the second-order term of the Hessian, combined with an acceleration scheme.
  • The resulting search directions are proven to satisfy the sufficient descent condition.
  • The proposed method is shown to have global convergence properties, relying on a strong Wolfe line search strategy and mild assumptions.
  • Numerical experiments are conducted on a broad scale to evaluate the performance of the proposed method against existing methods.
  • To illustrate its practical use, the SCG algorithm is implemented to solve the inverse kinematics of a robotic problem with 4DOF.
The structure of this paper is outlined below: Section 2 presents the newly developed method along with its algorithmic framework. Section 3 explores the theoretical global convergence behavior of the proposed algorithm under specific assumptions. In Section 4, numerical simulations are conducted to evaluate the performance of the methods in comparison with existing techniques. Lastly, Section 5 demonstrates the application of the proposed strategy to an inverse kinematic robotic motion control problem involving a system with 4DOF.

2. Formulation of the Diagonally Structured Conjugate Gradient with Acceleration Scheme

This section introduces the accelerated diagonally SCG method, which is designed to enhance the computational efficiency of standard CG algorithms. The technique incorporates a diagonal structure to approximate second-order information. Unlike the work presented in [27,31], the novelty of our method lies in the introduction of an adaptive acceleration parameter, denoted as $\eta_k$, which is derived via a higher-order Taylor series expansion. This parameter is dynamically computed at each iteration and serves to enhance the convergence rate by adaptively modifying the conjugate direction updates.

2.1. The Diagonally Structured CG Coefficient

Consider approximating the second-order term of the Hessian with a diagonal matrix
$$W_k = Z_{k-1} S_{k-1}^{-1},$$
where $Z_{k-1} = \mathrm{diag}(z_{k-1}^{1}, z_{k-1}^{2}, \ldots, z_{k-1}^{n})$ and $S_{k-1} = \mathrm{diag}(s_{k-1}^{1}, s_{k-1}^{2}, \ldots, s_{k-1}^{n})$; this is equivalent to a specific form of the secant equation:
$$W_k S_{k-1} = Z_{k-1}.$$
Let $z_{k-1} = (z_{k-1}^{1}, z_{k-1}^{2}, \ldots, z_{k-1}^{n})$ and $s_{k-1} = (s_{k-1}^{1}, s_{k-1}^{2}, \ldots, s_{k-1}^{n})$ be two vectors, where the structured vector is defined by
$$z_{k-1} = V_k^T V_k s_{k-1} + (V_k - V_{k-1})^T u_k.$$
Thus $W_k = \mathrm{diag}(w_k^{1}, \ldots, w_k^{n})$ with $w_k^{i} \approx z_{k-1}^{i}/s_{k-1}^{i}$. However, if $s_{k-1}^{i} = 0$, then $w_k^{i}$ is not well-defined, so we adopt the following safeguarded definition for the diagonal entries of $W_k$:
$$w_k^{i} = \begin{cases} \dfrac{z_{k-1}^{i}}{s_{k-1}^{i}}, & \text{if } s_{k-1}^{i} \neq 0 \text{ and } \epsilon_l \le \dfrac{z_{k-1}^{i}}{s_{k-1}^{i}} \le \epsilon_u, \\[2mm] 1, & \text{otherwise}. \end{cases}$$
In our implementation, we follow the strategy adopted in [31] with slight modification, which suggests the practical use of $\epsilon_l = 10^{-5}$ and $\epsilon_u = 10^{-1}$. These values are chosen to allow for a broad but safe range for the ratio $z_{k-1}^{i}/s_{k-1}^{i}$, accommodating reasonable curvature information while avoiding extreme values that could distort the search direction.
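A vectorized Python sketch of the safeguarded diagonal update in Equation (12) is shown below; the helper name and the use of NumPy are our own choices, not the authors':

```python
import numpy as np

def safeguarded_diagonal(z_prev, s_prev, eps_l=1e-5, eps_u=1e-1):
    """Diagonal entries w_k^i of W_k: use z^i/s^i when s^i != 0 and the ratio
    lies in [eps_l, eps_u]; otherwise fall back to 1 (Equation (12))."""
    w = np.ones_like(z_prev, dtype=float)
    nonzero = s_prev != 0
    ratio = np.divide(z_prev, s_prev, out=np.zeros_like(w), where=nonzero)
    accept = nonzero & (ratio >= eps_l) & (ratio <= eps_u)
    w[accept] = ratio[accept]
    return w

# Example: the out-of-range and zero-denominator components fall back to 1
print(safeguarded_diagonal(np.array([2e-3, 5.0, 1.0]), np.array([1.0, 1.0, 0.0])))
# -> [0.002 1.    1.   ]
```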
The following defines the proposed diagonally SCG method search direction:
$$d_k = \begin{cases} -W_k g_k, & \text{if } k = 0, \\ -W_k g_k + \beta_k^{*} d_{k-1}, & \text{if } k \ge 1, \end{cases}$$
where $\beta_k^{*}$ is defined as follows:
$$\beta_k^{*} = \frac{g_k^T d_{k-1}}{\|d_{k-1}\|^{2}}.$$
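The corresponding search-direction update of Equations (13) and (14) can be sketched in a few lines; the sign convention follows the reconstruction above and the function name is illustrative:

```python
import numpy as np

def dscga_direction(g, W_diag, d_prev=None):
    """d_0 = -W_0 g_0 and, for k >= 1, d_k = -W_k g_k + beta_k^* d_{k-1},
    with beta_k^* = (g_k^T d_{k-1}) / ||d_{k-1}||^2 and W_k stored as a vector
    of diagonal entries."""
    if d_prev is None:                       # first iteration, k = 0
        return -W_diag * g
    beta = (g @ d_prev) / (d_prev @ d_prev)  # Equation (14)
    return -W_diag * g + beta * d_prev       # Equation (13)
```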

2.2. The Acceleration Scheme

Motivated by the work of Andrei [37], originally designed for unconstrained optimization problems, we present an acceleration scheme for the SCG formula. Let $x_{k+1} = x_k + \alpha_k d_k$, where $s_k = \alpha_k d_k$, and consider Equation (4) as follows:
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta\alpha_k g_k^T d_k.$$
The enhanced accelerated conjugate gradient algorithm can be introduced via the following iterative framework:
$$x_{k+1} = x_k + \eta_k\alpha_k d_k,$$
where $\eta_k > 0$ is a parameter that needs to be formulated to enhance the algorithm's performance. Now, by a higher-order Taylor expansion, we have the following:
$$f(x_k + \alpha_k d_k) \approx f(x_k) + \alpha_k g_k^T d_k + \frac{1}{2!}\alpha_k^{2} d_k^T\nabla^{2} f(x_k) d_k + o(\alpha_k\|d_k\|^{2}).$$
Similarly, for $\eta > 0$, the higher-order Taylor expansion gives
$$f(x_k + \eta\alpha_k d_k) \approx f(x_k) + \eta\alpha_k g_k^T d_k + \frac{1}{2!}\eta^{2}\alpha_k^{2} d_k^T\nabla^{2} f(x_k) d_k + o(\eta\alpha_k\|d_k\|^{2}).$$
We can combine Equations (16) and (17) as follows:
$$f(x_k + \eta\alpha_k d_k) = f(x_k + \alpha_k d_k) + \lambda_k(\eta),$$
where
$$\lambda_k(\eta) = \frac{1}{2}(\eta^{2} - 1)\alpha_k^{2} d_k^T\nabla^{2} f(x_k) d_k + (\eta - 1)\alpha_k g_k^T d_k + \eta^{2}\alpha_k\, o(\alpha_k\|d_k\|^{2}) - \alpha_k\, o(\alpha_k\|d_k\|^{2}).$$
Now, suppose that
$$a_k = \alpha_k g_k^T d_k, \qquad b_k = \alpha_k^{2} d_k^T\nabla^{2} f(x_k) d_k, \qquad \varepsilon_k = o(\alpha_k\|d_k\|^{2}).$$
Therefore, Equation (19) becomes
$$\lambda_k(\eta) = \frac{1}{2}(\eta^{2} - 1) b_k + (\eta - 1) a_k + \eta^{2}\alpha_k\varepsilon_k - \alpha_k\varepsilon_k,$$
and differentiating with respect to $\eta$ gives
$$\lambda_k'(\eta) = \eta b_k + a_k + 2\eta\alpha_k\varepsilon_k,$$
and
$$\lambda_k'(\eta_k) = 0$$
implies that
$$\eta_k = -\frac{a_k}{b_k + 2\alpha_k\varepsilon_k}.$$
It can be seen that Equation (22) is a minimizer of $\lambda_k$, provided that $b_k + 2\alpha_k\varepsilon_k > 0$. As discussed in [37], the contribution of $\varepsilon_k$ can be ignored in Equation (22). Also, $b_k$ can be approximated by $b_k = -\alpha_k y_k^T d_k$, in which $y_k = g_k - g_w$ with $w = x_k + \alpha_k d_k$. Hence, $\eta_k$ is computed as follows:
$$\eta_k = \begin{cases} -\dfrac{a_k}{b_k}, & \text{if } b_k > 0, \\[2mm] 1, & \text{otherwise}. \end{cases}$$
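In code, the acceleration factor of Equation (23) reduces to a few lines. The sketch below follows the sign reconstruction $\eta_k = -a_k/b_k$ with $b_k \approx -\alpha_k y_k^T d_k$, and the function name is our own:

```python
def acceleration_factor(alpha, g_k, g_w, d_k):
    """eta_k from Equation (23): a_k = alpha * g_k^T d_k and
    b_k ~ -alpha * (g_k - g_w)^T d_k, where g_w is the gradient at
    the trial point w = x_k + alpha * d_k (NumPy vectors assumed)."""
    a_k = alpha * (g_k @ d_k)
    b_k = -alpha * ((g_k - g_w) @ d_k)
    return -a_k / b_k if b_k > 0 else 1.0
```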

3. Global Convergence Analysis

In this section, we utilize Equation (14) under certain assumptions to establish the global convergence properties of the proposed algorithm. Throughout this section, we assume that $g_k \neq 0$ for all $k$. Concerning the objective function, we adopt the following assumptions:
Assumption 1.
The level set $\ell = \{x \in \mathbb{R}^{n} \,|\, f(x) \le f(x_0)\}$ lies within a bounded domain. Consequently, there exists a positive constant $\bar{b}$ such that
$$\|x\| \le \bar{b} \quad \text{for every } x \in \ell.$$
Assumption 2.
The Jacobian matrix V ( x ) exhibits Lipschitz continuity, and u ( x ) is continuously differentiable within a given neighborhood N of ℓ. Specifically, a positive constant a 1 exists such that
$$\|V(x) - V(y)\| \le a_1\|x - y\|, \qquad \forall x, y \in N.$$
To prove the following lemmas and theorems, we now give estimates for $u(x)$, $V(x)$, and $\nabla f(x)$ based on Assumptions 1 and 2. By applying Equations (24) and (25), it follows that
$$\|V(x)\| \le \|V(x) - V(x_0)\| + \|V(x_0)\| \le a_1\|x - x_0\| + \|V(x_0)\| \le 2 a_1\bar{b} + \|V(x_0)\|.$$
There exists $a_2 > 0$ such that
$$\|V(x)\| \le a_2 \quad \text{for all } x \in \ell.$$
Furthermore, using the mean value theorem and Equation (25), we have, for all $x, y \in \ell$,
$$\|u(x) - u(y) - V(y)(x - y)\| \le \int_{0}^{1}\|V(y + \xi(x - y)) - V(y)\|\,\|x - y\|\, d\xi \le a_1\|x - y\|^{2}\int_{0}^{1}\xi\, d\xi = \frac{a_1}{2}\|x - y\|^{2}.$$
We decompose the difference u ( x ) u ( y ) as follows:
$$u(x) - u(y) = \big(u(x) - u(y) - V(y)(x - y)\big) + V(y)(x - y),$$
which leads to the inequality
$$\|u(x) - u(y)\| \le \|u(x) - u(y) - V(y)(x - y)\| + \|V(y)(x - y)\|.$$
This implies that
$$\|u(x) - u(y)\| \le \frac{a_1}{2}\|x - y\|^{2} + \|V(y)(x - y)\| \le \left(\frac{a_1}{2}\|x - y\| + \|V(y)\|\right)\|x - y\| \le (a_1\bar{b} + a_2)\|x - y\|.$$
From the above inequality, it follows that there exists b 1 > 0 such that
$$\|u(x) - u(y)\| \le b_1\|x - y\| \quad \text{for all } x, y \in \ell.$$
Hence, there exists b 2 > 0 such that
$$\|u(x)\| \le b_2 \quad \text{for all } x \in \ell.$$
Moreover, for the gradient, we obtain
$$\|\nabla f(x) - \nabla f(y)\| = \|V(x)^T(u(x) - u(y)) + (V(x) - V(y))^T u(y)\| \le \|V(x)\|\,\|u(x) - u(y)\| + \|V(x) - V(y)\|\,\|u(y)\| \le b_1\|V(x)\|\,\|x - y\| + a_1\|x - y\|\,\|u(y)\| \le (b_1 a_2 + a_1 b_2)\|x - y\|.$$
Therefore, from Equation (29), it follows that there exists a constant $c_1 := b_1 a_2 + a_1 b_2$ such that
$$\|\nabla f(x) - \nabla f(y)\| \le c_1\|x - y\|, \qquad \forall x, y \in \ell,$$
which implies that there exists $c_2 > 0$ such that
$$\|\nabla f(x)\| \le c_2, \qquad \forall x \in \ell.$$
Assumption 3.
The gradient of Equation (2), denoted by g ( x ) = V ( x ) T u ( x ) , exhibits uniform continuity in an open convex domain that contains the level set ℓ.
Lemma 1.
Suppose that Assumptions 1 and 2 are satisfied. Let $\{x_k\}$ and $\{d_k\}$ be sequences produced by Algorithm 1. Then, a positive constant $L$ exists such that the following inequality holds for all $k \ge 0$: $\|z_{k-1}\| \le L\|s_{k-1}\|$.
Proof. 
From the definition of the structured vector defined in Equation (11), we obtain
$$\begin{aligned} \|z_{k-1}\| &= \|V_k^T V_k s_{k-1} + (V_k - V_{k-1})^T u_k\| \le \|V_k^T V_k s_{k-1}\| + \|(V_k - V_{k-1})^T u_k\| \\ &\le \|V_k\|^{2}\|s_{k-1}\| + \|V_k - V_{k-1}\|\,\|u_k\| \le a_2^{2}\|s_{k-1}\| + a_1\|x_k - x_{k-1}\|\,\|u_k\| \\ &\le a_2^{2}\|s_{k-1}\| + a_1 b_2\|s_{k-1}\| = (a_2^{2} + a_1 b_2)\|s_{k-1}\|. \end{aligned}$$
Therefore, setting $L := a_2^{2} + a_1 b_2$, we obtain the inequality. □
The lemma that follows shows that the sufficient descent requirement is satisfied by the direction d k , independent of the line search requirement.
Algorithm 1: Diagonally Structured Conjugate Gradient with Acceleration (DSCGA)
Step 1: Select the starting point $x_0$ from the domain of $f$. Set $d_0 = -g_0$ and $k = 0$. Choose scalars $\delta, \sigma \in (0, 1)$ and $\epsilon_g, \epsilon_f, \epsilon_l, \epsilon_u > 0$. Compute $u_0 = u(x_0)$, $V_0 = V(x_0)$, and $g_0 = V_0^T u_0$.
Step 2: If $\|g_k\| \le \epsilon$, stop; otherwise, proceed to Step 3.
Step 3: Determine α k using Equations (4) and (6).
Step 4: Calculate $w = x_k + \alpha_k d_k$, $g_w = \nabla f(w)$, and $y_k = g_k - g_w$.
Step 5: Compute $a_k = \alpha_k g_k^T d_k$ and $b_k = -\alpha_k y_k^T d_k$. If $b_k > 0$, evaluate $\eta_k = -a_k/b_k$, rescale the search direction as $\hat{d}_k = \eta_k\alpha_k d_k$, and update $x_{k+1} = x_k + \hat{d}_k$; otherwise, set $x_{k+1} = w$.
Step 6: Compute $s_{k-1} = x_k - x_{k-1}$ and $z_{k-1} = V_k^T V_k s_{k-1} + (V_k - V_{k-1})^T u_k$, with $V_k = V(x_k)$ and $u_k = u(x_k)$. Form $W_k = \mathrm{diag}(w_k^{1}, \ldots, w_k^{n})$, with
$$w_k^{i} = \begin{cases} \dfrac{z_{k-1}^{i}}{s_{k-1}^{i}}, & \text{if } s_{k-1}^{i} \neq 0 \text{ and } \epsilon_l \le \dfrac{z_{k-1}^{i}}{s_{k-1}^{i}} \le \epsilon_u, \\[2mm] 1, & \text{otherwise}. \end{cases}$$
Step 7: Compute $\beta_k^{*}$ using Equation (14).
Step 8: Evaluate $d_k = -W_k g_k + \beta_k^{*} d_{k-1}$.
Step 9: Update $k := k + 1$ and return to Step 2.
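For readers who prefer code to pseudocode, here is a compact, self-contained Python sketch of Algorithm 1. It mirrors the per-equation snippets given earlier; `wolfe_step` is a hypothetical routine returning a step size that satisfies conditions (4) and (6), and the reconstructed signs (e.g., in the acceleration step) are assumptions rather than a transcription of the authors' MATLAB implementation:

```python
import numpy as np

def dscga(u, V, x0, wolfe_step, eps=1e-5, eps_l=1e-5, eps_u=1e-1, max_iter=1000):
    """Illustrative driver for Algorithm 1 (DSCGA) on f(x) = 0.5*||u(x)||^2."""
    x = np.asarray(x0, dtype=float)
    g = V(x).T @ u(x)
    d = -g                                         # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:               # Step 2: stopping test
            break
        alpha = wolfe_step(u, V, x, d)             # Step 3: strong Wolfe step size
        w = x + alpha * d                          # Step 4: trial point and its gradient
        g_w = V(w).T @ u(w)
        a_k = alpha * (g @ d)                      # Step 5: acceleration (reconstructed signs)
        b_k = -alpha * ((g - g_w) @ d)
        x_new = x + (-a_k / b_k) * alpha * d if b_k > 0 else w
        V_new, u_new = V(x_new), u(x_new)
        s = x_new - x                              # Step 6: structured diagonal W_k
        z = V_new.T @ (V_new @ s) + (V_new - V(x)).T @ u_new
        W_diag = np.ones_like(z)
        ok = s != 0
        ratio = np.divide(z, s, out=np.zeros_like(z), where=ok)
        keep = ok & (ratio >= eps_l) & (ratio <= eps_u)
        W_diag[keep] = ratio[keep]
        g_new = V_new.T @ u_new
        beta = (g_new @ d) / (d @ d)               # Step 7: beta_k^*
        d = -W_diag * g_new + beta * d             # Step 8: new direction
        x, g = x_new, g_new                        # Step 9
    return x
```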
Lemma 2.
Assume that all entries of the diagonal matrix $W_k = \mathrm{diag}(w_k^{1}, \ldots, w_k^{n})$ satisfy the safeguarding condition $0 < w_k^{i} \le \epsilon_u$ for all $i = 1, \ldots, n$ and all iterations $k$, where $\epsilon_u > 0$ is a constant. Then, the search direction $d_k$ defined in Equation (13) satisfies the descent condition:
$$g_k^T d_k \le -c\|g_k\|^{2}.$$
Note: 
In the proof below, we assume that each diagonal entry of $W_k$ satisfies $0 < w_k^{i} \le \epsilon_u$, as enforced by the safeguarding condition in Equation (12). This ensures that $\frac{1}{w_k^{i}} \ge \frac{1}{\epsilon_u}$, which is used to derive the lower bound in the descent property.
Proof. 
It follows for k = 0 that
$$g_0^T d_0 = -g_0^T W_0 g_0 = -\|g_0\|^{2}.$$
For $k \ge 1$, we have
$$\begin{aligned} g_k^T d_k &= g_k^T\big({-W_k g_k} + \beta_k^{*} d_{k-1}\big) \\ &= -g_k^T\,\mathrm{diag}\!\left(\frac{1}{w_k^{1}}, \frac{1}{w_k^{2}}, \ldots, \frac{1}{w_k^{n}}\right) g_k + \frac{g_k^T d_{k-1}}{\|d_{k-1}\|^{2}}\, g_k^T d_{k-1} \\ &\le -\frac{1}{\epsilon_u}\|g_k\|^{2} + \frac{|g_k^T d_{k-1}|}{\|d_{k-1}\|^{2}}\,|g_k^T d_{k-1}| \\ &\le -\frac{1}{\epsilon_u}\|g_k\|^{2} + \frac{\|g_k\|\,\|d_{k-1}\|}{\|d_{k-1}\|^{2}}\,\|g_k\|\,\|d_{k-1}\| \\ &= -\left(\frac{1}{\epsilon_u} - 1\right)\|g_k\|^{2}. \end{aligned}$$
Thus, defining c as
$$c := \frac{1}{\epsilon_u} - 1,$$
we obtain $g_k^T d_k \le -c\|g_k\|^{2}$. Hence, the proof is completed. □
Lemma 3.
Let $\{x_k\}$ and $\{d_k\}$ be the sequences defined by Algorithm 1. If $\epsilon > 0$ is such that $\|g(x_{k-1})\| \ge \epsilon$ for every $k \ge 1$, then there exists $\kappa > 0$ such that
$$\|d_k\| \le \kappa\|g_k\|.$$
Proof. 
If $k = 0$, then $\|d_0\| = \|{-W_0 g_0}\| \le \|W_0\|\,\|g_0\| \le \sqrt{n}\,\|g_0\|$.
If $k \ge 1$, then it follows that
$$\|d_k\| = \|{-W_k g_k} + \beta_k^{*} d_{k-1}\| \le \|W_k\|\,\|g_k\| + |\beta_k^{*}|\,\|d_{k-1}\|.$$
Taking the absolute value of the parameter $\beta_k^{*}$ from Equation (14) yields the following result:
$$|\beta_k^{*}| = \frac{|g_k^T d_{k-1}|}{\|d_{k-1}\|^{2}} \le \frac{\|g_k\|\,\|d_{k-1}\|}{\|d_{k-1}\|^{2}} = \frac{\|g_k\|}{\|d_{k-1}\|}.$$
It is important to note that the derivation in Equation (35) assumes the non-negativity of the parameter β k * , as defined in Equation (13). This assumption is justified by the fact that β k * is computed as a scalar projection of the gradient g k onto the previous direction d k 1 , normalized by the squared norm d k 1 2 , which is strictly positive. In practical implementations, β k * tends to remain non-negative when the algorithm is in the vicinity of a minimizer and the directions d k preserve a descent component. Moreover, should β k * ever become negative, it would still be controlled in the analysis by taking its absolute value, as shown in Equation (35). This ensures the bound remains valid regardless of the sign, but in our convergence analysis, we adopt the non-negativity assumption to simplify the derivation of norm bounds.
Substituting Equation (35) into Equation (34), we obtain the following:
$$\|d_k\| \le \|W_k\|\,\|g_k\| + \frac{\|g_k\|}{\|d_{k-1}\|}\,\|d_{k-1}\| \le \sqrt{n}\,\|g_k\| + \|g_k\| = (\sqrt{n} + 1)\|g_k\|.$$
Therefore, defining κ as
$$\kappa := \sqrt{n} + 1,$$
we obtain $\|d_k\| \le \kappa\|g_k\|$. Hence, we have the result. □
Lemma 4.
Based on Assumptions 1 and 2, we have
$$\lim_{k \to \infty}\alpha_k g_k^T d_k = 0.$$
Proof. 
Let x k + 1 be the new iterate produced by the DSCGA method, satisfying the inequality
$$f(x_{k+1}) \le f(x_k) + \delta\alpha_k g_k^T d_k.$$
Since the sequence { f ( x k ) } is monotonically decreasing and has a lower bound of 0, it converges to some limit, say f * . Taking the limit inferior on both sides of the inequality yields
$$\liminf_{k \to \infty} f(x_{k+1}) \le \liminf_{k \to \infty} f(x_k) + \delta\liminf_{k \to \infty}\alpha_k g_k^T d_k.$$
Since $f(x_k) \to f^{*}$, we conclude that
$$\liminf_{k \to \infty}\alpha_k g_k^T d_k \ge 0.$$
On the other hand, since $d_k$ is a descent direction, we have $g_k^T d_k < 0$ for all $k \ge 0$. This implies
$$\limsup_{k \to \infty}\alpha_k g_k^T d_k \le 0.$$
Thus, we conclude
$$\limsup_{k \to \infty}\alpha_k g_k^T d_k \le 0 \le \liminf_{k \to \infty}\alpha_k g_k^T d_k.$$
From this, we deduce that
$$\lim_{k \to \infty}\alpha_k g_k^T d_k = 0.$$
The following lemma is essential to establish the global convergence of the DSCGA method under the Wolfe line search strategy. Its proof is provided in [38].
Lemma 5.
Assume that the sequences $\{x_k\}$, $\{g_k\}$, and $\{d_k\}$ are produced by Algorithm 1, where $\alpha_k$ is chosen according to the Wolfe line search conditions (4) and (6). If Assumptions 1 and 2 hold, then the following summation is finite:
$$\sum_{k=1}^{+\infty}\frac{(g_k^T d_k)^{2}}{\|d_k\|^{2}} < +\infty.$$
Using Lemma 2 and Equation (43), we arrive at the following corollary.
Corollary 1.
Assume that the sequences { x k } , { g k } , and { d k } are produced by Algorithm 1, where α k is obtained using the Wolfe condition. If Assumptions 1 and 2 hold, then the following series is convergent:
$$\sum_{k=1}^{+\infty}\frac{\|g_k\|^{4}}{\|d_k\|^{2}} < +\infty.$$
Building on the earlier lemmas, we present the following global convergence result using the following theorem.
Theorem 1.
Suppose that the sequences { x k } , { d k } , and { g k } are generated by Algorithm 1, where α k is obtained according to the Wolfe conditions (4) and (6). If Assumption 1 holds, then
$$\lim_{k \to +\infty}\|g_k\| = 0.$$
Proof. 
One can directly deduce from Equation (33) that
$$\frac{1}{\kappa^{2}} \le \frac{\|g_k\|^{2}}{\|d_k\|^{2}}.$$
When both sides of this inequality are multiplied by $\|g_k\|^{2}$, we obtain the following:
$$\frac{\|g_k\|^{2}}{\kappa^{2}} \le \frac{\|g_k\|^{4}}{\|d_k\|^{2}}.$$
Summing over $k$, we have the following:
$$\frac{1}{\kappa^{2}}\sum_{k=1}^{+\infty}\|g_k\|^{2} \le \sum_{k=1}^{+\infty}\frac{\|g_k\|^{4}}{\|d_k\|^{2}}.$$
From the inequality above and Equation (44), we deduce that
$$\frac{1}{\kappa^{2}}\sum_{k=1}^{+\infty}\|g_k\|^{2} < +\infty.$$
This implies that
$$\lim_{k \to +\infty}\|g_k\|^{2} = 0.$$
Thus, we obtain
$$\lim_{k \to +\infty}\|g_k\| = 0.$$
This concludes the proof. □

4. Numerical Results

This section assesses the computational efficiency of the proposed algorithm by benchmarking it against two established methods: SNCG [33] and SSGM [39]. The evaluation metrics include the number of iterations (It), the number of function evaluations (Fe), the number of gradient evaluations (Ge), and the total CPU runtime. For the proposed algorithm, the parameters are configured as $\rho = 0.9$, $\sigma = 0.0001$, $\epsilon_l = 10^{-5}$, and $\epsilon_u = 10^{-1}$, while the parameters for SNCG and SSGM are directly sourced from their respective references [33,39]. The benchmark test functions used for this comparison are described in full in Table 1.
To evaluate the performance of the three algorithms, we employed the performance profiles introduced by Dolan and Moré [45]. These profiles were used to assess the algorithms based on the number of iterations, as well as the total number of function and gradient evaluations performed. In their work, Dolan and Moré [45] proposed a methodology for evaluating and comparing the performance of a solver $s$ over a set of problems. This method involves $n_s$ solvers and $n_f$ problems, as described below, and determines the computational cost $q_{f,s}$ for each solver–problem pair:
$$q_{f,s} = \text{cost of solving problem } f \text{ using solver } s.$$
Based on the computed cost q f , s , [45] proposed a metric to evaluate solver efficiency. The performance ratio, which serves as a comparative measure, is defined as
$$r_{f,s} = \frac{q_{f,s}}{\min\{q_{f,s} : s \in S\}}.$$
Additionally, the cumulative distribution function of performance can be expressed as
$$\rho_s(\tau) = \frac{1}{n_f}\,\mathrm{size}\{f \in F : \log_2(r_{f,s}) \le \tau\}.$$
This methodology facilitates the generation of performance profile graphs for each solver s S , leveraging the available data. These performance profiles illustrate the proportion ρ s ( τ ) of problems that a solver can solve within a factor τ of the best-performing solver. A solver is considered more efficient if it achieves a higher ρ s ( τ ) for a given τ . Consequently, the solver that maintains the highest performance profile curve across all values of τ is deemed the most efficient.
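The profile curves in Figures 2–5 can be reproduced from a cost matrix in a few lines of code. The sketch below implements the ratio $r_{f,s}$ and the cumulative distribution $\rho_s(\tau)$ defined above; the numbers in the example are made up for illustration and are not the paper's data:

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profiles: `costs` is an (n_f, n_s) array of
    q_{f,s} values (use np.inf to mark a failure); returns rho_s(tau) for
    each tau on the log2 scale used in the text."""
    best = costs.min(axis=1, keepdims=True)               # best solver per problem
    ratios = costs / best                                  # r_{f,s}
    return np.array([(np.log2(ratios) <= tau).mean(axis=0) for tau in taus])

# Hypothetical iteration counts for three solvers on four problems
costs = np.array([[4.0, 3.0, 2.0],
                  [13.0, np.inf, 6.0],
                  [50.0, 71.0, 45.0],
                  [5.0, 5.0, 2.0]])
print(performance_profile(costs, taus=[0.0, 1.0, 2.0]))   # one row per tau, one column per solver
```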
All algorithms were implemented and executed in MATLAB R2022a, and experiments were conducted on a Windows 11 platform equipped with a 10th Gen Intel Core i7-1065G7 processor (Santa Clara, CA, USA), with 16 GB of RAM and a 512 GB SSD. The Wolfe line search method was employed for the experiments, and the Wolfe condition parameters for the SNCG and SSGM algorithms adhered to the specifications outlined in their respective original studies. The termination criterion for all algorithms was defined as $\|g(x_k)\| < \epsilon$, where $\epsilon = 10^{-5}$, or if any of the following conditions were met:
  • The algorithm ran for over 1000 iterations.
  • More than 5000 evaluations of the function were performed.
If an algorithm was unable to solve a specific problem, it was marked as a failure, and the failure point was indicated by a symbol (⋆) in the table. A failure was defined as one of the following conditions: exceeding the maximum number of iterations, failing to reach the predefined convergence tolerance, or encountering numerical issues such as divergence or undefined values (e.g., NaNs or Infs).
From Table 2 and Table 3, we observe that of the 125 instances, DSCGA solves all (about 100%), while SNCG fails in 30 instances (about 24%) and SSGM fails in 10 (about 8%) problems. For example, as shown in Figure 2, when τ = 1 , the DSCGA algorithm successfully solves approximately 96% of the test problems using a lower iteration count. In comparison, SSGM solves about 63%, and SNCG solves around 52% of the problems. As illustrated in Figure 3, for τ = 1 , the DSCGA algorithm solves nearly 92% of the test problems with the fewest function evaluations. In contrast, SSGM and SNCG address approximately 63% and 45% of the problems, respectively, under the same metric. Figure 4 for τ = 1 shows that DSCGA effectively solves approximately 94% of the test problems with the fewest gradient evaluations. In comparison, SSGM and SNCG achieve success rates of 60% and 50%, respectively, based on gradient computations. Furthermore, considering the CPU time, Figure 5 highlights that DSCGA is highly efficient, solving around 71% of test problems in the least amount of time when τ = 1 . In contrast, SSGM and SNCG address approximately 58% and 54% of the problems, respectively, under the same criterion. An important insight from these figures is that, while all algorithms exhibited comparable performance initially, the proposed method consistently outperformed the others in all metrics as τ increased. This underscores the robustness of the method and its strong competitive edge in solving the problems under study.

5. Applications in Inverse Kinematics

This section focuses on the equation that governs the discrete kinematic model with four degrees of freedom, which characterizes the planar kinematic model of a four-joint system:
$$p(\theta) = \begin{bmatrix} \sum_{i=1}^{4} l_i\cos(\theta_1 + \theta_2 + \cdots + \theta_i) \\[1mm] \sum_{i=1}^{4} l_i\sin(\theta_1 + \theta_2 + \cdots + \theta_i) \end{bmatrix},$$
Here, $p(\cdot)$ represents the kinematic transformation, capturing the position of the robot's endpoint (or any specific component) as a function of its joint angles, denoted by $\theta \in \mathbb{R}^{4}$. For $i = 1, 2, 3, 4$, $l_i$ denotes the length of the $i$-th link. In the framework of robotic motion, $p(\theta)$ generally represents the position and orientation of the robot's end effector; in this particular study, it gives the robot's $x$–$y$ Cartesian position.
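A minimal Python sketch of this planar forward map and a finite-difference Jacobian follows; the unit link lengths and the step size h are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def forward_kinematics(theta, lengths=(1.0, 1.0, 1.0, 1.0)):
    """End-effector position p(theta) of the planar 4-link arm:
    each link contributes l_i * (cos, sin) of the cumulative joint angle."""
    phi = np.cumsum(theta)                     # theta_1, theta_1+theta_2, ...
    l = np.asarray(lengths)
    return np.array([np.sum(l * np.cos(phi)), np.sum(l * np.sin(phi))])

def jacobian_fd(theta, lengths=(1.0, 1.0, 1.0, 1.0), h=1e-7):
    """Forward-difference approximation of dp/dtheta (a 2 x 4 matrix)."""
    p0 = forward_kinematics(theta, lengths)
    J = np.zeros((2, len(theta)))
    for j in range(len(theta)):
        t = np.array(theta, dtype=float)
        t[j] += h
        J[:, j] = (forward_kinematics(t, lengths) - p0) / h
    return J

print(forward_kinematics(np.array([0.0, np.pi / 4, np.pi / 3, np.pi / 2])))
```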
Let the vector corresponding to the intended path at a given time instant $t_k$ be represented as $\varphi_{t_k} \in \mathbb{R}^{2}$. At each time instant $t_k$ within the interval $[0, t_f]$, the following least squares model is solved:
$$\min_{\theta \in \mathbb{R}^{4}}\frac{1}{2}\sum_{i=1}^{4}\big(p_i(\theta) - \varphi_{t_k,i}\big)^{2},$$
where $\varphi_{t_k,i}$ indicates the target Lissajous-curve trajectory at $t_k$. Additionally, several Lissajous curves in 4DOF have been used; refer to [46] for an example. The forward kinematic function $p(\theta)$ maps joint space to Cartesian space:
$$\psi = p(\theta).$$
The inverse kinematic problem is then formulated as an NLS problem:
$$\min_{\theta \in \mathbb{R}^{n}}\frac{1}{2}\|u(\theta)\|^{2},$$
where the residual function is defined as follows:
$$u(\theta) = p(\theta) - \psi_{t_k}.$$
The gradient is then computed by the following:
$$g_{t_k} = V^{T}(\theta_{t_k})\, u(\theta_{t_k}),$$
where $V(\theta_{t_k}) \in \mathbb{R}^{m \times n}$ is the Jacobian of $u(\theta_{t_k})$, defined as $V(\theta_{t_k}) = \frac{\partial p(\theta_{t_k})}{\partial\theta_{t_k}}$. We update the joint variables using the DSCGA iterative formula:
$$\theta_{t_{k+1}} = \theta_{t_k} + \alpha_{t_k} d_{t_k},$$
where d t k is the search direction of the DSCGA formula:
$$d_{t_k}^{DSCGA} = -W_{t_k} g_{t_k} + \beta_{t_k}^{*} d_{t_{k-1}}, \qquad t_k \ge 1.$$
To simulate the results and solve the inverse kinematics problem, the following pseudo-code was utilized.
The following parameters were used to implement the strategy and simulate the outcomes:
  • The initial joint angular vector is $\theta_{t_0} = \left(0, \frac{\pi}{4}, \frac{\pi}{3}, \frac{\pi}{2}\right)^{T}$ at the starting time $t_0 = 0$.
  • The length of the $i$-th link is denoted by $l_i$, where $i = 1, \ldots, 4$.
  • The total task duration is $t_f = 10$ s.
The selected Lissajous curve trajectories exhibit nonlinear and time-varying characteristics due to the presence of sinusoidal components in both the x and y directions. By varying the frequencies and phase shifts, the resulting paths are oscillatory, non-repetitive, and highly dynamic, making them particularly challenging for trajectory tracking, especially in inverse kinematics and motion control. These conditions often lead to ill-conditioned Jacobians and require solvers to handle rapid variations with high precision.
Problem 26.
The motion of the end-effector is defined by a Lissajous pattern, as given below.
$$\varphi_{t_k} = \begin{bmatrix} \dfrac{3}{2} + 0.3\sin\!\left(4 t_k + \dfrac{2\pi}{3}\right) \\[2mm] \dfrac{3}{2} + 0.3\cos\!\left(3 t_k + \dfrac{2\pi}{3}\right) \end{bmatrix}.$$
The results from the simulation of problem 26 are shown in the following figures. Figure 6a displays the robot trajectories generated using Algorithm 2. Figure 6b illustrates the optimal synthesis of the robot paths for the given task. Furthermore, Figure 7a,b present the residual errors for each method used in the numerical analysis.
Algorithm 2: Solution of the 4DOF Model Using the DSCGA Method
Step 1: Inputs: $t_0$, $\theta_{t_0}$, $t_f$, the time step $g$, and $K_{\max}$.
Step 2: For $k = 1$ to $K_{\max}$, repeat: set $t_k = k \cdot g$;
Step 3: Compute $\varphi_{t_k,i}$;
Step 4: Calculate $\theta_{t_k}$ using DSCGA($\theta_{t_0}$, $\varphi_{t_k,i}$), as detailed in Algorithm 1;
Step 5: Set $\theta_{\mathrm{new}} = [\theta_{t_0}; \theta_{t_k}]$;
Step 6: Return $\theta_{\mathrm{new}}$.
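The loop of Algorithm 2 can be sketched as follows, assuming a hypothetical routine dscga_ik(theta_init, target) that minimizes $\frac{1}{2}\|p(\theta) - \varphi_{t_k}\|^{2}$ (for instance, the DSCGA driver sketched in Section 2 applied to the residual $u(\theta) = p(\theta) - \varphi_{t_k}$); here each subproblem is warm-started from the previous solution, and the number of time steps is an arbitrary illustrative choice:

```python
import numpy as np

def lissajous_problem_26(t):
    """Target trajectory of Problem 26 (as reconstructed above)."""
    return np.array([1.5 + 0.3 * np.sin(4 * t + 2 * np.pi / 3),
                     1.5 + 0.3 * np.cos(3 * t + 2 * np.pi / 3)])

def track_trajectory(dscga_ik, theta0, t_f=10.0, steps=200):
    """Algorithm 2: solve one inverse-kinematics subproblem per time instant t_k."""
    theta = np.array(theta0, dtype=float)
    history = []
    for t in np.linspace(0.0, t_f, steps):
        theta = dscga_ik(theta, lissajous_problem_26(t))   # NLS subproblem at t_k
        history.append(theta.copy())
    return np.array(history)
```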
The findings for Problem 26 are summarized in Table 4, which includes the residual tracking error, CPU time, and the number of iterations. Requiring just 52 iterations, DSCGA shows the best performance among the methods examined, whereas SSGM and SNCG require 81 and 77 iterations, respectively. DSCGA is also the most computationally efficient, solving the problem in the shortest CPU time of 0.299 s.
The residual error norms are shown in Figure 7a,b, which provide a comparison of the performance of the algorithms. With the lowest observed error of roughly $10^{-6}$, the DSCGA algorithm exhibits better accuracy. SSGM and SNCG follow, each with an error of about $10^{-5}$. These results highlight how the DSCGA algorithm reduces residual error more precisely than the other approaches.
Problem 27.
The following Lissajous curve is described by
$$\varphi_{t_k} = \begin{bmatrix} \dfrac{3}{2} + \dfrac{1}{5}\sin(t_k) \\[2mm] \dfrac{3}{2} + \dfrac{1}{5}\sin(4 t_k) \end{bmatrix}.$$
The simulation results for Problem 27 are shown in the figures below. Figure 8a depicts the robot paths generated by Algorithm 2, while Figure 8b demonstrates the effective execution of the robot trajectories for the specified task. Furthermore, the residual errors for each method used in the experiment are presented in Figure 9a,b.
The results for Problem 27 are shown in Table 5, which includes the residual tracking error, CPU time, and the number of iterations. DSCGA is the most efficient method, requiring only 58 iterations to solve the problem, whereas SSGM and SNCG require 89 and 76 iterations, respectively. DSCGA also completes the task the fastest, with a minimal CPU time of 0.433 s.
The output data from the model solution is presented in the charts in Figure 8. As shown in Figure 8b, the end effector accurately followed the intended trajectory, effectively completing the task. The residual error norms are shown in Figure 9, with DSCGA achieving the lowest error rate of approximately $10^{-6}$. Following DSCGA, SSGM and SNCG showed error rates of around $10^{-5}$ and $10^{-4}$, respectively.

6. Conclusions

This study introduced an accelerated diagonally structured CG method designed to solve NLS optimization problems. The method guarantees a sufficient descent property without the need for line search. Its performance was evaluated using a range of benchmark problems from existing literature, with global convergence proven under strong Wolfe line search conditions. The findings emphasize the competitive edge of the method compared to other similar approaches documented in the literature. The method was also applied to solve the inverse kinematics problem in robotic motion control for a 4DOF system, where it outperformed both SNCG and SSGM in several performance metrics. Future work should focus on developing more advanced accelerated CG algorithms that can effectively handle high-dimensional problems commonly encountered in scientific and engineering applications.

Author Contributions

Conceptualization, R.B.Y. and A.B.G.; methodology, R.B.Y. and N.Z.; software, S.M.I.; validation, N.Z., A.B.G. and S.M.I.; formal analysis, R.B.Y.; investigation, N.Z. and R.B.Y.; resources, N.Z.; writing—original draft preparation, R.B.Y. and A.B.G.; writing—review and editing, R.B.Y., A.B.G., N.Z. and S.M.I.; visualization, R.B.Y. and S.M.I.; supervision, A.B.G. and N.Z.; project administration, A.B.G.; funding acquisition, A.B.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP22501).

Data Availability Statement

All the data can be found within the manuscript.

Acknowledgments

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP22501).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NLS   Nonlinear least squares
CG    Conjugate gradient
SCG   Structured conjugate gradient
DSCGA Diagonally structured conjugate gradient with acceleration
SQN   Structured Quasi-Newton
4DOF  Four degrees of freedom
SD    Steepest descent
GN    Gauss–Newton
LM    Levenberg–Marquardt
NM    Newton's method
QN    Quasi-Newton
TR    Trust-region
HS    Hestenes–Stiefel
PRP   Polak–Ribiere–Polyak
LS    Liu–Storey
BB    Barzilai–Borwein
SNCG  Scaled conjugate gradient
SSGM  Structured spectral gradient method

References

  1. Yunus, R.B.; El-Saeed, A.R.; Zainuddin, N.; Daud, H. A structured RMIL conjugate gradient-based strategy for nonlinear least squares with applications in image restoration problems. AIMS Math. 2025, 10, 14893–14916. [Google Scholar] [CrossRef]
  2. Yahaya, M.M.; Kumam, P.; Awwal, A.M.; Chaipunya, P.; Aji, S.; Salisu, S. A new generalized quasi-newton algorithm based on structured diagonal hessian approximation for solving nonlinear least-squares problems with application to 3dof planar robot arm manipulator. IEEE Access 2022, 10, 10816–10826. [Google Scholar] [CrossRef]
  3. Salihu, N.; Kumam, P.; Awwal, A.M.; Arzuka, I.; Seangwattana, T. A structured Fletcher-Revees spectral conjugate gradient method for unconstrained optimization with application in robotic model. In Operations Research Forum; Springer: Berlin/Heidelberg, Germany, 2023; Volume 4, p. 81. [Google Scholar]
  4. Henn, S. A Levenberg–Marquardt scheme for nonlinear image registration. BIT Numer. Math. 2003, 43, 743–759. [Google Scholar] [CrossRef]
  5. Chen, Z.; Shao, H.; Liu, P.; Li, G.; Rong, X. An efficient hybrid conjugate gradient method with an adaptive strategy and applications in image restoration problems. Appl. Numer. Math. 2024, 204, 362–379. [Google Scholar] [CrossRef]
  6. Diphofu, T.; Kaelo, P. A modified extended Fletcher–Reeves conjugate gradient method with an application in image restoration. Int. J. Comput. Math. 2025, 102, 830–845. [Google Scholar] [CrossRef]
  7. Wang, L.; Wu, H.; Luo, C.; Xie, Y. A novel preconditioned modified conjugate gradient method for vehicle–bridge moving force identification. In Structures; Elsevier: Amsterdam, The Netherlands, 2025; Volume 73, p. 108322. [Google Scholar]
  8. Ciaburro, G.; Iannace, G. Modeling acoustic metamaterials based on reused buttons using data fitting with neural network. J. Acoust. Soc. Am. 2021, 150, 51–63. [Google Scholar] [CrossRef]
  9. Yahaya, M.M.; Kumam, P.; Chaipunya, P.; Awwal, A.M.; Wang, L. On diagonally structured scheme for nonlinear least squares and data-fitting problems. RAIRO Oper. Res. 2024, 58, 2887–2905. [Google Scholar] [CrossRef]
  10. Passi, R.M. Use of nonlinear least squares in meteorological applications. J. Appl. Meteorol. (1962–1982) 1977, 16, 827–832. [Google Scholar] [CrossRef]
  11. Omesa, A.U.; Ibrahim, S.M.; Yunus, R.B.; Moghrabi, I.A.; Waziri, M.Y.; Sambas, A. A brief survey of line search methods for optimization problems. Results Control Optim. 2025, 19, 100550. [Google Scholar] [CrossRef]
  12. Ibrahim, S.M.; Muhammad, L.; Yunus, R.B.; Waziri, M.Y.; Kamaruddin, S.b.A.; Sambas, A.; Zainuddin, N.; Jameel, A.F. The global convergence of some self-scaling conjugate gradient methods for monotone nonlinear equations with application to 3DOF arm robot model. PLoS ONE 2025, 20, e0317318. [Google Scholar] [CrossRef]
  13. Wang, X.; Yuan, G. An accelerated descent CG algorithm with clustering the eigenvalues for large-scale nonconvex unconstrained optimization and its application in image restoration problems. J. Comput. Appl. Math. 2024, 437, 115454. [Google Scholar] [CrossRef]
  14. Liu, P.; Li, J.; Shao, H.; Shao, F.; Liu, M. An accelerated Dai–Yuan conjugate gradient projection method with the optimal choice. Eng. Optim. 2025, 1–29. [Google Scholar] [CrossRef]
  15. Wu, X.; Ye, X.; Han, D. A family of accelerated hybrid conjugate gradient method for unconstrained optimization and image restoration. J. Appl. Math. Comput. 2024, 70, 2677–2699. [Google Scholar] [CrossRef]
  16. Nosrati, M.; Amini, K. A new structured spectral conjugate gradient method for nonlinear least squares problems. Numer. Algorithms 2024, 97, 897–914. [Google Scholar] [CrossRef]
  17. Mo, Z.; Ouyang, C.; Pham, H.; Yuan, G. A stochastic recursive gradient algorithm with inertial extrapolation for non-convex problems and machine learning. Int. J. Mach. Learn. Cybern. 2025, 16, 4545–4559. [Google Scholar] [CrossRef]
  18. Mohammad, H.; Sulaiman, I.M.; Mamat, M. Two diagonal conjugate gradient like methods for unconstrained optimization. J. Ind. Manag. Optim. 2024, 20, 170–187. [Google Scholar] [CrossRef]
  19. Hestenes, M.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Inst. Stand. Technol. 1952, 49, 409–435. [Google Scholar] [CrossRef]
  20. Polak, B.; Ribiere, G. Note on the convergence of conjugate direction methods. Math. Model. Numer. Anal. 1969, 16, 35–43. [Google Scholar]
  21. Polyak, B.T. The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112. [Google Scholar] [CrossRef]
  22. Liu, Y.; Storey, C. Efficient generalized conjugate gradient algorithms, part 1: Theory. J. Optim. Theory Appl. 1991, 69, 129–137. [Google Scholar] [CrossRef]
  23. Hager, W.; Zhang, H. A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2006, 2, 35–58. [Google Scholar]
  24. Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming; Springer Science: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  25. Mohammad, H.; Waziri, M.Y.; Santos, S.A. A brief survey of methods for solving nonlinear least-squares problems. Numer. Algebra Control Optim. 2019, 9, 1–13. [Google Scholar] [CrossRef]
  26. Dennis, J.; Martinez, H.J.; Tapia, R.A. Convergence Theory for the Structured BFGS Secant Method with an Application to Nonlinear Least Squares. J. Optim. Theory Appl. 1989, 61, 161–178. [Google Scholar] [CrossRef]
  27. Kobayashi, M.; Narushima, Y.; Yabe, H. Nonlinear Conjugate Gradient Methods with Structured Secant Condition for Nonlinear Least Squares Problems. J. Comput. Appl. Math. 2010, 234, 375–397. [Google Scholar] [CrossRef]
  28. Hanke, M.; Nagy, J.G.; Vogel, C. Quasi-Newton approach to nonnegative image restorations. Linear Algebra Appl. 2000, 316, 223–236. [Google Scholar] [CrossRef]
  29. Hochbruck, M.; Hönig, M. On the convergence of a regularizing Levenberg–Marquardt scheme for nonlinear ill-posed problems. Numer. Math. 2010, 115, 71–79. [Google Scholar] [CrossRef]
  30. Pes, F.; Rodriguez, G. A doubly relaxed minimal-norm Gauss–Newton method for underdetermined nonlinear least-squares problems. Appl. Numer. Math. 2022, 171, 233–248. [Google Scholar] [CrossRef]
  31. Huynh, D.Q.; Hwang, F.N. An accelerated structured quasi-Newton method with a diagonal second-order Hessian approximation for nonlinear least squares problems. J. Comput. Appl. Math. 2024, 442, 115718. [Google Scholar] [CrossRef]
  32. Mohammad, H.; Santos, S.A. A Structured Diagonal Hessian Approximation Method with Evaluation Complexity Analysis for Nonlinear Least Squares. Comput. Appl. Math. 2018, 37, 6619–6653. [Google Scholar] [CrossRef]
  33. Dehghani, R.; Mahdavi-Amiri, R. Scaled nonlinear conjugate gradient methods for nonlinear least squares problems. Numer. Algorithms 2018, 82, 1–20. [Google Scholar] [CrossRef]
  34. Yunus, R.B.; Zainuddin, N.; Daud, H.; Kannan, R.; Karim, S.A.A.; Yahaya, M.M. A Modified Structured Spectral HS Method for Nonlinear Least Squares Problems and Applications in Robot Arm Control. Mathematics 2023, 11, 3215. [Google Scholar] [CrossRef]
  35. Yahaya, M.M.; Kumam, P.; Chaipunya, P.; Seangwattana, T. Structured Adaptive Spectral-Based Algorithms for Nonlinear Least Squares Problems with Robotic Arm Modelling Applications. Comput. Appl. Math. 2023, 42, 320. [Google Scholar] [CrossRef]
  36. Yunus, R.B.; Zainuddin, N.; Daud, H.; Kannan, R.; Yahaya, M.M.; Al-Yaari, A. An Improved Accelerated 3-Term Conjugate Gradient Algorithm with Second-Order Hessian Approximation for Nonlinear Least-Squares Optimization. J. Math. Comput. Sci. 2025, 36, 263–274. [Google Scholar] [CrossRef]
  37. Andrei, N. Accelerated conjugate gradient algorithm with finite difference Hessian/vector product approximation for unconstrained optimization. J. Comput. Appl. Math. 2009, 230, 570–582. [Google Scholar] [CrossRef]
  38. Zoutendijk, G. Nonlinear Programming, Computational Methods. In Integer and Nonlinear Programming; Abadie, J., Ed.; North-Holland: Amsterdam, The Netherlands, 1970. [Google Scholar]
  39. Muhammad, H.; Waziri, M.Y. Structured two-point step size gradient methods for nonlinear least squares. J. Optim. Theory Appl. 2019, 181, 298–317. [Google Scholar] [CrossRef]
  40. Cruz, W.L.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
  41. Moré, J.J.; Garbow, B.S.; Hillstrom, K.E. Testing unconstrained optimization software. ACM Trans. Math. Softw. 1981, 7, 17–41. [Google Scholar] [CrossRef]
  42. Liu, J.; Li, S. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  43. Lukšan, L.; Vlček, J. Test Problems for Unconstrained Optimization; Technical Report 897; Institute of Computer Science Academy of Sciences of the Czech Republic: Praha, Czech Republic, 2003. [Google Scholar]
  44. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef]
  45. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
  46. Yunus, R.B.; Zainuddin, N.; Daud, H.; Kannan, R.; Yahaya, M.M.; Karim, S.A.A. New CG-Based Algorithms With Second-Order Curvature Information for NLS Problems and a 4DOF Arm Robot Model. IEEE Access 2024, 12, 61086–61103. [Google Scholar] [CrossRef]
Figure 1. Methods for solving NLS.
Figure 2. Performance profiles according to the number of iterations.
Figure 3. Performance profiles according to function evaluation.
Figure 4. Performance profiles according to gradient evaluation.
Figure 5. Performance profiles according to CPU time.
Figure 6. Robot trajectories generated per the Lissajous curve’s (a) end-effector trajectory and (b) the Lissajous curve’s planned trajectory.
Figure 7. The residual error of the Lissajous curve is observed along the (a) x axis and (b) the y axis.
Figure 8. Robot trajectories generated per the Lissajous curve’s (a) end-effector trajectory and (b) the Lissajous curve’s planned trajectory.
Figure 9. The residual error of the Lissajous curve is observed along the (a) x axis and (b) the y axis.
Table 1. Benchmark problems sourced from [40,41,42,43,44].
No. | Function                  | No. | Function
1   | PENALTY FUNCTION 1        | 13  | EXPONENTIAL FUNCTION 2
2   | VARIABLY DIMENSIONED      | 14  | SINGULAR FUNCTION 2
3   | TRIGONOMETRIC FUNCTION    | 15  | EXT. FREUDENSTEIN AND ROTH
4   | DISCRETE BOUNDARY-VALUE   | 16  | EXT. POWELL SINGULAR FUNCTION
5   | LINEAR FULL RANK          | 17  | FUNCTION 21
6   | PROBLEM 202               | 18  | BROYDEN TRIDIAGONAL FUNCTION
7   | PROBLEM 206               | 19  | EXTENDED HIMMELBLAU
8   | PROBLEM 212               | 20  | FUNCTION 27
9   | RAYDAN 1                  | 21  | TRILOG FUNCTION
10  | RAYDAN 2                  | 22  | ZERO JACOBIAN FUNCTION
11  | SINE FUNCTION 2           | 23  | EXPONENTIAL FUNCTION
12  | EXPONENTIAL FUNCTION 1    | 24  | FUNCTION 18
    |                           | 25  | BROWN ALMOST FUNCTION
Table 2. Numerical results from problems 1–14; the symbol (⋆) denotes failure.
METHODS | SNCG | SSGM | DSCGA
FUNCS | DIM | It | Fe | Ge | TIME | It | Fe | Ge | TIME | It | Fe | Ge | TIME
1.300047100.17623580.01832570.2012
 600047100.06743580.01952570.0798
 900047100.04823580.02373570.0630
 12,00047100.04133580.01973570.0315
 15,00047100.03243580.02793570.0956
2.30001381532150.788241618490.2485
 60001331852501.00242532760.7239
 90001542612632.81267071710.0698
 12,0001773213233.97503295970.2875
 15,0001833873505.22802249671.6995
3.3000504461510.4861711142140.654545471360.4324
 6000605781811.2966821462472.354135521061.0266
 9000482531450.8607971752924.011373942202.623
 12,000667651994.14571061843194.630667902022.8439
 15,000844542534.1474981852955.907861851842.1456
4.3000525220.1225523400.09232670.0667
 6000735160.0753735250.08962670.1102
 9000113740.037796140.02563880.1157
 12,000153940.03491135250.0954411160.0555
 15,000204640.07111359400.1001526180.0598
5.30002570.05142570.00732570.0139
 60002570.03852570.00922570.0175
 90002570.01652570.01332570.0428
 12,0002570.01762570.01912570.0425
 15,0002570.25772570.02512570.0817
6.300049130.00804513160.0126511160.0275
 600049130.02215513160.040159511160.0439
 900049130.0261513160.0329511160.0509
 12,00049130.0331513160.0427511160.1295
 15,00049130.0484513160.0593511160.1678
7.3000613190.0222616190.2530512160.0760
 6000613190.1060616190.5337512160.0500
 9000613190.0659616190.6334512160.0667
 12,000613190.0603616190.6633512160.1283
 15,000613190.1582616190.0774512160.1472
8.30001021310.0577711220.115549130.0431
 60001021310.0898711220.123449130.0774
 90001021310.22631711220.243549130.2239
 12,0001021310.2282711220.235549130.1138
 15,0001021310.2192711220.325949130.3342
9.300055160.0641
 600055160.2054
 9000718220.0152
 12,000718220.0199
 15,000718220.0357
10.3000511160.040449130.009549130.0161
 6000511160.022449130.019149130.0312
 9000511160.032849130.021449130.0465
 12,000511160.050549130.031749130.0914
 15,000511160.049449130.046549130.0957
11.30002340.03321340.01731340.0146
 60002340.02591340.01191340.0407
 90002340.02511340.01871340.0494
 12,0002340.02471340.02471340.0094
 15,0002340.02551340.04081340.0249
12.30004780.0563713170.503
 600051090.0192614160.0115
 900051090.02801325340.3499
 12,000712130.03321635460.4575
 15,000815140.06801838550.0261
13.3000834442500.29251334400.3015
 6000673572020.37421531460.6974
 9000794212380.55034643490.8374
 12,000372591120.09352137640.3054
 15,000613271840.37675064510.3524
14.3000512160.0465717220.1774512160.0259
 6000512160.0272717220.3432512160.0509
 9000512160.0343717220.4903512160.0657
 12,000512160.0409717220.0525512160.0755
 15,000512160.1378717220.7693512160.1830
Table 3. Numerical results from problems 15–25; the symbol (⋆) denotes failure.
METHODS | SNCG | SSGM | DSCGA
FUNCS | DIM | It | Fe | Ge | TIME | It | Fe | Ge | TIME | It | Fe | Ge | TIME
15.30002669790.06371017310.0905
 60002669790.36391017310.2584
 90002669790.22771023310.4727
 12,0002669790.25711334400.6318
 15,0002669790.34891438410.4181
16.300019149580.1235518160.25241012130.0617
 600019149580.1494518160.63221012130.3435
 900019149580.2219518160.48661012130.1647
 12,00019149580.4396518160.54011012130.2115
 15,00019149580.4359518160.62611012130.4103
17.3000674322020.5096592761780.4913664281990.8889
 6000674322020.9914592761780.8029664281991.2145
 9000674322021.9478592761780.89849664281991.8835
 12,000674322021.8094592761781.4494664281992.3523
 15,000674322022.0496592761781.4754664281992.5114
18.3000231011092.4467
 6000392451162.8094
 9000402561172.8123
 12,000412551172.8332
 15,000503311522.6772
19.3000523351570.31743673572020.243617114520.14464
 6000523351570.2549834442500.451417114520.4019
 9000523351570.4769854522560.871817114520.2591
 12,000523351570.4952763992290.684017114520.3423
 15,000523351570.9069894732680.981617114520.3721
20.300031242940.1337613271840.2679867240.0534
 600032246970.2033342461030.2648868250.1076
 900032244970.4123433001300.5784868250.1785
 12,00026216790.4675423471270.9517868250.2726
 15,00031229940.8068342181030.5337868250.2123
21.3000512160.0460717220.2688512160.0427
 6000512160.0338717220.4862512160.2084
 9000512160.0505717220.0599512160.1205
 12,000512160.0634717220.1868512160.0967
 15,000512160.0723717220.1079512160.1268
22.3000342411030.1115402661210.1011949280.0387
 600032227970.1781422831270.2281743220.0766
 900029222880.3336392821180.6486644190.0174
 12,00024171730.5712412731240.71591166340.3608
 15,00029230880.5215423051270.6488956280.3549
23.300026120790.2191622233840.648028272850.3291
 60001954580.1244622233840.648113105400.2024
 90002058610.157921584984751.406918173550.4258
 12,00025168760.4151523101570.800414103430.5185
 15,00026231790.6274463851391.14421481430.3717
24.30002330.03801220.03601110.0042
 60002330.00311220.02491110.0024
 90002330.00341220.03911110.0033
 12,0002330.00371220.04851110.0055
 15,0002330.01931220.04861110.0075
25.30001729370.03002179640.05101321220.0276
 60001731390.03912386700.17591322220.0530
 90001732390.0582122387700.20891424220.0811
 12,0001833400.06982491730.48861527220.0381
 15,0001933400.12732388700.36941729290.4467
Table 4. The performance of the DSCGA algorithm compared to SSGM and SNCG algorithms on Problem 26.
Method | NI | Time (s) | Re
DSCGA  | 52 | 0.299    | $10^{-6}$
SNCG   | 77 | 0.824    | $10^{-5}$
SSGM   | 81 | 0.496    | $10^{-5}$
Table 5. The performance of the DSCGA algorithm compared to SSGM and SNCG algorithms on Problem 27.
Method | NI | Time (s) | Re
DSCGA  | 58 | 0.433    | $10^{-6}$
SNCG   | 76 | 0.469    | $10^{-4}$
SSGM   | 89 | 0.604    | $10^{-5}$

