Article

A Modified Three-Term Conjugate Descent Derivative-Free Method for Constrained Nonlinear Monotone Equations and Signal Reconstruction Problems

1 Department of Science, School of Continuing Education, Bayero University, BUK, Kano PMB 3011, Nigeria
2 Department of Mathematical Sciences, Faculty of Science, Abubakar Tafawa Balewa University, Bauchi PMB 0248, Nigeria
3 Numerical Optimization Research Group, Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
4 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Ga-Rankuwa, Pretoria 0204, South Africa
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1649; https://doi.org/10.3390/math12111649
Submission received: 28 April 2024 / Revised: 17 May 2024 / Accepted: 19 May 2024 / Published: 24 May 2024

Abstract

Iterative methods for solving constrained nonlinear monotone equations have been developed and improved by many researchers. The aim of this research is to present a modified three-term conjugate descent (TTCD) derivative-free method for constrained nonlinear monotone equations. The proposed algorithm requires little storage, so it is capable of solving large-scale nonlinear equations. At every iteration, the algorithm generates a descent and bounded search direction $d_k$, independently of the line search. The method is shown to be globally convergent under monotonicity and Lipschitz continuity conditions. Numerical results show that the suggested method can serve as an alternative for finding approximate solutions of nonlinear monotone equations. Furthermore, the method is promising for the reconstruction of sparse signals.

1. Introduction

Consider the constrained nonlinear monotone equations of the following type:
$$P(x) = 0, \quad x \in G, \tag{1}$$
where $G \subseteq \mathbb{R}^n$ is a closed, non-empty and convex set and $P : \mathbb{R}^n \to \mathbb{R}^n$ is monotone, that is,
$$(P(x) - P(y))^T (x - y) \ge 0, \quad \text{for all } x, y \in \mathbb{R}^n. \tag{2}$$
Problem (1) has drawn a lot of attention due to its applicability in many areas, such as power flow equations [1,2], financial forecasting problems [3], generalized proximal algorithms with Bregman distances [4], and economic equilibrium problems [5,6]. Furthermore, monotone variational inequalities can be transformed into monotone nonlinear equations [7,8]. Numerous approaches, such as Newton's method, quasi-Newton methods and others, including the Levenberg–Marquardt method, are used to solve (1) [9,10,11] because of their fast convergence rates (see [12,13,14,15] for details). However, they require computing and storing the inverse of the Jacobian matrix at every iteration, which may be expensive or unavailable when the Jacobian is singular. Consequently, many researchers have suggested derivative-free approaches that do not require computation of the Jacobian matrix; see, among others, [16,17,18,19,20,21,22,23,24,25]. A modified three-term conjugate descent (CD) derivative-free method is of interest in this research. The classical CD method was first presented by Fletcher [26], particularly for solving unconstrained optimization problems. Yan et al. [27] presented a globally convergent derivative-free method for solving large-scale nonlinear monotone equations based on the projection method proposed in [28]; the reported numerical experiments illustrated that the methods are efficient. In [29], Koorapetse described a new three-term derivative-free method based on a projection approach for solving large-scale nonlinear monotone equations; in addition, the search direction satisfies the sufficient descent condition and the method was proved to be globally convergent. Abubakar et al. proposed a three-term derivative-free method in [30], in which Dai–Yuan, Hestenes–Stiefel, Fletcher–Reeves and Polak–Ribière–Polyak type methods arise as special cases.
Jie and Zhong [31] modified the HS method for unconstrained optimization problems. The global convergence of their algorithm was proved under the standard Wolfe line search. An interesting feature of their work is that the search directions always satisfy the sufficient descent condition, independently of any line search, as well as the conjugacy property. Motivated by the work of Jie and Zhong [31] and by the fact that three-term derivative-free methods based on modified CD methods are rare in the literature, this paper develops a modified three-term, CD-type, derivative-free method for constrained nonlinear monotone equations and applies it to sparse signal problems. The direction is sufficiently descent and bounded, independently of the line search. The global convergence of the method is obtained under monotonicity and Lipschitz continuity conditions. Furthermore, numerical experiments are provided to show the efficiency of the method in comparison with existing methods. The remainder of this article is organized as follows: the proposed algorithm and its derivation are given in Section 2; Section 3 provides the theoretical results; Section 4 presents the numerical experiments for the solution of nonlinear monotone equations; sparse signal reconstruction is detailed in Section 5; conclusions are drawn in Section 6. Throughout, $\|\cdot\|$ denotes the Euclidean norm and $P_k := P(x_k)$. In addition, $H_G[\cdot]$ is the projection mapping from $\mathbb{R}^n$ onto $G$, given by $H_G[x] = \arg\min\{\|x - z\| : z \in G\}$ for a non-empty, convex and closed set $G \subseteq \mathbb{R}^n$.
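For simple feasible sets, the projection $H_G[\cdot]$ has a closed form. The following is a minimal sketch (the function names `project_box` and `project_ball` are ours, introduced only for illustration), assuming $G$ is a box or a Euclidean ball:

```python
import numpy as np

def project_box(x, lo, hi):
    """H_G[x] onto a box G = [lo, hi]^n: the closest point of G to x,
    obtained by clipping each coordinate."""
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    """H_G[x] onto the Euclidean ball of given radius centered at the origin:
    x itself if it lies in G, otherwise x rescaled onto the boundary."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

x = np.array([2.0, -3.0, 0.5])
print(project_box(x, 0.0, 1.0))
print(project_ball(x))
```

Both maps are non-expansive, which is the property the projection step of the algorithm relies on.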

2. Algorithm and Its Derivation

In this section, we present the framework of the algorithm and its derivation, using an idea similar to that described in [31]. The proposed descent search direction is given by
$$d_k = \begin{cases} -P_k, & \text{if } k = 0, \\[4pt] -P_k - \dfrac{P_k^T (P_k + d_{k-1})}{\lambda_k}\, d_{k-1} + \dfrac{\|P_k\|^2}{\lambda_k}\, d_{k-1}, & \text{if } k \ge 1, \end{cases} \tag{3}$$
where
$$\lambda_k = d_{k-1}^T \hat{P}_{k-1}, \tag{4}$$
$$\hat{P}_{k-1} = P_{k-1} + t_k d_{k-1}, \tag{5}$$
$$t_k = 1 + \max\left\{0, \ -\frac{d_{k-1}^T P_{k-1}}{\|d_{k-1}\|^2}\right\}. \tag{6}$$
Since the idea originates from an unconstrained optimization problem [31], the following modification is needed to extend it to systems of constrained nonlinear monotone equations.
Remark 1.
From the definition of $\lambda_k$, we have
$$\begin{aligned} \lambda_k &= d_{k-1}^T \hat{P}_{k-1} = d_{k-1}^T \left(P_{k-1} + t_k d_{k-1}\right) = d_{k-1}^T P_{k-1} + t_k \|d_{k-1}\|^2 \\ &= d_{k-1}^T P_{k-1} + \left(1 + \max\left\{0, \ -\frac{d_{k-1}^T P_{k-1}}{\|d_{k-1}\|^2}\right\}\right) \|d_{k-1}\|^2 \\ &\ge d_{k-1}^T P_{k-1} + \|d_{k-1}\|^2 - d_{k-1}^T P_{k-1} = \|d_{k-1}\|^2 > 0. \end{aligned}$$
Therefore, $\lambda_k > 0$ whenever $d_{k-1} \ne 0$; thus, $\lambda_k$ is well defined.
The novel search direction $d_k$ always satisfies the sufficient descent property
$$P_k^T d_k \le -\|P_k\|^2. \tag{7}$$
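The direction (3)–(6) and the descent property (7) can be checked numerically. Below is a minimal sketch (the function name `direction` is ours, not from the paper); for random vectors it verifies that $P_k^T d_k \le -\|P_k\|^2$ holds regardless of how $d_{k-1}$ and $P_{k-1}$ were generated:

```python
import numpy as np

def direction(P_k, d_prev, P_prev):
    """Search direction (3)-(6): d_0 = -P_0; for k >= 1, a three-term
    CD-type update scaled by lambda_k."""
    if d_prev is None:
        return -P_k
    nd2 = d_prev @ d_prev
    t_k = 1.0 + max(0.0, -(d_prev @ P_prev) / nd2)      # (6)
    lam = d_prev @ (P_prev + t_k * d_prev)              # (4)-(5); lam >= ||d_prev||^2 > 0
    return -P_k - (P_k @ (P_k + d_prev)) / lam * d_prev + (P_k @ P_k) / lam * d_prev

rng = np.random.default_rng(0)
P_prev, d_prev, P_k = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
d_k = direction(P_k, d_prev, P_prev)
print(P_k @ d_k, -(P_k @ P_k))   # first value should not exceed the second
```

This reflects the identity in the proof of Lemma 2: the cross terms cancel and leave $P_k^T d_k = -\|P_k\|^2 - (P_k^T d_{k-1})^2 / \lambda_k$.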

3. Theoretical Results

This section establishes the global convergence of the TTCD algorithm. Before proceeding, the following conditions are needed.
Condition 1.
$G \subseteq \mathbb{R}^n$ is a closed, non-empty and convex set.
Condition 2.
P is L-Lipschitz continuous and monotone on $\mathbb{R}^n$, i.e.,
$$(P(x_1) - P(x_2))^T (x_1 - x_2) \ge 0, \quad \text{for all } x_1, x_2 \in \mathbb{R}^n,$$
and
$$\|P(x_1) - P(x_2)\| \le L \|x_1 - x_2\|, \quad L > 0.$$
Lemma 1
([30]). If Conditions 1 and 2 hold, then $\{x_k\}$ and $\{\Gamma_k\}$ are bounded. Furthermore,
$$\lim_{k \to \infty} v_k \|d_k\| = 0. \tag{8}$$
Remark 2.
Lemma 1 implies that there exists a non-negative constant $\sigma$ such that $\|x_k\| \le \sigma$. Since P is continuous and $\{x_k\}$ is bounded, $\{P_k\}$ is also bounded; hence, there is a positive constant M such that $\|P_k\| \le M$.
Lemma 2.
The search direction given by (3) is sufficiently descent.
Proof. 
When $k = 0$, $P_0^T d_0 = -\|P_0\|^2$. If $k \ge 1$, then by (3) we have
$$\begin{aligned} P_k^T d_k &= -\|P_k\|^2 - \frac{P_k^T (P_k + d_{k-1}) \, P_k^T d_{k-1}}{\lambda_k} + \frac{\|P_k\|^2 \, P_k^T d_{k-1}}{\lambda_k} \\ &= -\|P_k\|^2 - \frac{\|P_k\|^2 \, P_k^T d_{k-1}}{\lambda_k} - \frac{(P_k^T d_{k-1})^2}{\lambda_k} + \frac{\|P_k\|^2 \, P_k^T d_{k-1}}{\lambda_k} \\ &= -\|P_k\|^2 - \frac{(P_k^T d_{k-1})^2}{\lambda_k} \le -\|P_k\|^2. \end{aligned}$$
This implies that the search direction obtained from (3) satisfies (7).    □
Lemma 3.
Assume that Conditions 1 and 2 hold, and let $\{\zeta_k\}$, $\{x_k\}$ and $\{d_k\}$ be defined by (28), (27) and (3), respectively. Then the following hold:
i. For every $k \ge 0$, there exists a step-length $v_k = \mu \rho^i$ satisfying (26) for some $i \in \mathbb{N} \cup \{0\}$.
ii. The step-length $v_k$ satisfies the relation
$$v_k > \min\left\{\mu, \ \frac{\rho \|P_k\|^2}{(L + q) \|d_k\|^2}\right\}. \tag{9}$$
Proof. 
i. Assume to the contrary that there is an integer $k_0 \ge 0$ such that (26) does not hold for any integer $i \ge 0$, that is,
$$-P(x_{k_0} + \mu \rho^i d_{k_0})^T d_{k_0} < q \mu \rho^i \|d_{k_0}\|^2. \tag{10}$$
By the continuity of P, letting $i \to \infty$ gives
$$-P(x_{k_0})^T d_{k_0} \le 0.$$
However, from (7), we have
$$-P(x_{k_0})^T d_{k_0} \ge \|P(x_{k_0})\|^2 > 0,$$
which is a contradiction to (10). Hence, for every $k \ge 0$ there exists a step-length $v_k$ satisfying (26).
ii. If $v_k \ne \mu$, then $v_k' = v_k / \rho$ does not satisfy (26), which means
$$-P(x_k + v_k' d_k)^T d_k < q v_k' \|d_k\|^2. \tag{11}$$
By (7), we have
$$\|P_k\|^2 \le -P_k^T d_k = \left(P(x_k + v_k' d_k) - P_k\right)^T d_k - P(x_k + v_k' d_k)^T d_k. \tag{12}$$
Substituting (11) into (12) and using the Lipschitz continuity of P together with the Cauchy–Schwarz inequality, we obtain
$$\|P_k\|^2 < \|P(x_k + v_k' d_k) - P_k\| \|d_k\| + q v_k' \|d_k\|^2 \le L v_k' \|d_k\|^2 + q v_k' \|d_k\|^2 = (L + q) v_k' \|d_k\|^2. \tag{13}$$
Substituting $v_k' = v_k / \rho$ into (13), we have
$$v_k > \frac{\rho \|P_k\|^2}{(L + q) \|d_k\|^2}.$$
Therefore, (9) holds.    □
Theorem 1.
If Conditions 1 and 2 hold and $\{x_k\}$ is a sequence obtained from (27), then
$$\liminf_{k \to \infty} \|P_k\| = 0. \tag{14}$$
Moreover, the sequence $\{x_k\}$ converges to a solution of (1).
Proof. 
Suppose by contradiction that
$$\liminf_{k \to \infty} \|P_k\| \ne 0.$$
Then, there exists $m_1 > 0$ such that for all $k \ge 0$,
$$\|P_k\| \ge m_1. \tag{15}$$
By the Cauchy–Schwarz inequality, from (7) and (15), for all $k \ge 0$,
$$\|d_k\| \ge \|P_k\| \ge m_1. \tag{16}$$
By Lemma 1, $\{x_k\}$ and $\{P_k\}$ are bounded for all $k \ge 0$. We now prove that $\{d_k\}$ given by (3) is bounded. For $k = 0$, we obtain
$$\|d_0\| = \|P_0\| \le M. \tag{17}$$
By (7), we can see that $P_0^T d_0 \le -\|P_0\|^2$.
For $k \ge 1$, using (3), the bound $\lambda_k \ge \|d_{k-1}\|^2$ from Remark 1, and the Cauchy–Schwarz and triangle inequalities, we obtain
$$\begin{aligned} \|d_k\| &= \left\| -P_k - \frac{P_k^T (P_k + d_{k-1})}{\lambda_k} d_{k-1} + \frac{\|P_k\|^2}{\lambda_k} d_{k-1} \right\| \\ &\le \|P_k\| + \frac{\|P_k\|^2 \|d_{k-1}\|}{\|d_{k-1}\|^2} + \frac{|P_k^T d_{k-1}| \, \|d_{k-1}\|}{\|d_{k-1}\|^2} + \frac{\|P_k\|^2 \|d_{k-1}\|}{\|d_{k-1}\|^2} \\ &\le \|P_k\| + \frac{\|P_k\|^2}{\|d_{k-1}\|} + \|P_k\| + \frac{\|P_k\|^2}{\|d_{k-1}\|} = 2\|P_k\| + \frac{2\|P_k\|^2}{\|d_{k-1}\|}. \end{aligned} \tag{18}$$
From Remark 2 and inequality (16), relation (18) reduces to
$$\|d_k\| \le 2M + \frac{2M^2}{m_1} = 2M\left(1 + \frac{M}{m_1}\right). \tag{19}$$
Let $M^* = 2M\left(1 + \frac{M}{m_1}\right)$; then, we have
$$\|d_k\| \le M^*, \tag{20}$$
where $M^*$ is a positive constant.
Multiplying both sides of (9) by $\|d_k\|$, we obtain
$$v_k \|d_k\| > \min\left\{\mu \|d_k\|, \ \frac{\rho \|P_k\|^2}{(L + q) \|d_k\|}\right\} \ge \min\left\{\mu m_1, \ \frac{\rho m_1^2}{(L + q) M^*}\right\} > 0. \tag{21}$$
Clearly, relation (21) contradicts (8); therefore,
$$\liminf_{k \to \infty} \|P_k\| = 0.$$
Since P is continuous and (14) holds, $\{x_k\}$ has an accumulation point $\bar{x}$ at which $P(\bar{x}) = 0$; this implies that $\bar{x}$ is a solution to (1). Since $\bar{x}$ is an accumulation point of $\{x_k\}$, by Lemma 1, $\{\|x_k - \bar{x}\|\}$ converges. Therefore, it can be deduced that $\{x_k\}$ converges to $\bar{x}$.    □

4. Results for the Experiments

In this section, we present the results of the numerical experiments. We demonstrate the efficiency and effectiveness of the TTCD algorithm in solving nonlinear monotone equations by comparing it with existing methods on benchmark test problems. The experiments were run on a PC with 4 GB of RAM and a 2.13 GHz CPU. The following initial points and dimensions are considered:
  • Five different initial points: $x_1 = (0.1, 0.1, \ldots, 0.1)^T$, $x_2 = (0.2, 0.2, \ldots, 0.2)^T$, $x_3 = (0.5, 0.5, \ldots, 0.5)^T$, $x_4 = (1.5, 1.5, \ldots, 1.5)^T$, $x_5 = (2, 2, \ldots, 2)^T$.
  • Five different dimensions: 1000 ; 5000 ; 10,000 ; 50,000 ; 100,000 .
  • Eight problems (see Table 1 for details).
We choose the following as control parameters: q = 0.0001 , ρ = 0.8 , μ = 1 , θ = 1.2 .
The proposed algorithm is compared with MHS established by Yan et al. [27], MCDPM by Aji et al. [32] and PRPFR proposed by Yuan et al. [33]. The stopping criterion for the iterations is $\|P_k\| \le 10^{-5}$, and (−) denotes the failure of a method to solve a problem. The comparison of the algorithms is based on the number of iterations (NOI), the number of function evaluations (FEV) and the CPU time (TIME).
Table 1. The test problems with their references are listed in the Table below.
S/N   Problem & Reference
1     Modified exponential function 2 [34]
2     Logarithmic function [34]
3     Problem 1 in [14]
4     Strictly convex function I [34]
5     Strictly convex function II [34]
6     Tridiagonal exponential function [35]
7     Nonsmooth function [36]
8     Problem 4 in [37]
The results for the experiments on the problems 1–8 in Table 1 are given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, respectively.
We plotted Figure 1, Figure 2 and Figure 3 using the performance profiles proposed in [38]. As shown in Figure 1, the TTCD algorithm attains the lowest NOI in almost 82% of the problems, compared to PRPFR, MHS and MCDPM with 20%, 18% and 30%, respectively. Moreover, Figure 2 shows that the TTCD algorithm attains the lowest FEV in approximately 67% of the problems, against PRPFR, MHS and MCDPM with 19%, 18% and 21%, respectively. Figure 3 shows that the TTCD algorithm converges to the solution of (1) in the least time in 60% of the problems, compared to PRPFR, MHS and MCDPM with approximately 5%, 14% and 24%, respectively. In view of these results, the TTCD algorithm is more effective and robust than MCDPM, MHS and PRPFR, and can therefore serve as an alternative to them.
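The Dolan–Moré performance profiles used above can be computed as follows. This is a minimal sketch (the function name `performance_profile` and the small illustrative cost matrix are ours, not data from the paper): for each solver, it reports the fraction of problems on which the solver's cost is within a factor $\tau$ of the best cost.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More profile. T[p, s] = cost (e.g. NOI) of solver s on problem p,
    with np.inf marking failure. Returns rho[s, i] = fraction of problems on
    which solver s is within factor taus[i] of the best solver."""
    best = T.min(axis=1, keepdims=True)      # best cost per problem
    ratios = T / best                        # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Illustrative costs for 4 problems x 2 solvers (inf = failure).
T = np.array([[10.0, 12.0], [8.0, 8.0], [20.0, 15.0], [np.inf, 30.0]])
rho = performance_profile(T, taus=[1.0, 2.0])
print(rho)
```

The value `rho[s, 0]` (at $\tau = 1$) is the fraction of problems on which solver $s$ is the winner or tied, which is the quantity quoted in the percentages above.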

5. Signal Reconstruction

The aim of this section is to show that the sparse signal is reconstructed effectively by our proposed algorithm. A problem of great interest to many researchers is solving ill-conditioned linear systems of equations with sparse solutions. Such problems can be cast as minimization problems by combining a sparse ($\ell_1$) regularization term and a quadratic ($\ell_2$) error term [18], i.e.,
$$\min_x \frac{1}{2} \|y - Vx\|_2^2 + \omega \|x\|_1, \tag{22}$$
where $x \in \mathbb{R}^n$, $\omega > 0$, $y \in \mathbb{R}^k$ is an observation and $V \in \mathbb{R}^{k \times n}$ ($k \ll n$) is a linear transformation. Problem (22) is an unconstrained optimization problem whose solution leads to the exact recovery of a sparse signal. Various approaches have been applied to solve (22); see [18,39,40,41,42,43] for more details. Furthermore, the gradient projection method (GPA) introduced by Figueiredo et al. [41] is well known to many researchers. The method is derived as follows. Given $x \in \mathbb{R}^n$, write
$$x = b - h, \quad b \ge 0, \quad h \ge 0,$$
where $b_i = (x_i)_+$, $h_i = (-x_i)_+$ for all $i = 1, 2, \ldots, n$, and $(\cdot)_+ = \max\{0, \cdot\}$. Using $\|x\|_1 = e_n^T b + e_n^T h$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$, problem (22) can be expressed as
$$\min_{b, h} \frac{1}{2} \|y - V(b - h)\|_2^2 + \omega e_n^T b + \omega e_n^T h, \quad b \ge 0, \ h \ge 0. \tag{23}$$
Problem (23) is a bound-constrained quadratic problem and can be written as
$$\min_z \frac{1}{2} z^T D z + c^T z, \quad \text{such that } z \ge 0, \tag{24}$$
where
$$z = \begin{pmatrix} b \\ h \end{pmatrix}, \quad c = \omega e_{2n} + \begin{pmatrix} -u_1 \\ u_1 \end{pmatrix}, \quad u_1 = V^T y \quad \text{and} \quad D = \begin{pmatrix} V^T V & -V^T V \\ -V^T V & V^T V \end{pmatrix}.$$
Since (24) is a convex quadratic problem, the matrix D is positive semi-definite. Following Xiao et al. [44], (24) can be transformed into
$$F(z) = \min\{z, Dz + c\} = 0, \tag{25}$$
where F is a vector-valued function and the "min" is interpreted as a component-wise minimum. It was proved in [44,45] that F(z) is continuous and monotone. Therefore, problem (22) can be converted into problem (1) and, thus, Algorithm 1 (TTCD) can be applied to solve it.
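The construction of $D$, $c$ and $F(z)$ above can be sketched numerically. The following is an illustrative assembly for small random data (the dimensions, seed and variable names are ours, chosen only for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, omega = 8, 32, 0.01
V = rng.normal(size=(k, n))     # stand-in for the sensing matrix
y = rng.normal(size=k)          # stand-in for the observation

VtV = V.T @ V
D = np.block([[VtV, -VtV], [-VtV, VtV]])               # D in (24)
u1 = V.T @ y
c = omega * np.ones(2 * n) + np.concatenate([-u1, u1]) # c in (24)

def F(z):
    """F(z) = min(z, Dz + c), component-wise; solving F(z) = 0 is (25)."""
    return np.minimum(z, D @ z + c)

z = np.abs(rng.normal(size=2 * n))
print(F(z).shape)
```

Note that $z^T D z = \|V(b - h)\|_2^2 \ge 0$, so $D$ is positive semi-definite by construction, as stated in the text.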
Algorithm 1: A modified three-term conjugate descent (TTCD) derivative-free algorithm
Input. Choose $x_0 \in \mathbb{R}^n$, $\theta, \rho, \mu, q > 0$ and $\varepsilon > 0$. Set $k = 0$.
Step 1. If $\|P(x_k)\| \le \varepsilon$, then stop. Else, go to Step 2.
Step 2. Compute $d_k$ using (3)–(6).
Step 3. Compute the step-length $v_k = \mu \rho^i$, where $i = 0, 1, \ldots$ is the least non-negative integer satisfying the following inequality:
$$-P(x_k + v_k d_k)^T d_k \ge q v_k \|d_k\|^2. \tag{26}$$
Step 4. Let $\Gamma_k = x_k + v_k d_k$. If $\Gamma_k \in G$ and $\|P(\Gamma_k)\| \le \varepsilon$, terminate. Else, compute
$$x_{k+1} := H_G\!\left[x_k - \theta \zeta_k P(\Gamma_k)\right], \tag{27}$$
where
$$\zeta_k := \frac{P(\Gamma_k)^T (x_k - \Gamma_k)}{\|P(\Gamma_k)\|^2}. \tag{28}$$
Step 5. Set $k = k + 1$ and go to Step 1.
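Algorithm 1 can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the helper names are ours, $G$ is supplied through a generic projection operator, the parameter values follow Section 4, and the test problem $P(x) = e^x - 1$ (monotone, with unique zero $x^* = 0$ on $G = \mathbb{R}^n_+$) is our own choice.

```python
import numpy as np

def ttcd(P, x0, project, q=1e-4, rho=0.8, mu=1.0, theta=1.2, eps=1e-6, maxit=500):
    """Sketch of Algorithm 1 (TTCD); `project` plays the role of H_G[.]."""
    x = x0.copy()
    Pk = P(x)
    d_prev = P_prev = None
    for _ in range(maxit):
        if np.linalg.norm(Pk) <= eps:                      # Step 1
            break
        if d_prev is None:                                 # Step 2: direction (3)-(6)
            d = -Pk
        else:
            nd2 = d_prev @ d_prev
            t_k = 1.0 + max(0.0, -(d_prev @ P_prev) / nd2)
            lam = d_prev @ (P_prev + t_k * d_prev)         # lam >= ||d_prev||^2 > 0
            d = -Pk - (Pk @ (Pk + d_prev)) / lam * d_prev + (Pk @ Pk) / lam * d_prev
        v = mu                                             # Step 3: line search (26)
        while -(P(x + v * d) @ d) < q * v * (d @ d):
            v *= rho
        gamma = x + v * d                                  # Step 4: projection step
        Pg = P(gamma)
        if np.allclose(project(gamma), gamma) and np.linalg.norm(Pg) <= eps:
            x, Pk = gamma, Pg
            break
        zeta = (Pg @ (x - gamma)) / (Pg @ Pg)              # (28)
        P_prev, d_prev = Pk, d
        x = project(x - theta * zeta * Pg)                 # (27)
        Pk = P(x)
    return x

P = lambda x: np.exp(x) - 1.0   # monotone; unique zero x* = 0 on G = R^n_+
x = ttcd(P, x0=np.full(5, 0.5), project=lambda z: np.maximum(z, 0.0))
print(np.linalg.norm(P(x)))
```

The backtracking loop terminates because, as $v \to 0$, the left-hand side of (26) tends to $-P(x_k)^T d_k \ge \|P_k\|^2 > 0$ while the right-hand side tends to 0.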
The TTCD method was compared with the PCG method of Liu et al. [46] and with Algorithms 2.1a and 2.1b presented in [47]. The aim of the experiments is to reconstruct a sparse signal of length n from k observations. Recovery quality is measured by the mean squared error (MSE), defined by
$$\mathrm{MSE} := \frac{1}{n} \|x_j - x^*\|^2,$$
where $x_j$ and $x^*$ are the original and the recovered signals, respectively. The experiment uses a signal with $2^7$ non-zero elements, $n = 2^{11}$ and $k = 2^9$, with the positions of the non-zero entries chosen at random. The noise is Gaussian with mean 0 and variance $10^{-3}$. The matrix V in (22) is generated in MATLAB with the randn(k, n) tool. We define the objective function by
$$f(x) = \frac{1}{2} \|Vx - y\|_2^2 + \omega \|x\|_1.$$
For a fair comparison, each method uses the same initial point and the same continuation strategy on the parameter ω, as proposed in [46]. All algorithms start from $x_0 = V^T y$, and
$$\frac{|f_i - f_{i-1}|}{|f_{i-1}|} < 10^{-5}, \quad i = 1, 2, 3, \ldots,$$
is the stopping condition, where $f_i$ is the objective value at $x_i$.
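The MSE and the relative-change stopping rule above can be sketched as follows (the function names `mse` and `stop` are ours, introduced only for illustration):

```python
import numpy as np

def mse(x_rec, x_orig):
    """Mean squared error between a recovered signal and the original one."""
    return np.sum((x_rec - x_orig) ** 2) / x_orig.size

def stop(f_hist, tol=1e-5):
    """Relative-change stopping rule |f_i - f_{i-1}| / |f_{i-1}| < tol on
    successive objective values."""
    return len(f_hist) >= 2 and abs(f_hist[-1] - f_hist[-2]) < tol * abs(f_hist[-2])

print(mse(np.zeros(4), np.array([0.0, 0.0, 0.0, 2.0])))  # 1.0
```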
From Figure 4, one can see that the TTCD algorithm was able to recover the noisy signal with 131 iterations in 2.31 s, PCG with 150 iterations in 2.72 s, Algorithm 2.1a with 176 iterations in 3.11 s and Algorithm 2.1b with 136 iterations in 2.41 s. However, Algorithm 2.1a has the least MSE, followed by TTCD, Algorithm 2.1b and PCG, respectively.
The results reported in Figure 4 can be found in Table 10, serial number 1.
Moreover, Figure 5 shows the convergence behavior of TTCD, PCG, Algorithm 2.1a and Algorithm 2.1b.
To ascertain the efficiency and performance of the TTCD algorithm, the experiment was repeated 10 times for each algorithm and the averages are reported in Table 10. On average, TTCD recovered the noisy signal with the fewest iterations and the least CPU time. Furthermore, the TTCD algorithm has the lowest average MSE, followed by Algorithm 2.1a, Algorithm 2.1b and PCG. Hence, TTCD can be regarded as a better alternative to Algorithm 2.1a, Algorithm 2.1b and PCG for solving signal reconstruction problems.

6. Conclusions

This research extended an unconstrained optimization approach to constrained nonlinear monotone equations by introducing a modified three-term CD derivative-free method combined with the projection technique. The major advantage of the proposed method is that it requires neither the computation of the Jacobian matrix at each iteration nor the solution of any linear systems; therefore, it is capable of solving large-scale constrained nonlinear monotone equations. Independently of the line search, the algorithm generates a bounded descent search direction. Under appropriate conditions, the method is shown to be globally convergent, and it is numerically robust and effective. Finally, the capacity of the method was tested on nonlinear equations equivalent to $\ell_1$-norm regularized minimization problems. The results of the signal reconstruction experiments indicate that TTCD is faster than Algorithm 2.1a, Algorithm 2.1b and PCG and, on average, has the lowest MSE among them.

Author Contributions

Conceptualization, A.Y.; Methodology, A.Y.; Software, A.Y.; Validation, N.H.M. and M.A.; Formal analysis, A.Y. and N.H.M.; Investigation, N.H.M. and M.A.; Resources, M.A.; Data curation, A.Y.; Writing—original draft, A.Y.; Writing—review & editing, N.H.M.; Visualization, M.A.; Supervision, N.H.M.; Project administration, N.H.M.; Funding acquisition, M.A. All authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

The first author acknowledges the financial support of the Tertiary Education Trust Fund (TETFund) of Nigeria.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Wang, C.; Wang, Y. A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 2009, 44, 283–296. [Google Scholar] [CrossRef]
  2. Chen, H.; Wang, Y.; Zhao, H. Finite convergence of a projected proximal point algorithm for the generalized variational inequalities. Oper. Res. Lett. 2012, 44, 303–305. [Google Scholar] [CrossRef]
  3. Dai, Z.; Dong, X.; Kang, J.; Hong, L. Forecasting stock market returns: New technical indicators and two-step economic constraint method. N. Am. J. Econ. Financ. 2020, 53, 101216. [Google Scholar] [CrossRef]
  4. Iusem, N.; Solodov, V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278. [Google Scholar] [CrossRef]
  5. Dirkse, S.; Ferris, M. A collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 1995, 5, 319–345. [Google Scholar] [CrossRef]
  6. Wang, Y.; Qi, L.; Luo, S.; Xu, Y. An alternative steepest direction method for the optimization in evaluating geometric discord. Pac. J. Optim. 2014, 10, 137–149. [Google Scholar]
  7. Fukushima, M. Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 1992, 53, 99–110. [Google Scholar] [CrossRef]
  8. Zhao, Y.; Li, D. Monotonicity of fixed point and normal mappings associated with variational inequality and its application. SIAM J. Optim. 2001, 11, 962–973. [Google Scholar] [CrossRef]
  9. Dennis, J.E.; More, J.J. A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 1974, 28, 549–560. [Google Scholar] [CrossRef]
  10. Dennis, E.J.J.; Moré, J.J. Quasi-Newton methods, motivation and theory. SIAM Rev. 1977, 19, 46–89. [Google Scholar] [CrossRef]
  11. Ioannis, K.A. On a nonsmooth version of Newton’s method using locally lipschitzian operators. Rend. Del Circ. Mat. Palermo 2007, 56, 5–16. [Google Scholar]
  12. Guanglu, Z.; Chuan, T.K. Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 2005, 125, 205–221. [Google Scholar]
  13. Mohammad, H.; Waziri, M.Y. On Broyden-like update via some quadratures for solving nonlinear systems of equations. Turk. J. Math. 2015, 39, 335–345. [Google Scholar] [CrossRef]
  14. Zhou, W.J.; Li, H.D. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240. [Google Scholar] [CrossRef]
  15. Donghui, L.; Fukushima, M. A globally and superlinearly convergent gauss–Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 1999, 37, 152–172. [Google Scholar]
  16. Waziri, M.Y.; Yusuf, A.; Abubakar, A.B. Improved conjugate gradient method for nonlinear system of equations. Comput. Appl. Math. 2020, 39, 1–17. [Google Scholar] [CrossRef]
  17. Yusuf, A.; Adamu, A.K.; Lawal, L.; Ibrahim, A.K. A Hybrid Conjugate Gradient Algorithm for Nonlinear System of Equations through Conjugacy Condition. In Proceedings of the Artificial Intelligence and Applications, Wuhan, China, 18–20 November 2023. [Google Scholar]
  18. Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M. An Efficient Conjugate Gradient Method for Convex Constrained Monotone Nonlinear Equations with Applications. Mathematics 2019, 7, 767. [Google Scholar] [CrossRef]
  19. Zhifeng, D.; Huan, Z. A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations. Mathematics 2020, 8, 168. [Google Scholar] [CrossRef]
  20. Awwal, A.M.; Ishaku, A.; Halilu, A.S.; Stanimirović, P.S.; Pakkaranang, N.; Panyanak, B. Descent Derivative-Free Method Involving Symmetric Rank-One Update for Solving Convex Constrained Nonlinear Monotone Equations and Application to Image Recovery. Symmetry 2022, 14, 2375. [Google Scholar] [CrossRef]
  21. Abubakar, A.B.; Kumam, P.; Awwal, A.M.; Thounthong, P. A modified self-adaptive conjugate gradient method for solving convex constrained monotone nonlinear equations for signal recovery problems. Mathematics 2019, 7, 693. [Google Scholar] [CrossRef]
  22. Sabi’u, J.; Muangchoo, K.; Shah, A.; Abubakar, A.B.; Jolaoso, L.O. A modified PRP-CG type derivative-free algorithm with optimal choices for solving large-scale nonlinear symmetric equations. Symmetry 2021, 13, 234. [Google Scholar] [CrossRef]
  23. Awwal, A.M.; Wang, L.; Kumam, P.; Mohammad, H.; Watthayu, W. A projection Hestenes–Stiefel method with spectral parameter for nonlinear monotone equations and signal processing. Math. Comput. Appl. 2020, 25, 27. [Google Scholar] [CrossRef]
  24. Sulaiman, I.M.; Awwal, A.M.; Malik, M.; Pakkaranang, N.; Panyanak, B. A derivative-free mzprp projection method for convex constrained nonlinear equations and its application in compressive sensing. Mathematics 2022, 10, 2884. [Google Scholar] [CrossRef]
  25. Sabi’u, J.; Aremu, K.O.; Althobaiti, A.; Shah, A. Scaled three-term conjugate gradient methods for solving monotone equations with application. Symmetry 2022, 14, 936. [Google Scholar] [CrossRef]
  26. Fletcher, R. Practical methods of optimization. In Unconstrained Optimization, 1st ed.; Wiley: New York, NY, USA, 1987; Volume 1. [Google Scholar]
  27. Yan, Q.R.; Peng, X.Z.; Li, D.H. A globally convergent derivative-free method for solving large-scale nonlinear monotone equations. J. Comput. Appl. Math. 2010, 234, 649–657. [Google Scholar] [CrossRef]
  28. Solodov, M.; Svaiter, B. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Berlin/Heidelberg, Germany, 1998; pp. 355–369. [Google Scholar]
  29. Koorapetse, M. A new three-term conjugate gradient-based projection method for solving large-scale nonlinear monotone equations. Math. Model. Anal. 2019, 24, 550–563. [Google Scholar] [CrossRef]
  30. Abubakar, A.B.; Kumam, P.; Ibrahim, A.H.; Chaipunya, P.; Rano, S.A. New hybrid three-term spectral-conjugate gradient method for finding solutions of nonlinear monotone operator equations with applications. Math. Comput. Simul. 2021, 201, 670–683. [Google Scholar] [CrossRef]
  31. Jie, G.; Zhong, W. A new three-term conjugate gradient algorithm with modified gradient-differences for solving unconstrained optimization problems. Methods 2023, 2, 12. [Google Scholar]
  32. Aji, S.; Kumam, P.; Siricharoen, P.; Abubakar, A.B.; Yahaya, M.M. A Modified Conjugate Descent Projection Method for Monotone Nonlinear Equations and Image Restoration. IEEE Access 2020, 8, 158656–158665. [Google Scholar] [CrossRef]
  33. Yuan, G.; Li, T.; Hu, W. A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems. Appl. Numer. Math. 2020, 147, 129–141. [Google Scholar] [CrossRef]
  34. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
  35. Bing, Y.; Lin, G. An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1991, 1, 206–221. [Google Scholar] [CrossRef]
  36. Abubakar, A.B.; Kumam, P.; Mohammad, H. A note on the spectral gradient projection method for nonlinear monotone equations with applications. Comput. Appl. Math. 2020, 39, 129. [Google Scholar] [CrossRef]
  37. Ding, Y.; Xiao, Y.H.; Li, J. A class of conjugate gradient methods for convex constrained monotone equations. Optimization 2017, 66, 2309–2328. [Google Scholar] [CrossRef]
  38. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
  39. Figueiredo, M.A.T.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916. [Google Scholar] [CrossRef]
  40. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  41. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  42. Van Den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [Google Scholar] [CrossRef]
  43. Birgin, E.G.; Martínez, J.M.; Raydan, M. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 2000, 10, 1196–1211. [Google Scholar] [CrossRef]
  44. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for 1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
  45. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71. [Google Scholar] [CrossRef]
  46. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  47. Gao, P.; He, C.; Liu, Y. An adaptive family of projection methods for constrained monotone nonlinear equations with applications. Appl. Math. Comput. 2019, 359, 1–16. [Google Scholar] [CrossRef]
Figure 1. Performance profile for Dolan and Moré in terms of number of iterations.
Figure 2. Performance profile for Dolan and Moré in terms of number of function evaluations.
Figure 3. Performance profile for Dolan and Moré in terms of time(s).
Figure 4. A sparse signal is recovered. The original signal (First plot), measurement (Second plot) and reconstructed signals by TTCD (Third plot), PCG (Fourth plot), Algorithm 2.1a (Fifth plot) and Algorithm 2.1b (Sixth plot) are shown in order from top to bottom.
Figure 5. Comparison results of TTCD, PCG and Algorithm 2.1a,b. From left to right: the changed trend of MSE goes along with the number of iterations or CPU time in seconds, and the changed trend of the objective function values accompanies the number of iterations or CPU time in seconds.
Table 2. Numerical results for the problem with serial number 1 in Table 1.
[Table 2 lists, for each dimension (1000; 5000; 10,000; 50,000; 100,000) and each initial point x1–x5, the NOI, NFE, TIME and final residual norm of TTCD, PRPFR, MHS and MCDPM; the individual numeric entries are garbled in this extraction and are omitted.]
Table 3. Numerical results for the problem with serial number 2 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
x 1 5170.046882.69 × 10 6 411220.095868.12 × 10 6 10290.026156.78 × 10 6 8230.020718.24 × 10 6
1000 x 1 5170.09572.69 × 10 6 411220.0865918.12 × 10 6 10290.0354756.78 × 10 6 8230.0206118.24 × 10 7
x 2 5170.0182143.77 × 10 6 421250.0887168.01 × 10 6 10290.024798.75 × 10 6 8230.0202546.96 × 10 7
x 3 5160.018022.52 × 10 6 351040.075539.94 × 10 6 10280.025614.63 × 10 6 8240.0255113.73 × 10 7
x 4 5160.0179933.74 × 10 6 361060.0791688.80 × 10 6 10280.0265815.12 × 10 6 10300.0317194.77 × 10 7
x 5 5160.0171774.40 × 10 6 411200.0808829.67 × 10 6 12330.0315714.69 × 10 6 10300.0254754.77 × 10 7
5000 x 1 5170.0517855.33 × 10 6 431293.74119.94 × 10 6 11321.40184.20 × 10 6 8240.802663.29 × 10 7
x 2 5170.444547.48 × 10 6 441321.29919.84 × 10 6 11321.6145.45 × 10 6 8240.832062.91 × 10 7
x 3 5160.50184.77 × 10 6 381121.129.91 × 10 6 10280.732269.91 × 10 6 8240.938383.52 × 10 7
x 4 5160.626366.82 × 10 6 391163.08349.78 × 10 6 10290.95789.47 × 10 6 11321.03695.29 × 10 7
x 5 5160.678117.11 × 10 6 441291.36129.31 × 10 6 12341.23897.14 × 10 6 11320.185475.29 × 10 7
10,000 x 1 390.319476.46 × 10 6 451341.64648.54 × 10 6 11320.970575.94 × 10 6 8240.927054.58 × 10 7
x 2 6190.901931.78 × 10 6 461372.65078.46 × 10 6 11321.20037.69 × 10 6 8240.990624.07 × 10 7
x 3 5160.740916.60 × 10 6 401183.57428.06 × 10 6 10291.2869.76 × 10 6 8241.05363.49 × 10 7
x 4 5160.351929.38 × 10 6 411211.93188.57 × 10 6 11311.1265.46 × 10 6 11321.12067.47 × 10 7
x 5 5160.61339.62 × 10 6 451321.6919.98 × 10 6 13361.12174.03 × 10 6 11321.12237.47 × 10 7
50,000 x 1 5170.93714.25 × 10 7 481436.24268.38 × 10 6 11331.66649.28 × 10 6 9261.32625.06 × 10 7
x 2 6190.869413.93 × 10 6 491464.77698.30 × 10 6 12351.65824.81 × 10 6 8241.39169.03 × 10 7
x 3 5171.0754.28 × 10 7 421254.86759.85 × 10 6 11311.69668.69 × 10 6 8241.263.47 × 10 7
x 4 5170.963446.06 × 10 7 441304.28128.54 × 10 6 11321.56938.67 × 10 6 11331.81483.50 × 10 7
x 5 5171.02746.12 × 10 7 481414.99179.78 × 10 6 13362.02018.98 × 10 6 11332.12123.50 × 10 7
100,000 x 1 5171.2895.99 × 10 7 491466.89589.01 × 10 6 12352.44725.25 × 10 6 9261.90557.14 × 10 7
x 2 6191.39075.55 × 10 6 501495.88928.92 × 10 6 12352.46886.80 × 10 6 9262.04986.38 × 10 7
x 3 5171.25536.03 × 10 7 441305.9878.47 × 10 6 11322.52038.60 × 10 6 8241.62863.47 × 10 7
x 4 5171.56488.53 × 10 7 451336.28889.20 × 10 6 12342.60344.91 × 10 6 11332.73624.94 × 10 7
x 5 5171.70778.60 × 10 7 491456.74059.98 × 10 6 13372.68978.88 × 10 6 11332.7564.94 × 10 7
Table 4. Numerical results for the problem with serial number 3 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 4130.0138927.77 × 10 6 471410.0922298.32 × 10 6 10310.0232589.32 × 10 6 150.0071930
x 2 5150.0342162.72 × 10 6 491470.0897839.57 × 10 6 11330.0245577.35 × 10 6 150.0072930
x 3 4120.0137682.51 × 10 6 521570.0982419.73 × 10 6 12360.0294414.66 × 10 6 150.0070140
x 4 11350.0254436.74 × 10 6 561680.102048.28 × 10 6 11330.0251797.77 × 10 6 150.0065280
x 5 11350.024956.74 × 10 6 561680.0997418.28 × 10 6 13390.0338716.13 × 10 6 150.008670
5000 x 1 5150.0424583.48 × 10 6 501504.44358.16 × 10 6 11330.806068.34 × 10 6 150.177260
x 2 5150.043736.08 × 10 6 521561.3919.40 × 10 6 12360.936364.60 × 10 6 150.141270
x 3 4120.107845.62 × 10 6 551661.56969.55 × 10 6 12371.41077.29 × 10 6 150.131050
x 4 12370.272145.43 × 10 6 591771.32278.13 × 10 6 12360.974164.87 × 10 6 150.247730
x 5 12370.138435.43 × 10 6 591773.10868.13 × 10 6 13400.993929.60 × 10 6 150.141510
10,000 x 1 5150.217524.92 × 10 6 511533.04968.77 × 10 6 11341.0178.26 × 10 6 150.153590
x 2 5150.109928.60 × 10 6 531601.61679.59 × 10 6 12360.91336.51 × 10 6 150.142560
x 3 4120.223897.95 × 10 6 571713.94758.21 × 10 6 13391.18914.12 × 10 6 150.128530
x 4 12370.206617.67 × 10 6 601802.92488.73 × 10 6 12361.38156.88 × 10 6 150.180350
x 5 12370.124267.67 × 10 6 601802.3378.73 × 10 6 14420.855815.43 × 10 6 150.0961990
50,000 x 1 5160.409742.20 × 10 6 541626.03998.61 × 10 6 12360.966637.38 × 10 6 150.131930
x 2 5160.324483.85 × 10 6 561685.02299.91 × 10 6 13391.03614.08 × 10 6 150.222660
x 3 4130.27273.56 × 10 6 601805.77028.06 × 10 6 13391.32919.22 × 10 6 150.203160
x 4 13400.884173.98 × 10 6 631895.81268.57 × 10 6 13391.9284.31 × 10 6 150.223150
x 5 13401.27913.98 × 10 6 631895.69728.57 × 10 6 14432.05368.50 × 10 6 150.184850
100,000 x 1 5162.60193.11 × 10 6 551657.24949.26 × 10 6 12372.59017.31 × 10 6 150.308370
x 2 5160.506775.44 × 10 6 581746.2568.10 × 10 6 13392.19325.77 × 10 6 150.24160
x 3 4130.820865.03 × 10 6 611836.90188.66 × 10 6 13402.00529.13 × 10 6 150.225730
x 4 13401.80955.63 × 10 6 641927.38019.21 × 10 6 13392.32596.10 × 10 6 150.295030
x 5 13401.60285.63 × 10 6 641927.03499.21 × 10 6 15452.88824.81 × 10 6 150.270450
Table 5. Numerical results for the problem with serial number 4 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 120.0149770120.0085530120.008269020810.0635659.90 × 10 7
x 2 120.0261760120.0086590120.0082840130.007560
x 3 120.0104770120.0079170120.0097840130.0087260
x 4 110.0096760140.0085730110.00731409260.0289046.19 × 10 7
x 5 110.0097580140.0079230110.00778709260.0262486.19 × 10 7
5000 x 1 120.154030120.244110120.079542022882.27237.13 × 10 7
x 2 120.07790120.307310120.0933450130.126920
x 3 120.125470120.258370120.124910130.0928580
x 4 110.0224940140.260940110.01749609271.31012.76 × 10 7
x 5 110.0363420140.355340110.01299809271.01752.76 × 10 7
10,000 x 1 120.100490120.270880120.18825022891.31076.62 × 10 7
x 2 120.141110120.340030120.119430130.151950
x 3 120.114870120.561460120.132150130.136220
x 4 110.0317350140.333980110.01728909271.01153.91 × 10 7
x 5 110.0335180140.402470110.03709809271.15613.91 × 10 7
50,000 x 1 120.169770120.435380120.23996023934.23096.81 × 10 7
x 2 120.234950120.441270120.177260130.238540
x 3 120.268510120.297470120.239860130.143830
x 4 110.065720140.291840110.05629809271.65388.74 × 10 7
x 5 110.095060140.468690110.08344809271.59798.74 × 10 7
100,000 x 1 120.452740120.541850120.28671023935.66239.63 × 10 7
x 2 120.330820120.419990120.313840130.299810
x 3 120.331480120.438230120.35070130.244540
x 4 110.147230140.426490110.15859010292.45836.18 × 10 7
x 5 110.143680140.463970110.095907010292.76126.18 × 10 7
Table 6. Numerical results for the problem with serial number 5 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 140.019060461390.0763679.75 × 10 6 10310.0220367.77 × 10 6 21850.131867.15 × 10 7
x 2 140.0108460491470.0776978.44 × 10 6 11330.0220875.05 × 10 6 21850.0345487.15 × 10 7
x 3 280.0388470511540.0789879.57 × 10 6 11330.0211674.15 × 10 6 160.0167590.00 × 10
x 4 170.0147360531590.0809369.18 × 10 6 150.0077630150.0200920.00 × 10
x 5 190.0099230521560.0806069.60 × 10 6 471430.0720799.29 × 10 6 150.0130082.22 × 10 16
5000 x 1 140.100860491482.62319.57 × 10 6 11330.47426.95 × 10 6 22890.791187.38 × 10 7
x 2 140.102320521562.94978.29 × 10 6 11340.842497.90 × 10 6 22890.66617.38 × 10 7
x 3 280.275520541621.54439.89 × 10 6 11331.14569.28 × 10 6 160.242110.00 × 10
x 4 170.29150561682.67529.01 × 10 6 150.156030150.0266220.00 × 10
x 5 190.214870551651.4419.42 × 10 6 501521.64098.44 × 10 6 150.070962.22 × 10 16
10,000 x 1 140.165750511533.19458.23 × 10 6 11331.07459.83 × 10 6 23920.204737.31 × 10 7
x 2 140.14880531591.8948.91 × 10 6 12360.612854.47 × 10 6 23920.223697.31 × 10 7
x 3 280.335830561681.77968.08 × 10 6 11341.1549.19 × 10 6 160.260870.00 × 10
x 4 170.173160571713.78379.68 × 10 6 150.0408280150.0543970.00 × 10
x 5 190.270090561693.18479.62 × 10 6 511551.99678.85 × 10 6 150.225792.22 × 10 16
50,000 x 1 140.197450541624.86438.08 × 10 6 12361.22126.16 × 10 6 24961.32667.52 × 10 7
x 2 140.151770561684.45898.75 × 10 6 12361.41279.99 × 10 6 24960.989727.52 × 10 7
x 3 280.381920581755.00839.91 × 10 6 12361.63278.22 × 10 6 160.347230.00 × 10
x 4 170.340860601804.90779.50 × 10 6 150.182480150.0936250.00 × 10
x 5 190.370250591775.09559.94 × 10 6 541645.39718.04 × 10 6 150.0959852.22 × 10 16
100,000 x 1 140.371030551656.38168.68 × 10 6 12361.62378.71 × 10 6 24972.28596.99 × 10 7
x 2 140.228230571715.71329.40 × 10 6 12371.83719.89 × 10 6 24971.69166.99 × 10 7
x 3 280.549860601805.80328.52 × 10 6 12372.04158.13 × 10 6 150.227412.22 × 10 16
x 4 170.389990611846.77079.70 × 10 6 150.35288024974.09116.99 × 10 7
x 5 190.545380611835.38868.12 × 10 6 551677.22888.43 × 10 6 24973.57396.99 × 10 7
Table 7. Numerical results for the problem with serial number 6 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 20610.100166.95 × 10 6 591770.139488.09 × 10 6 281130.0615626.96 × 10 7
x 2 20610.0440034.82 × 10 6 581750.123079.72 × 10 6 100140072.17892.0736
x 3 20630.0787286.21 × 10 6 581740.151259.02 × 10 6 100140072.22572.2094
x 4 26840.0951047.13 × 10 6 561680.128818.57 × 10 6 28850.0571958.82 × 10 6 13390.0306153.53 × 10 7
x 5 431370.12559.23 × 10 6 541620.121078.75 × 10 6 491490.0850258.88 × 10 6 391560.0787928.55 × 10 7
5000 x 1 23721.83656.89 × 10 6 611844.47729.93 × 10 6 46714077.96219.99 × 10 6 1001401215.15721.8695
x 2 24742.04287.36 × 10 6 611841.60819.55 × 10 6 100140078.49215.2384
x 3 22680.903215.53 × 10 6 611831.81188.86 × 10 6 100140075.52235.8303
x 4 28901.36336.81 × 10 6 591771.99348.42 × 10 6 31940.141818.73 × 10 6 16480.557794.96 × 10 7
x 5 31940.141818.73 × 10 6 571712.32798.60 × 10 6 541640.939088.26 × 10 6 411641.27639.01 × 10 7
10,000 x 1 24750.18599.83 × 10 6 631892.27628.54 × 10 6 1001401217.05122.9693
x 2 25770.964365.38 × 10 6 631892.46278.22 × 10 6 1001400713.2377.5161
x 3 22691.62728.21 × 10 6 621862.28219.52 × 10 6 1001400713.17328.3369
x 4 29932.73526.67 × 10 6 601802.39169.05 × 10 6 32980.250869.07 × 10 6 331220.952428.42 × 10 7
x 5 581832.96379.49 × 10 6 581742.81249.24 × 10 6 561701.11338.88 × 10 6 421681.95679.88 × 10 7
50,000 x 1 24753.0449.79 × 10 6 661987.67428.38 × 10 6 832497.20839.80 × 10 6 1001401349.09952.3302
x 2 752349.37158.53 × 10 6 661984.47658.06 × 10 6 1001400746.118116.9985
x 3 23712.66014.91 × 10 6 651955.57879.35 × 10 6 1001400745.850118.8076
x 4 31993.41979.01 × 10 6 631895.80948.89 × 10 6 351071.23428.98 × 10 6 1001400446.16481.9526
x 5 782448.80839.48 × 10 6 611836.22069.07 × 10 6 611853.64879.97 × 10 6 471873.46188.88 × 10 7
100,000 x 1 25773.85543.88 × 10 6 672019.31439.01 × 10 6 862579.93018.88 × 10 6 1001401389.28035.5249
x 2 672018.44528.67 × 10 6 658198048.18379.99 × 10 6 1001400787.576724.0734
x 3 23711.62176.80 × 10 6 661998.07069.54 × 10 6 1001400787.077426.6312
x 4 321023.57719.87 × 10 6 641928.11879.55 × 10 6 371122.53426.74 × 10 6 1001400487.31232.9699
x 5 7623910.72249.84 × 10 6 621867.89899.75 × 10 6 641946.24058.21 × 10 6 491966.07368.31 × 10 7
Table 8. Numerical results for the problem with serial number 7 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 9300.0417243.51 × 10 6 7180.0537599.06 × 10 6 13390.0422777.68 × 10 6 21850.0565567.81 × 10 7
x 2 9300.0322272.07 × 10 6 7180.0180749.06 × 10 6 13390.0367897.38 × 10 6 22880.0538548.33 × 10 7
x 3 7250.0258973.53 × 10 6 7180.0179569.06 × 10 6 13390.0350826.50 × 10 6 22880.0569238.64 × 10 7
x 4 7240.0359283.30 × 10 6 7180.0172679.06 × 10 6 12370.0450488.93 × 10 6 22880.0575597.12 × 10 7
x 5 5190.0747495.32 × 10 6 7180.017429.06 × 10 6 12360.0368637.52 × 10 6 22880.062167.12 × 10 7
5000 x 1 6210.0738937.59 × 10 6 6160.742036.77 × 10 7 14421.59854.81 × 10 6 22891.7858.05 × 10 7
x 2 6210.0675127.61 × 10 6 6160.756256.77 × 10 7 14421.3994.63 × 10 6 23921.6688.58 × 10 7
x 3 6210.0674767.57 × 10 6 6161.13596.77 × 10 7 14421.20574.08 × 10 6 23920.960438.91 × 10 7
x 4 6210.0711185.83 × 10 6 6160.406136.77 × 10 7 13390.804648.00 × 10 6 23921.33927.33 × 10 7
x 5 6210.0703433.94 × 10 6 6160.789326.77 × 10 7 13391.33594.72 × 10 6 23921.74177.33 × 10 7
10,000 x 1 7240.133633.32 × 10 6 7190.783235.87 × 10 6 14421.53846.81 × 10 6 23921.73327.97 × 10 7
x 2 7240.140053.43 × 10 6 7190.950835.87 × 10 6 14421.51116.55 × 10 6 23932.247.98 × 10 7
x 3 6210.114468.67 × 10 6 7191.53815.87 × 10 6 14421.40815.77 × 10 6 23931.46258.28 × 10 7
x 4 5180.111369.31 × 10 6 7191.02125.87 × 10 6 13401.0897.92 × 10 6 23931.20646.81 × 10 7
x 5 5180.136666.22 × 10 6 7191.65655.87 × 10 6 13391.32396.67 × 10 6 23930.540216.81 × 10 7
50,000 x 1 5190.437682.43 × 10 6 12351.4428.08 × 10 6 15451.72954.26 × 10 6 24962.27958.19 × 10 7
x 2 5190.419692.39 × 10 6 12352.45528.08 × 10 6 15451.83594.10 × 10 6 24972.96658.20 × 10 7
x 3 5190.420382.26 × 10 6 12351.70988.08 × 10 6 14431.8929.03 × 10 6 24972.87918.52 × 10 7
x 4 5180.408799.62 × 10 6 12351.99678.08 × 10 6 14422.25157.09 × 10 6 24973.50147.01 × 10 7
x 5 5180.413765.67 × 10 6 12352.30758.08 × 10 6 14422.37394.18 × 10 6 24973.36547.01 × 10 7
100,000 x 1 5190.76872.76 × 10 6 18553.68329.84 × 10 6 15452.89956.03 × 10 6 24974.89717.61 × 10 7
x 2 5191.06462.66 × 10 6 18553.22249.84 × 10 6 15452.85585.80 × 10 6 251004.59978.12 × 10 7
x 3 5190.823462.35 × 10 6 18553.19089.84 × 10 6 15452.5855.11 × 10 6 251004.81368.43 × 10 7
x 4 5191.14971.31 × 10 6 18553.29919.84 × 10 6 14432.60587.01 × 10 6 24974.88789.91 × 10 7
x 5 5180.79118.01 × 10 6 18553.42649.84 × 10 6 14422.75035.91 × 10 6 24974.68769.91 × 10 7
Table 9. Numerical results for the problem with serial number 8 in Table 1.
TTCD | PRP | FR | MHS | MCD | PM
Dimension Initial Point NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm NOI NFE TIME Norm
1000 x 1 7260.0381924.69 × 10 6 13400.0343558.78 × 10 6 10310.0263143.45 × 10 6 17700.0742828.77 × 10 7
x 2 7260.0276514.12 × 10 6 13390.0253737.19 × 10 6 9290.0223467.25 × 10 6 11460.0256415.26 × 10 7
x 3 6230.0177751.42 × 10 6 13390.0241136.85 × 10 6 7230.0185386.02 × 10 6 11470.0269053.38 × 10 7
x 4 7260.0221345.98 × 10 6 15450.0267035.54 × 10 6 10310.0256278.05 × 10 6 12510.026927.22 × 10 7
x 5 8280.0204083.31 × 10 6 15450.024987.95 × 10 6 10300.0241563.84 × 10 6 11470.0263378.31 × 10 7
5000 x 1 8290.100791.45 × 10 6 14421.89748.52 × 10 6 10311.13977.72 × 10 6 11461.08935.78 × 10 7
x 2 7260.0622459.22 × 10 6 14420.945255.16 × 10 6 10310.960325.28 × 10 6 15630.931475.24 × 10 7
x 3 6230.0599513.18 × 10 6 14421.13884.92 × 10 6 8250.995294.39 × 10 6 12510.81473.59 × 10 7
x 4 8290.068461.85 × 10 6 15460.818099.15 × 10 6 11341.19853.44 × 10 6 11460.965726.44 × 10 7
x 5 8280.0661977.40 × 10 6 16481.18765.71 × 10 6 10300.87078.60 × 10 6 11460.957895.72 × 10 7
10,000 x 1 8290.109142.06 × 10 6 14430.877158.91 × 10 6 10320.894776.40 × 10 6 12500.972865.16 × 10 7
x 2 8290.121.81 × 10 6 14421.26387.30 × 10 6 10311.14627.47 × 10 6 14581.17147.59 × 10 7
x 3 6230.10014.50 × 10 6 14421.58776.96 × 10 6 8250.371396.21 × 10 6 13551.07994.13 × 10 7
x 4 8290.37012.62 × 10 6 16481.9225.62 × 10 6 11341.05584.87 × 10 6 12501.42287.62 × 10 7
x 5 9310.120851.45 × 10 6 16481.41438.07 × 10 6 10310.230497.13 × 10 6 12501.1065.76 × 10 7
50,000 x 1 8290.490174.60 × 10 6 15452.6198.66 × 10 6 11341.35964.67 × 10 6 12511.83096.78 × 10 7
x 2 8290.453134.04 × 10 6 15452.55535.24 × 10 6 10321.30029.80 × 10 6 19782.04139.64 × 10 7
x 3 7260.394361.39 × 10 6 15452.17025.00 × 10 6 8261.26998.14 × 10 6 16671.88722.99 × 10 7
x 4 8290.439325.85 × 10 6 16492.52999.30 × 10 6 11351.89066.39 × 10 6 12511.55177.17 × 10 7
x 5 9310.514743.24 × 10 6 17511.60115.80 × 10 6 11331.47395.20 × 10 6 12501.8586.12 × 10 7
100,000 x 1 8290.867886.50 × 10 6 15462.62869.05 × 10 6 11342.28416.60 × 10 6 13552.66124.51 × 10 7
x 2 8290.867635.71 × 10 6 15452.80497.41 × 10 6 11342.11744.52 × 10 6 13552.86564.36 × 10 7
x 3 7260.766831.97 × 10 6 15452.28227.07 × 10 6 9282.00243.75 × 10 6 18753.92862.65 × 10 7
x 4 8291.24938.28 × 10 6 17513.22535.71 × 10 6 11352.36219.03 × 10 6 16663.23614.79 × 10 7
x 5 9310.90514.59 × 10 6 17512.01188.20 × 10 6 11332.36177.35 × 10 6 16662.66184.97 × 10 7
Table 10. Experimental findings regarding the signal recovery problems.
| TTCD | PCG | Algorithm 2.1a | Algorithm 2.1b
S/N | TIME | IT | MSE | TIME | IT | MSE | TIME | IT | MSE | TIME | IT | MSE
1 | 2.31 | 131 | 2.85 × 10⁻⁴ | 2.72 | 150 | 2.20 × 10⁻³ | 3.11 | 176 | 2.77 × 10⁻⁴ | 2.41 | 136 | 3.01 × 10⁻⁴
2 | 3.02 | 97 | 2.81 × 10⁻⁵ | 3.97 | 119 | 4.90 × 10⁻⁵ | 4.58 | 138 | 5.03 × 10⁻⁵ | 4.70 | 132 | 4.05 × 10⁻⁵
3 | 3.05 | 178 | 1.01 × 10⁻⁵ | 3.35 | 186 | 2.70 × 10⁻⁵ | 3.75 | 177 | 1.05 × 10⁻⁵ | 2.38 | 125 | 1.35 × 10⁻⁵
4 | 2.92 | 105 | 3.18 × 10⁻⁵ | 4.08 | 133 | 3.98 × 10⁻⁵ | 4.61 | 121 | 3.18 × 10⁻⁵ | 5.09 | 146 | 2.92 × 10⁻⁵
5 | 2.72 | 182 | 4.55 × 10⁻³ | 3.00 | 184 | 2.05 × 10⁻³ | 3.61 | 206 | 2.33 × 10⁻³ | 2.83 | 172 | 2.43 × 10⁻³
6 | 1.84 | 93 | 1.71 × 10⁻³ | 3.02 | 128 | 9.50 × 10⁻⁴ | 3.54 | 123 | 1.17 × 10⁻³ | 4.63 | 153 | 2.68 × 10⁻³
7 | 3.54 | 110 | 2.98 × 10⁻⁵ | 5.19 | 140 | 4.31 × 10⁻⁵ | 4.59 | 135 | 4.21 × 10⁻⁵ | 4.77 | 169 | 3.74 × 10⁻⁵
8 | 3.31 | 101 | 5.47 × 10⁻⁵ | 3.57 | 127 | 4.85 × 10⁻³ | 4.73 | 139 | 4.37 × 10⁻³ | 5.82 | 145 | 3.96 × 10⁻³
9 | 2.11 | 145 | 1.71 × 10⁻³ | 2.66 | 152 | 4.09 × 10⁻³ | 3.80 | 171 | 1.48 × 10⁻³ | 2.25 | 159 | 1.47 × 10⁻³
10 | 2.34 | 91 | 2.21 × 10⁻⁵ | 4.01 | 124 | 3.29 × 10⁻⁵ | 3.39 | 111 | 3.07 × 10⁻⁵ | 2.65 | 142 | 2.89 × 10⁻⁵
Average | 2.716 | 123.3 | 8.432 × 10⁻⁴ | 3.557 | 144.3 | 1.433 × 10⁻³ | 3.971 | 149.7 | 9.792 × 10⁻⁴ | 3.753 | 147.9 | 1.099 × 10⁻³
Yusuf, A.; Manjak, N.H.; Aphane, M. A Modified Three-Term Conjugate Descent Derivative-Free Method for Constrained Nonlinear Monotone Equations and Signal Reconstruction Problems. Mathematics 2024, 12, 1649. https://doi.org/10.3390/math12111649
