Abstract
In this paper, we propose, analyze, and test an alternative method for solving the ℓ1-norm regularization problem for recovering sparse signals and blurred images in compressive sensing. The method is motivated by the recently proposed nonlinear conjugate gradient method of Tang, Li, and Cui [Journal of Inequalities and Applications, 2020(1), 27], which is designed based on the least-squares technique. The proposed method minimizes a non-smooth objective consisting of a least-squares data-fitting term and an ℓ1-norm regularization term. The search directions generated by the proposed method are descent directions. In addition, under the monotonicity and Lipschitz continuity assumptions, we establish the global convergence of the method. Preliminary numerical results are reported to show the efficiency of the proposed method in practical computation.
Keywords:
image processing; compressed sensing; ℓ1-norm regularization; nonlinear equations; conjugate gradient method; projection method; global convergence

MSC:
65L09; 65K05; 90C30
1. Introduction
Discrete ill-posed problems are systems of linear equations arising from the discretization of ill-posed problems. Consider the linear system

$$ At = b, \tag{1} $$

where $t \in \mathbb{R}^{n}$ is an original signal, $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a linear map, and $b \in \mathbb{R}^{m}$ is the observed data. In particular, the original signal t is assumed to be sparse, that is, it has very few non-zero coefficients. Since $m \ll n$, the linear system (1) is usually referred to as an ill-conditioned or under-determined problem. In compressive sensing (CS), it is possible to regain the sparse signal t from the linear system (1) by finding the solution of the $\ell_0$-regularized problem

$$ \min_{t} \|t\|_{0} \quad \text{subject to} \quad At = b, \tag{2} $$
where $\|t\|_{0}$ denotes the number of nonzero components of t. Unfortunately, the $\ell_0$-norm problem is NP-hard in general. Based on this, researchers have developed an alternative model by replacing the $\ell_0$-norm with the $\ell_1$-norm. Thus, they solved the following problem:

$$ \min_{t} \|t\|_{1} \quad \text{subject to} \quad At = b. \tag{3} $$
Under some mild assumptions, Donoho [1] proved that the minimal $\ell_1$-norm solution of (3) is also the sparsest solution of (2). In most applications, the observed value b usually contains some noise; thus, problem (3) can be relaxed to the penalized least-squares problem

$$ \min_{t} \; \frac{1}{2}\|At - b\|_{2}^{2} + \tau \|t\|_{1}, \tag{4} $$
where $\tau > 0$ balances the trade-off between sparsity and residual error. Problems of the form (4) have become familiar over the past three decades, particularly in compressive sensing contexts. Interested readers may refer to the recent papers (Refs. [2,3]) for more details.
In order to address problem (4), several numerical algorithms have been proposed. For instance, Daubechies, Defrise, and De Mol [4] proposed the iterative shrinkage thresholding (IST) algorithm; thereafter, Beck and Teboulle [5] proposed the fast iterative shrinkage thresholding algorithm (FISTA). These algorithms are well known for their simplicity and efficiency. Likewise, Hale, Yin, and Zhang [6] proposed a fixed-point continuation method, and an acceleration technique based on a nonmonotone line search with the Barzilai–Borwein stepsize was introduced in [7]. He, Chang, and Osher [8] introduced an unconstrained formulation of the ℓ1-regularization problem, where the Bregman iterative approach was used to obtain solutions to problem (4). The proximal forward–backward splitting technique, based on the proximal operator introduced by Moreau [9], is another efficient technique for solving (4). Another class of methods for solving problem (4) uses the gradient descent idea. Quite recently, Figueiredo, Nowak, and Wright [10] first developed a gradient projection method to solve the sparse reconstruction problem. Thereafter, Xiao et al. [11] and Xiao and Zhu [12] proposed a spectral gradient method and a conjugate gradient projection method, respectively, to solve problem (4). Unlike IST and FISTA, the latter methods first transform problem (4) into a monotone operator equation (see Section 2) and then develop an algorithm to solve the resulting system of equations.
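To make the shrinkage idea concrete, the following is a minimal sketch of the IST iteration for problem (4) in Python/NumPy; the fixed step size and the stopping rule below are illustrative choices, not the tuned settings of [4] or [5].

```python
import numpy as np

def soft_threshold(x, kappa):
    """Componentwise soft-thresholding: the proximal operator of kappa*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

def ista(A, b, tau, max_iter=500, tol=1e-8):
    """Iterative shrinkage-thresholding for min 0.5*||At - b||^2 + tau*||t||_1.

    The fixed step size gamma = 1/||A^T A||_2 is a standard convergent choice.
    """
    t = np.zeros(A.shape[1])
    gamma = 1.0 / np.linalg.norm(A.T @ A, 2)   # inverse Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = A.T @ (A @ t - b)               # gradient of the least-squares term
        t_new = soft_threshold(t - gamma * grad, gamma * tau)
        if np.linalg.norm(t_new - t) <= tol * max(1.0, np.linalg.norm(t)):
            break
        t = t_new
    return t
```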
With the approximate equivalence between problem (4) and a system of monotone operator equations, one candidate for solving such systems is the conjugate gradient method. Considering the importance of the method, several extensions have been proposed; the three-term conjugate gradient method is one such extension. The first three-term conjugate gradient methods were introduced by Beale [13] and Nazareth [14] to weaken the conditions for global convergence of the two-term conjugate gradient method. Due to the additional parameter in three-term conjugate gradient schemes, establishing the sufficient descent property is more accessible than for the two-term schemes. To this end, three-term conjugate gradient methods have been presented, analyzed, and extensively studied in several references because of their advantages in the descent property and computational performance. The references [15,16,17] have proposed different three-term conjugate gradient methods and shown their specific properties, global convergence, and numerical performance. Tang, Li, and Cui [18] presented a new three-term conjugate gradient approach based on the least-squares technique. Their approach incorporates the advantages of two existing efficient conjugate gradient approaches and generates a sufficient descent direction without the aid of a line search procedure. Preliminary numerical tests indicate that their method is efficient. Due to the simplicity and low storage requirements of the conjugate gradient method, numerous researchers have recently extended conjugate gradient algorithms designed for unconstrained optimization to large-scale nonlinear equations. Using the popular CG_DESCENT method [19], Xiao and Zhu [12] constructed a conjugate gradient method (CGD) based on the projection scheme of Solodov and Svaiter [20] to solve monotone nonlinear operator equations with convex constraints. The method was also successfully used to recover sparse signals in compressive sensing. Interested readers may refer to the articles [21,22,23,24,25,26,27] for an overview of algorithms for solving monotone operator equations.
Inspired by the work of Xiao and Zhu [12], the least-squares-based three-term conjugate gradient method (LSTT) for unconstrained optimization by Tang, Li, and Cui [18], and the projection technique of Solodov and Svaiter [20], we study, analyze, and construct a derivative-free least-squares-based three-term conjugate gradient method to solve the ℓ1-norm problem arising from the reconstruction of sparse signals and images in compressive sensing. The method can be viewed as an extension of the LSTT method for unconstrained optimization combined with a projection technique. Under the monotonicity and Lipschitz continuity assumptions, the global convergence of the proposed method is established using a backtracking line search. Computational experiments are carried out to reconstruct sparse signals and images in compressive sensing. The numerical results indicate that the proposed method is efficient and robust.
The rest of the paper is organized as follows. In Section 2, we review the reformulation of problem (4) into a convex quadratic program by Figueiredo et al. [10]. In Section 3, we present the motivation and the general algorithm of the proposed method. The global convergence of the proposed algorithm is presented in Section 4. In Section 5, numerical experiments are presented to illustrate the efficiency of our algorithm. Unless otherwise stated, throughout this paper, the symbol $\|\cdot\|$ denotes the Euclidean norm. Furthermore, the projection map, denoted by $P_{\Omega}$, is a mapping from $\mathbb{R}^{n}$ onto the non-empty, closed, and convex subset $\Omega \subset \mathbb{R}^{n}$, that is,

$$ P_{\Omega}(x) = \arg\min\{\|x - y\| : y \in \Omega\}, $$

which has the well-known nonexpansive property, that is,

$$ \|P_{\Omega}(x) - P_{\Omega}(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}. \tag{5} $$
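As a small illustration of the definition and the nonexpansive property, the snippet below takes Ω = ℝⁿ₊ (the non-negative orthant used in Section 2) as an example, where the projection has a closed form:

```python
import numpy as np

def project_nonneg(x):
    """Projection onto Omega = R^n_+ : the closest non-negative point is max(x, 0)."""
    return np.maximum(x, 0.0)

# Numerical check of the nonexpansive property ||P(x) - P(y)|| <= ||x - y||.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
assert np.linalg.norm(project_nonneg(x) - project_nonneg(y)) <= np.linalg.norm(x - y)
```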
2. Reformulation of the Model
Figueiredo, Nowak, and Wright [10] reformulated the minimization problem (4) into a quadratic programming problem as follows. Any vector $t \in \mathbb{R}^{n}$ can be rewritten as

$$ t = u - v, \qquad u \ge 0, \; v \ge 0, $$

where $u_{i} = (t_{i})_{+}$ and $v_{i} = (-t_{i})_{+}$ for all $i = 1, \dots, n$, with $(\cdot)_{+} = \max\{0, \cdot\}$. Therefore, the $\ell_1$-norm can be represented as $\|t\|_{1} = e_{n}^{\top}u + e_{n}^{\top}v$, where $e_{n}$ is an n-dimensional vector with all elements one. Thus, (4) was rewritten as

$$ \min_{u, v} \; \frac{1}{2}\|A(u - v) - b\|_{2}^{2} + \tau e_{n}^{\top}u + \tau e_{n}^{\top}v, \quad u \ge 0, \; v \ge 0. \tag{6} $$

Moreover, from [10], with no difficulty, (6) can be rewritten as a quadratic program with box constraints. That is,

$$ \min_{z} \; \frac{1}{2}z^{\top}Hz + c^{\top}z, \quad z \ge 0, \tag{7} $$

where

$$ z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad y = A^{\top}b, \quad c = \tau e_{2n} + \begin{pmatrix} -y \\ y \end{pmatrix}, \quad H = \begin{pmatrix} A^{\top}A & -A^{\top}A \\ -A^{\top}A & A^{\top}A \end{pmatrix}. $$
Simple calculation shows that H is a positive semi-definite matrix. Hence, (7) is a convex quadratic program, and it is equivalent to the system of nonlinear equations

$$ F(z) := \min\{z, Hz + c\} = 0, \tag{8} $$

where the minimum is taken componentwise.
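The reformulation translates directly into code. The sketch below assembles H, c, and the residual map F of (8) from given A, b, and τ; forming H densely is only for clarity, since in practice one applies products with AᵀA instead:

```python
import numpy as np

def build_operator(A, b, tau):
    """Return H, c, and a function F(z) = min(z, Hz + c) for the QP (7)-(8)."""
    n = A.shape[1]
    AtA = A.T @ A
    y = A.T @ b
    # H = [AtA, -AtA; -AtA, AtA],  c = tau*e_{2n} + [-y; y]
    H = np.block([[AtA, -AtA], [-AtA, AtA]])
    c = tau * np.ones(2 * n) + np.concatenate([-y, y])
    F = lambda z: np.minimum(z, H @ z + c)      # componentwise minimum
    return H, c, F

def split_to_signal(z):
    """Recover t = u - v from the stacked variable z = [u; v]."""
    n = z.size // 2
    return z[:n] - z[n:]
```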
3. Algorithm
Recently, Tang, Li, and Cui [18] proposed a new three-term conjugate gradient method based on the least-squares technique for solving the following unconstrained optimization problem:

$$ \min_{x \in \mathbb{R}^{n}} f(x), $$

where the function $f : \mathbb{R}^{n} \to \mathbb{R}$ is continuously differentiable and its gradient $g(x) := \nabla f(x)$ is available. Similar to most conjugate gradient methods, the iterative scheme of the conjugate gradient method developed in [18] generates a sequence of iterates $\{x_k\}$ by letting

$$ x_{k+1} = x_{k} + \alpha_{k} d_{k}, $$

where $\alpha_{k} > 0$ is the steplength and the search direction $d_{k}$ is updated by

$$ d_{k+1} = -g_{k+1} + \beta_{k} d_{k} + \theta_{k} y_{k}, \qquad d_{0} = -g_{0}, $$

where $g_{k} = g(x_{k})$, $y_{k} = g_{k+1} - g_{k}$, and $\beta_{k}$ and $\theta_{k}$ are scalars computed by solving a two-parameter least-squares problem; their explicit formulas are given in [18].
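Since the explicit least-squares formulas for β_k and θ_k are not reproduced here, the sketch below illustrates the three-term structure with the descent-guaranteeing parameter choice of Zhang et al. [15] standing in for the LSTT parameters of [18]; by construction it satisfies $g_{k+1}^{\top} d_{k+1} = -\|g_{k+1}\|^{2}$.

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Three-term direction d = -g_new + beta*d_old - theta*y, with the
    parameters of Zhang et al. [15] used as a stand-in for the LSTT
    formulas of [18] (which are not reproduced here)."""
    y = g_new - g_old
    denom = np.dot(g_old, g_old)
    beta = np.dot(g_new, y) / denom
    theta = np.dot(g_new, d_old) / denom
    return -g_new + beta * d_old - theta * y   # gives g_new.T d = -||g_new||^2
```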
A valid approach for solving (8) is to use a derivative-free line search to determine the step-size [29]. To this end, we present the following derivative-free least-squares-based three-term conjugate gradient projection algorithm (Algorithm 1).
Algorithm 1 DF-LSTT

Input. Choose an arbitrary initial point $x_{0} \in \Omega$ and positive constants $\kappa > 0$, $\rho \in (0,1)$, and $\sigma > 0$.
Step 0. Set $d_{0} = -F(x_{0})$ and $k := 0$.
Step 1. Determine the step-size $\alpha_{k} = \kappa\rho^{i}$, where i is the smallest non-negative integer such that the following line search is satisfied:
$$ -F(x_{k} + \alpha_{k} d_{k})^{\top} d_{k} \ge \sigma \alpha_{k} \|F(x_{k} + \alpha_{k} d_{k})\| \|d_{k}\|^{2}. $$
Step 2. Compute
$$ w_{k} = x_{k} + \alpha_{k} d_{k}. $$
Step 3. If $w_{k} \in \Omega$ and $\|F(w_{k})\| = 0$, stop. Otherwise, compute the next iterate by
$$ x_{k+1} = P_{\Omega}\left[x_{k} - \zeta_{k} F(w_{k})\right], \qquad \zeta_{k} = \frac{F(w_{k})^{\top}(x_{k} - w_{k})}{\|F(w_{k})\|^{2}}. $$
Step 4. If the stopping criterion is satisfied, that is, if $\|F(x_{k+1})\|$ is sufficiently small, stop. Otherwise, compute the next search direction $d_{k+1}$ by the least-squares-based three-term formula, with F in place of the gradient and the safeguarded parameters described in Remark 1.
Step 5. Finally, set $k := k + 1$ and return to Step 1.
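A compact sketch of the overall scheme is given below, under stated assumptions: the direction update reuses the stand-in parameters from the earlier sketch (with F in place of the gradient), and the constants sigma, rho, and kappa are illustrative defaults rather than the tuned values of our experiments.

```python
import numpy as np
# Reuses three_term_direction from the sketch above.

def projection_cg(F, project, x0, sigma=1e-4, rho=0.5, kappa=1.0,
                  tol=1e-6, max_iter=1000):
    """Derivative-free three-term CG with the Solodov-Svaiter projection step."""
    x = x0.copy()
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        # Backtracking line search: accept alpha once the trial point w
        # satisfies -F(w)^T d >= sigma * alpha * ||F(w)|| * ||d||^2.
        alpha = kappa
        for _ in range(50):                      # cap the backtracking steps
            w = x + alpha * d
            Fw = F(w)
            if -Fw @ d >= sigma * alpha * np.linalg.norm(Fw) * (d @ d):
                break
            alpha *= rho
        # Hyperplane projection step of Solodov and Svaiter [20].
        zeta = Fw @ (x - w) / (Fw @ Fw)
        x_new = project(x - zeta * Fw)
        F_new = F(x_new)
        d = three_term_direction(F_new, Fx, d)   # stand-in direction (see above)
        x, Fx = x_new, F_new
    return x
```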
Remark 1.
In order to ensure that the parameters $\beta_{k}$ and $\theta_{k}$ are well defined when extending them to solve (4), we modify their denominators using a positive safeguarding scalar. Furthermore, to guarantee the boundedness of the search direction, we assume the operator under consideration to be monotone rather than uniformly monotone as assumed in [18].
Lemma 1.
The search direction $d_{k}$ generated by the DF-LSTT algorithm is a descent direction; that is, there exists a constant $c > 0$ such that

$$ F(x_{k})^{\top} d_{k} \le -c\|F(x_{k})\|^{2}. $$
Proof.
Now, by direct computation, we have
This completes the proof. □
4. Global Convergence
In this section, we investigate the global convergence property of the DF-LSTT algorithm for solving (8). For this purpose, we make the following assumptions.
Assumption 1.
- A1.
- The mapping F is Lipschitz continuous; that is, there exists a constant $L > 0$ such that
$$ \|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}. $$
- A2.
- Xiao et al. [11] proved that, for the problem (8), F is monotone; that is,
$$ (F(x) - F(y))^{\top}(x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^{n}. $$
Lemma 2.
Suppose that Assumption 1 holds. Let $\{w_k\}$ and $\{x_k\}$ be the sequences generated by (11) and (12) in the DF-LSTT algorithm. Then, the following statements hold:
- 1.
- The sequences $\{x_k\}$ and $\{w_k\}$ are bounded.
- 2.
- $\lim_{k \to \infty} \|x_{k} - w_{k}\| = 0$.
- 3.
- $\lim_{k \to \infty} \|x_{k+1} - x_{k}\| = 0$.
Proof.
Note that Equation (20) is obtained from the line search. From (5), we get
where (22) and (23) follow from (21).
From (24), it is easy to see that the sequence is bounded. That is,
Furthermore, by (18), we have
Letting , then
By the monotonicity property given in (19), we know that
Therefore, by the Cauchy–Schwarz inequality, we have
where the last inequality follows from (21). Thus, it is easy to obtain that
which implies that is bounded. Using the continuity of F, we know that there exists a constant , such that
It follows from (23) that
Adding (29) for , we obtain
Equation (30) implies that
Hence, the second assertion holds.
Theorem 1.
Suppose that Assumption 1 holds and let $\{x_k\}$ be the sequence generated by the DF-LSTT algorithm. Then,
$$ \liminf_{k \to \infty} \|F(x_{k})\| = 0. \tag{32} $$
Proof.
Suppose that conclusion (32) does not hold; that is, there exists $\varepsilon > 0$ such that
$$ \|F(x_{k})\| \ge \varepsilon, \quad \forall k \ge 0. \tag{33} $$
We note from (25), (27), (28), and (31) that the sequences generated by the algorithm are bounded. In addition, from the Lipschitz continuity and (25), we have
On the other hand, from the definition of it holds that
Thus, by the Cauchy–Schwarz inequality, we obtain
Therefore, from (14), it follows that
It follows from Lemma 1 that, letting $k \to \infty$, we arrive at a contradiction with (33). Hence, (32) holds. This completes the proof. □
5. Numerical Experiment
We present numerical experiments to show the efficiency of the DF-LSTT method. The experiments presented here are of two types. The first involves applying the DF-LSTT method to solve the ℓ1-norm regularization problem arising in compressive sensing. The second involves testing DF-LSTT on some given convex constrained nonlinear equations with different initial points and various dimensions. The implementations were performed in Matlab R2019b Update 1 (9.7.0.1216025, MathWorks, Inc., Massachusetts, USA) on an HP PC (Hewlett-Packard, California, USA) with a 2.4 GHz CPU and 8.0 GB of RAM, running the Windows 10 operating system.
5.1. Experiments on the ℓ1-Norm Regularization Problem in Compressive Sensing
We begin by considering a typical compressive sensing scenario, where the ultimate goal is to reconstruct a length-n sparse signal from m observations ($m \ll n$) with Gaussian noise; the number of samples is dramatically smaller than the size of the original signal. We compare DF-LSTT with the CGD conjugate gradient method [12] and the PCG projection method [30], both designed to solve nonlinear equations with convex constraints and signal recovery problems.
As a consequence of the limited memory of our PC, in this test we considered a signal of modest size, where the original signal t contains randomly placed non-zero elements. Similar to [11,12,31], the quality of the restored signal is assessed by the mean of squared error (MSE) relative to the original signal t, that is,

$$ \mathrm{MSE} = \frac{1}{n}\|\tilde{t} - t\|^{2}, $$
where $\tilde{t}$ is the restored signal. In this test, we generate the random matrix A using the Matlab command rand(n,k). In addition, noise is appropriately added to the observed data computed by

$$ b = At + \omega, $$

where $\omega$ is Gaussian noise. The DF-LSTT algorithm is implemented with the same parameter values throughout this test.
For the compared methods, their parameters are set as reported in their respective papers. In line with [32], we chose the parameter $\tau$ in the merit function accordingly, and all algorithms start from the same initial point. The process terminates when

$$ \frac{|f(t_{k}) - f(t_{k-1})|}{|f(t_{k-1})|} < \epsilon, $$

where $f(t_{k})$ denotes the merit function value at $t_{k}$ and $\epsilon$ is a small tolerance. Note that, for this test, we only observe the convergence behavior of each method in obtaining a solution of similar accuracy.
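For reproducibility, the snippet below sketches how such a test instance can be generated and scored; the sizes n and m, the sparsity level, and the noise scale are illustrative placeholders, not the exact values used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k_nonzero = 4096, 1024, 128          # illustrative sizes only

# Sparse ground-truth signal with k random non-zero entries.
t_true = np.zeros(n)
support = rng.choice(n, k_nonzero, replace=False)
t_true[support] = rng.standard_normal(k_nonzero)

# Random sensing matrix and noisy observations b = A t + omega.
A = rng.standard_normal((m, n)) / np.sqrt(m)
omega = 1e-2 * rng.standard_normal(m)      # Gaussian noise, illustrative scale
b = A @ t_true + omega

def mse(t_restored, t):
    """Mean of squared error between the restored and original signals."""
    return np.linalg.norm(t_restored - t) ** 2 / t.size
```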
In view of the plots depicted in Figure 1, DF-LSTT wins in decoding sparse signals in compressive sensing, requiring around 116 iterations and around 0.69 s. Table 1 contains the number of iterations, the mean of squared error (MSE), and the decoding time for the sparse signal, averaged over 10 runs for the algorithms tested. The results reported in Figure 2 show that, in decoding sparse signals in compressive sensing, DF-LSTT is faster than CGD and PCG, with clearly the lowest number of iterations.
Figure 1.
Illustration of the sparse signal recovery. From the top to the bottom is the original signal (First plot), the measurement (Second plot), and the reconstructed signals by DF-LSTT (Third plot), CGD (Fourth plot), and PCG (Fifth plot).
Table 1.
Results for sparse signal recovery.
Figure 2.
Comparison results of the DF-LSTT, CGD, and PCG algorithms on the signal recovery problem. The x-axes represent the number of iterations (top left and bottom left) and the CPU time in seconds (top right and bottom right). The y-axes represent the MSE (top left and top right) and the function values (bottom left and bottom right).
Next, the effectiveness and robustness of the DF-LSTT algorithm are illustrated on an image de-blurring problem. We carried out our experiment using some widely used test images obtained from http://hlevkin.com/06testimages.htm. DF-LSTT is compared with the state-of-the-art methods CGD proposed by Xiao and Zhu [12], SGCS [11], and MFRM [33].
The quality of restoration by the methods is measured by the signal-to-noise ratio (SNR), defined as

$$ \mathrm{SNR} = 20 \times \log_{10}\left(\frac{\|t\|}{\|\tilde{t} - t\|}\right), $$

the peak signal-to-noise ratio (PSNR) [34], and the structural similarity index (SSIM) [35]. For fairness in comparing the algorithms, the iteration process of all algorithms started from the same initial point and terminates under the same stopping rule, and the parameters in our implementation were fixed for the whole image de-blurring experiment. We tested several images, including Tiffany, Lena, and Barbara, degraded by Gaussian blur and 10% Gaussian noise.
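The first two metrics translate to a few lines of code. The sketch below computes SNR and PSNR, assuming 8-bit images with peak value 255; SSIM is omitted, as it is typically taken from a library implementation.

```python
import numpy as np

def snr(restored, original):
    """Signal-to-noise ratio in dB: 20*log10(||x|| / ||x_restored - x||)."""
    return 20.0 * np.log10(np.linalg.norm(original) /
                           np.linalg.norm(restored - original))

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming pixel values in [0, peak]."""
    mse = np.mean((restored - original) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```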
In Table 2, we report the SNR, PSNR, and SSIM for the DF-LSTT, CGD, SGCS, and MFRM methods in recovering blurred and noisy images. We can see that the SNR, PSNR, and SSIM of the test images calculated by the DF-LSTT algorithm are slightly higher than those of CGD, SGCS, and MFRM. Higher values of SNR, PSNR, and SSIM reflect better restoration quality.
Table 2.
Numerical results of DF-LSTT, CGD, SGCS, and MFRM methods in image restoration.
Based on the performance reported in Table 2, the DF-LSTT algorithm restores blurred and noisy images quite well and obtains better-quality reconstructed images in an efficient manner. Figure 3 and Figure 4 show the original and blurred images, while Figure 5 shows the images restored by each method.
Figure 3.
The original test images: Tiffany (left), Lena (middle), and Barbara (right).
Figure 4.
Blurred and noisy Barbara and Lena test images.
Figure 5.
Restored images by DF-LSTT (left column), CGD (left middle column), SGCS (right middle column), and MFRM (right column).
5.2. Experiments on Some Large-Scale Monotone Nonlinear Equations
In this subsection, we evaluate the performance of the proposed conjugate gradient method in solving nonlinear equations with convex constraints. We compare the proposed method with CGD [12] and PCG [30]. For each test problem, the iteration stops when $\|F(x_{k})\|$ falls below a prescribed tolerance. We also stop the algorithms when the number of iterations exceeds 1000 without achieving convergence. The algorithms are tested using seven different initial points, one of which is randomly generated, and we ran the algorithms for dimensions ranging from 1000 to 100,000. The parameters for the proposed algorithm are kept fixed across all runs; for the CGD and PCG algorithms, all parameters are chosen as in [12,30], respectively. We use the following well-known benchmark test functions, where the mapping F is taken componentwise as $F(x) = (f_{1}(x), f_{2}(x), \dots, f_{n}(x))^{\top}$.
Problem 1.
This problem is the Exponential function [36] with constraint set , that is,
Problem 2.
Modified Logarithmic function [36] with constraint set that is,
Problem 3.
The function [37] with defined by
Problem 4.
The Strictly convex function [38], with constraint set that is,
Problem 5.
Strictly convex function II [38], with constraint set that is,
Problem 6.
Tridiagonal Exponential function [39] with constraint set that is,
Problem 7.
Nonsmooth function [40] with constraint set
Problem 8.
The function with defined by
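To illustrate how such test problems are coded, the snippet below implements one common form of the Exponential function (the componentwise formula f_i(x) = exp(x_i) − 1 with Ω = ℝⁿ₊ is an assumption for illustration; see [36] for the exact statement) and feeds it to the solver sketch from Section 3.

```python
import numpy as np
# Reuses projection_cg from the sketch in Section 3.

def exponential_F(x):
    """One common form of the Exponential test function: f_i(x) = exp(x_i) - 1.
    (Assumed here for illustration; see [36] for the exact statement.)"""
    return np.exp(x) - 1.0

x0 = 0.5 * np.ones(1000)                      # one illustrative initial point
x_star = projection_cg(exponential_F, lambda z: np.maximum(z, 0.0), x0)
print(np.linalg.norm(exponential_F(x_star)))  # residual norm at the solution
```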
In Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, we report the computational results obtained from the implementation of the DF-LSTT, CGD, and PCG algorithms. We report the number of iterations (ITER), the number of function evaluations (FVAL), and the CPU time in seconds (TIME).
Table 3.
Numerical test reports for the three tested methods for Problem 1.
Table 4.
Numerical test reports for the three tested methods for Problem 2.
Table 5.
Numerical test reports for the three tested methods for Problem 3.
Table 6.
Numerical test reports for the three tested methods for Problem 4.
Table 7.
Numerical test reports for the three tested methods for Problem 5.
Table 8.
Numerical test reports for the three tested methods for Problem 6.
Table 9.
Numerical test reports for the three tested methods for Problem 7.
Table 10.
Numerical test reports for the three tested methods for Problem 8.
We employ the widely used performance profile metric of Dolan and Moré [41] to compare the performance of the methods. The profile measures the ratio of each method's computational outcome to the computational outcome of the best presented method. It operates in the following manner. Let $S$ and $P$ denote the set of methods and test problems, respectively. In this section, we treat a problem with a different dimension or a different initial point as a new problem. For $n_{s}$ methods and $n_{p}$ problems, the performance profile is defined as follows: for each problem $p \in P$ and each method $s \in S$, they define $t_{p,s}$ to be the computational outcome (e.g., the number of iterations or the CPU time) required by method s to solve problem p. The performance ratio is given by

$$ r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}. $$

Then, the performance profile is defined by

$$ \rho_{s}(\tau) = \frac{1}{n_{p}} \, \mathrm{size}\{p \in P : r_{p,s} \le \tau\}. $$
We note that $\rho_{s}(\tau)$ is the probability that method s is within a factor $\tau$ of the best possible ratio. Obviously, for a given value of $\tau$, the method with the highest value of $\rho_{s}(\tau)$ is preferable. As usual, we build the performance profiles from the number of iterations, the number of function evaluations, and the computing time (CPU time). Figure 6 and Figure 7 show the performance profiles obtained in the sense of Dolan and Moré. Both figures indicate that the performance of the proposed method is competitive with the other methods. Moreover, from both Figure 6 and Figure 7, DF-LSTT is more efficient because it was able to solve a higher percentage of the test problems with fewer iterations and function evaluations than the CGD and PCG methods. However, with respect to CPU time, our algorithm did not perform as well; this could be a result of the additional computations involved in the method.
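The profile itself is straightforward to compute. A minimal sketch, assuming a matrix T of outcomes with one row per problem and one column per method, and failures encoded as infinity:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More profile: rho_s(tau) = fraction of problems on which
    method s is within a factor tau of the best method for that problem."""
    ratios = T / T.min(axis=1, keepdims=True)          # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau)     # rho_s(tau)
                      for s in range(T.shape[1])] for tau in taus])

# Example: 3 problems x 2 methods (e.g., iteration counts); inf = failure.
T = np.array([[10.0, 12.0], [30.0, 25.0], [np.inf, 40.0]])
print(performance_profile(T, taus=[1.0, 1.5, 2.0]))
```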
Figure 6.
Performance of compared methods relative to the number of iterations.
Figure 7.
Performance of compared methods relative to the number of function evaluations.
6. Conclusions
In this paper, we have presented a derivative-free conjugate gradient method for solving the ℓ1-norm regularization problem by combining the projection technique with the direction proposed in [18]. Unlike the uniform convexity assumption used in establishing the convergence of the method in [18], our convergence was established under the monotonicity and Lipschitz continuity assumptions. Our numerical experiments in recovering sparse signals and blurred images indicate the efficient and robust behavior of the proposed algorithm. For instance, in recovering sparse signals, the proposed algorithm is faster than the compared algorithms; in addition, it requires fewer iterations and achieves the smallest mean squared error. Moreover, our algorithm is able to restore blurred and noisy images with better quality, which is reflected in the values of the SNR, PSNR, and SSIM. Furthermore, numerical experiments on a set of monotone problems with different initial points and dimensions were reported. However, when applied to monotone operator equations, the proposed method does not perform as well in terms of CPU time; this could be a result of the additional computations involved in the method.
Author Contributions
Conceptualization, A.H.I.; Formal analysis, A.H.I.; Funding acquisition, P.K.; Methodology, J.A.; Project administration, P.K.; Resources, A.B.M.; Software, A.B.A. and J.A.; Supervision, P.K.; Writing–original draft, A.H.I.; Writing–review and editing, A.B.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Petchra Pra Jom Klao Scholarship, grant number 16/2561.
Acknowledgments
The authors are very grateful to Jinkui Liu of Chongqing Three Gorges University, Chongqing, China, for his kind offer of the source codes for the signal reconstruction problem. We would also like to thank the anonymous referees for their valuable comments. The authors acknowledge the support provided by the Theoretical and Computational Science (TaCS) Center under the Computational and Applied Science for Smart research Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant No. 16/2561).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Donoho, D.L. For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
- Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar]
- Candes, E.; Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23, 969. [Google Scholar] [CrossRef]
- Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. J. Issued Courant Inst. Math. Sci. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. CAAM TR07-07 Rice Univ. 2007, 43, 44. [Google Scholar]
- Huang, S.; Wan, Z. A new nonmonotone spectral residual method for nonsmooth nonlinear equations. J. Comput. Appl. Math. 2017, 313, 82–101. [Google Scholar] [CrossRef]
- He, L.; Chang, T.C.; Osher, S. MR image reconstruction from sparse radial samples by using iterative refinement procedures. In Proceedings of the 13th Annual Meeting of ISMRM, Seattle, WA, USA, 6–12 May 2006; Volume 696. [Google Scholar]
- Moreau, J.J. Fonctions Convexes Duales et Points Proximaux dans un Espace Hilbertien. 1962. Available online: http://www.numdam.org/article/BSMF_1965__93__273_0.pdf (accessed on 26 February 2020).
- Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
- Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
- Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
- Beale, E.M.L. A derivation of conjugate gradients. In Numerical Methods for Nonlinear Optimization; Lootsma, F.A., Ed.; Academic Press: London, UK, 1972. [Google Scholar]
- Nazareth, L. A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 1977, 23, 373–387. [Google Scholar] [CrossRef]
- Zhang, L.; Zhou, W.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 2006, 26, 629–640. [Google Scholar] [CrossRef]
- Andrei, N. On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 2013, 219, 6316–6327. [Google Scholar] [CrossRef]
- Liu, J.; Li, S. New three-term conjugate gradient method with guaranteed global convergence. Int. J. Comput. Math. 2014, 91, 1744–1754. [Google Scholar] [CrossRef]
- Tang, C.; Li, S.; Cui, Z. Least-squares-based three-term conjugate gradient methods. J. Inequalities Appl. 2020, 2020, 27. [Google Scholar] [CrossRef]
- Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154. [Google Scholar] [CrossRef]
- Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control. Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
- Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2018, 82, 245–262. [Google Scholar] [CrossRef]
- Liu, J.; Xu, J.; Zhang, L. Partially symmetrical derivative-free Liu–Storey projection method for convex constrained equations. Int. J. Comput. Math. 2019, 96, 1787–1798. [Google Scholar] [CrossRef]
- Ibrahim, A.H.; Garba, A.I.; Usman, H.; Abubakar, J.; Abubakar, A.B. Derivative-free RMIL conjugate gradient algorithm for convex constrained equations. Thai J. Math. 2019, 18, 212–232. [Google Scholar]
- Abubakar, A.B.; Rilwan, J.; Yimer, S.E.; Ibrahim, A.H.; Ahmed, I. Spectral three-term conjugate descent method for solving nonlinear monotone equations with convex constraints. Thai J. Math. 2020, 18, 501–517. [Google Scholar]
- Ibrahim, A.H.; Kumam, P.; Abubakar, A.B.; Jirakitpuwapat, W.; Abubakar, J. A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing. Heliyon 2020, 6, e03466. [Google Scholar] [CrossRef] [PubMed]
- Abubakar, A.B.; Kumam, P.; Awwal, A.M. Global convergence via descent modified three-term conjugate gradient projection algorithm with applications to signal recovery. Results Appl. Math. 2019, 4, 100069. [Google Scholar] [CrossRef]
- Abubakar, A.B.; Kumam, P.; Awwal, A.M. An inexact conjugate gradient method for symmetric nonlinear equations. Comput. Math. Methods 2019, 1, e1065. [Google Scholar] [CrossRef]
- Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71. [Google Scholar] [CrossRef]
- Zhou, W.; Li, D. Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math. 2007, 25, 89–96. [Google Scholar]
- Liu, J.; Li, S. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
- Wan, Z.; Guo, J.; Liu, J.; Liu, W. A modified spectral conjugate gradient projection method for signal recovery. Signal Image Video Process. 2018, 12, 1455–1462. [Google Scholar] [CrossRef]
- Kim, S.; Koh, K.; Lustig, M.; Boyd, S.; Gorinevsky, D. A method for large-scale ℓ1-regularized least squares. IEEE J. Sel. Top. Signal Process. 2007, 1, 606–617. [Google Scholar] [CrossRef]
- Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M.; Sitthithakerngkiet, K. A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications. Mathematics 2019, 7, 745. [Google Scholar] [CrossRef]
- Bovik, A.C. Handbook of Image and Video Processing; Academic Press: Cambridge, MA, USA, 2010. [Google Scholar]
- Lajevardi, S.M. Structural similarity classifier for facial expression recognition. Signal Image Video Process. 2014, 8, 1103–1110. [Google Scholar] [CrossRef]
- La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
- La Cruz, W. A spectral algorithm for large-scale systems of nonlinear monotone equations. Numer. Algorithms 2017, 76, 1109–1130. [Google Scholar] [CrossRef]
- Wang, C.; Wang, Y.; Xu, C. A projection method for a system of nonlinear monotone equations with convex constraints. Math. Methods Oper. Res. 2007, 66, 33–46. [Google Scholar] [CrossRef]
- Bing, Y.; Lin, G. An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1991, 1, 206–221. [Google Scholar] [CrossRef]
- Yu, G.; Niu, S.; Ma, J. Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints. J. Ind. Manag. Optim. 2013, 9, 117–129. [Google Scholar] [CrossRef]
- Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).