Article
Peer-Review Record

A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications

Mathematics 2019, 7(8), 745; https://doi.org/10.3390/math7080745
by Auwal Bala Abubakar 1,2, Poom Kumam 1,3,4,*, Hassan Mohammad 2, Aliyu Muhammed Awwal 1,5 and Kanokwan Sitthithakerngkiet 6
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 24 June 2019 / Revised: 1 August 2019 / Accepted: 5 August 2019 / Published: 15 August 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Round 1

Reviewer 1 Report

First of all, I want to say that any work providing a new, efficient algorithm for solving a nonlinear inverse problem is important and should be shared with other specialists who solve applied inverse problems in practical applications.


However, I must say that there are some major flaws in the present version of the paper which should be corrected before publication.


1. In the proposed method (Algorithm 2.3), the authors introduce parameters \mu, \sigma, and \rho, which are, in fact, regularisation parameters, so an algorithm for choosing them should also be presented. At present, these are merely heuristic parameters, an incorrect choice of which could significantly influence the solution of the corresponding problem.


2. The numerical test problems are not representative (except Problem 6 and Problem 8) because all of them consist of equations with only one unknown variable. Moreover, Problem 8, which has a fully implicit nonlinear system of equations, shows that the proposed MFRM method has no notable advantage over the existing ACGD and PDY methods.


3. The numerical experiments on solving sparse signal problems are also not impressive, because many other methods give much more appropriate results (for example, see [1] for the reconstruction of the corresponding "cameraman" test image with 90% missing samples).


Below, some minor flaws are highlighted.


1. It is better to assume that F: R^n \to R^m (where m >= n), because in practice we usually solve overdetermined problems.


2. Figures 1 and 3 have horizontal axes which do not correspond to the captions of these figures.


3. The description of the signal recovery problem uses a parameter \beta that contradicts the \beta introduced after formula (2).


4. The introduction does not mention the advantages of the direct methods [2], such as the Boundary Control method proposed by M. Belishev (see, for example, [3]), the globally convergent method proposed by M. Klibanov (see, for example, [4]), and the method based on the multidimensional analogs of the Gelfand-Levitan-Krein equations proposed by S. Kabanikhin and M. Shishlenin (see, for example, [5,6]).


References:


[1] A. Gholami and M. Hosseini. A balanced combination of Tikhonov and total variation regularizations for reconstruction of piecewise-smooth signals // Signal Processing. 2013. V. 93. P. 1945-1960.

[2] S. I. Kabanikhin. Definitions and examples of inverse and ill-posed problems // Journal of Inverse and Ill-Posed Problems. 2008. V. 16.  N. 4. P. 317–357.

[3] M. I. Belishev, Y. V. Kurylev. Boundary control, wave field continuation and inverse problems for the wave equation // Computers & Mathematics with Applications. 1991. V. 22. P. 27–52.

[4] L. Beilina, M. V. Klibanov. A globally convergent numerical method for a coefficient inverse problem // SIAM Journal on Scientific Computing. 2008. V. 31.  N. 1. P.  478–509.

[5] S. I. Kabanikhin, M. A. Shishlenin. Boundary control and Gelfand-Levitan-Krein methods in inverse acoustic problem // Journal of Inverse and Ill-Posed Problems. 2004. V. 12. N. 2. P. 125–144.

[6] D. V. Lukyanenko, V. B. Grigorev, V. T. Volkov, M. A. Shishlenin. Solving of the coefficient inverse problem for a nonlinear singularly perturbed two-dimensional reaction-diffusion equation with the location of moving front data // Computers and Mathematics with Applications. 2019. V. 77. N. 5. P. 1245-1254. 


Author Response

Response and Corrections for Referee’s Report 1

Comments and Suggestions for Authors

First of all, I want to say that any work that can provide a new efficient algorithm for solving a nonlinear inverse problem is important for consideration and should be shared with other specialists whose specialization is solving of applied inverse problems in practical applications.

However, I must say that there are some major flaws in the present version of the paper which should be corrected before publication.

Comment 1: In the proposed method (Algorithm 2.3), the authors introduce some parameters μ, σ, and ρ, which, in fact, are regularisation parameters. So an algorithm for choosing them should also be presented. Now, these parameters are only some heuristic parameters, incorrect choice of which could significantly influence the solution of the corresponding problem.

Response: We have given a remark after Algorithm 2.3 to explain the reason we choose μ > 0. As for ρ and σ, they are standard parameters which come from the derivation of the line search procedure.

Comment 2: The numerical test problems are not representative (except Problem 6 and Problem 8) because all of them consist of equations with only one unknown variable. Moreover, Problem 8, which has a fully implicit nonlinear system of equations, shows that the proposed MFRM method does not have any notable advantages compared with the existing ACGD and PDY methods.

Response: All the problems used in the experiments are standard benchmark problems in this area. They were obtained from references [1, 2, 5, 7, 8], which are reputable journals. In addition, for fairness, all the problems considered were also considered by either ACGD or PDY.

Comment 3: The numerical experiments on solving sparse signal problems are also not impressive because there are many other methods which can give much more appropriate results (for example, see [1] for the reconstruction of corresponding "cameraman" test image for 90% missing samples).

Response: In our case, we are solving an l1-regularization problem by reformulating it into a nonlinear equation (which was shown by Xiao et al. [6] to be Lipschitz continuous and monotone). Our algorithm falls into the class of first-order approaches for solving l1-regularization problems (see Figueiredo et al. [3]), while the method in [4] falls into the class of second-order approaches. Furthermore, that method was used to solve an l2-regularization problem without reformulation into a nonlinear equation.
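For readers unfamiliar with this reformulation, the splitting used by Xiao et al. can be sketched in a few lines: writing x = u − v with u, v ≥ 0 turns the l1 problem into a convex quadratic program in z = (u; v), whose optimality system is the monotone equation F(z) = Hz + c over z ≥ 0. The sizes, data, and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 6, 4, 0.1                # toy problem sizes and l1 weight (assumed)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Split x = u - v with u, v >= 0; the l1 problem becomes a QP in z = (u; v)
# with Hessian H and linear term c as below.
AtA = A.T @ A
H = np.block([[AtA, -AtA], [-AtA, AtA]])                      # positive semidefinite
c = lam * np.ones(2 * n) + np.concatenate([-A.T @ b, A.T @ b])

def F(z):
    # The resulting mapping; monotone because H is positive semidefinite,
    # and Lipschitz continuous because F is affine.
    return H @ z + c

# Monotonicity check: (F(z1) - F(z2))^T (z1 - z2) = (z1 - z2)^T H (z1 - z2) >= 0.
z1, z2 = rng.standard_normal(2 * n), rng.standard_normal(2 * n)
gap = (F(z1) - F(z2)) @ (z1 - z2)
```

Monotonicity and Lipschitz continuity here follow directly from H being positive semidefinite, which is what makes projection-type conjugate gradient methods applicable to the reformulated problem.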

Below, some minor flaws are highlighted

Comment 1: It is better to assume that F: R^n → R^m (where m ≥ n), because in practice we usually solve overdetermined problems.

Response: We have done that as suggested by the reviewer.

Comment 2: Figures 1 and 3 have horizontal axes which do not correspond to the captions of these figures.

Response: This is not clear to us, we need more explanation.
Comment 3: The description of the signal recovery problem uses a parameter β that contradicts the β introduced after formula (2).

Response: In the description of the signal recovery problem, we have replaced the parameter β with γ to avoid the contradiction.

Comment 4: The introduction does not mention the advantages of the direct methods [2], such as the Boundary Control method proposed by M. Belishev (see, for example, [3]), the globally convergent method proposed by M. Klibanov (see, for example, [4]), and the method based on the multidimensional analogs of the Gelfand-Levitan-Krein equations proposed by S. Kabanikhin and M. Shishlenin (see, for example, [5,6]).

Response: We have mentioned these methods in the introduction, as suggested by the reviewer. This can be found in the second-to-last paragraph of the introduction in the revised manuscript.


Reviewer 2 Report

In the paper, the authors present a modified Fletcher-Reeves conjugate gradient projection method for constrained monotone equations. The method possesses the sufficient descent property, and its global convergence is proved under appropriate assumptions. Two sets of numerical experiments were carried out to show the good performance of the proposed method compared with some existing ones. The first experiment solves monotone constrained nonlinear equations on benchmark test problems, while the second applies the method to signal and image recovery problems arising from compressive sensing. The paper needs polishing in English. The bibliography is too long, like a survey; it may be reduced. I suggest accepting after minor modification.

Author Response

Response and Corrections for Referee’s Report 2

Comments and Suggestions for Authors: In the paper, the authors present a modified Fletcher-Reeves conjugate gradient projection method for constrained monotone equations. The method possesses the sufficient descent property, and its global convergence is proved under appropriate assumptions. Two sets of numerical experiments were carried out to show the good performance of the proposed method compared with some existing ones. The first experiment solves monotone constrained nonlinear equations on benchmark test problems, while the second applies the method to signal and image recovery problems arising from compressive sensing. The paper needs polishing in English. The bibliography is too long, like a survey; it may be reduced. I suggest accepting after minor modification.

Response: Thanks for the observation by the referee. We have reduced the number of references from 54 to 44. Furthermore, we have done our best to polish the English.


Reviewer 3 Report

The paper deals with a conjugate gradient root-finding method for monotone nonlinear systems. The paper is interesting; however, there are several issues that are not well explained and should be elaborated.
The assumption on the system (1), namely that F is monotone, is very restrictive. E.g., do all the examples on image processing yield this kind of system? This is not clear and should be discussed in more detail. And if so, an example where the mapping is not monotone should also be added to show the limitations of the proposed method. In most applications, nonlinear systems are not monotone.
Section 2 should be introduced a bit more. There is a huge logic jump (from line 80 to 81) which basically assumes the reader is very familiar with [38]. I strongly recommend adding a paragraph further introducing that paper. For example, what are d, beta, theta? z_{k-1} is not defined at all...
I also recommend not using forward pointers unless strictly necessary. For example, p. 3, line 92 refers to Eq. (9), which appears much later in the paper; this is very disruptive.
I am also very much missing an illustration of one iteration of Algorithm 2.3, e.g., showing a simple 2D system with the five steps described using only formulas. It is not mandatory, but it would greatly improve the understanding of the method.
The proposed method belongs to local root-finding methods that find a root from an initial guess. When looking for *all* roots within a domain, typically a box in R^m, subdivision solvers are frequently used. Several relevant references to be considered are:
van Sosin, B., & Elber, G. (2017). Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Computer-Aided Design, 90, 37-47.
Aizenshtein, M., Bartoň, M., & Elber, G. (2012). Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Computer Aided Geometric Design, 29(5), 265-279.
Bartoň, M. (2011). Solving polynomial systems using no-root elimination blending schemes. Computer-Aided Design, 43(12), 1870-1878.
all being subdivision solvers that, in the proximity of a root, e.g., when a single root is guaranteed inside a domain, use Newton's method. But subdivision solvers guarantee finding all roots within a prescribed numerical error. The proposed method is claimed to find 64 zeros in Fig. 4, but how is this achieved? How are the initial guesses set? This is not clear and should be explained.
The numerical results consider an error of 10^-5, which is far above the standard thresholds (e.g., double float, 10^-16). Why is higher precision not considered? Is it a limitation of the method? If not, why is the error set so large? This issue must be clarified.
Another issue that deserves additional explanation is the choice of the initial guess. All the initial points were selected on the diagonal; why? What about initialization by points outside the diagonal?
The results should be discussed better too; what are t and P(t) in Figs. 1 and 2? This is not said.
typos and minor issues:
- line 23: unless if empty -> is,
- line 224: kernals -> kernels,
- page 19, caption of fig.6: full stop missing,
- line 274: Levenberg-marquardt -> Marquardt.


Author Response

Response and Corrections for Referee’s Report 3

Comments and Suggestions for Authors: The paper deals with a conjugate gradient root-finding method for monotone nonlinear systems. The paper is interesting; however, there are several issues that are not well explained and should be elaborated.

Comment 1: The assumption on the system (1), namely that F is monotone, is very restrictive. E.g., do all the examples on image processing yield this kind of system? This is not clear and should be discussed in more detail. And if so, an example where the mapping is not monotone should also be added to show the limitations of the proposed method. In most applications, nonlinear systems are not monotone.

Response: The general form of the l1-regularization problem was reformulated into a nonlinear system of equations that is monotone and Lipschitz continuous. The details are given in Section 4.1.

Comment 2: Section 2 should be introduced a bit more. There is a huge logic jump (from line 80 to 81) which basically assumes the reader is very familiar with [38]. I strongly recommend adding a paragraph further introducing that paper. For example, what are d, β, θ? z_{k−1} is not defined at all...

Response: We have further introduced the paper [38] which is [43] in the revised manuscript.

Comment 3: I also recommend not using forward pointers unless strictly necessary. For example, p. 3, line 92 refers to Eq. (9), which appears much later in the paper; this is very disruptive.

Response: The forward pointer is removed.

Comment 4: I am also very much missing an illustration of one iteration of Algorithm 2.3, e.g., showing a simple 2D system with the five steps described using only formulas. It is not mandatory, but it would greatly improve the understanding of the method.

Response: To illustrate the implementation of Algorithm 2.3 in the manuscript, we consider the Lipschitz continuous and monotone function in R^n given by

F(x) = −1,      if x < 0,
       x − 1,   if 0 ≤ x ≤ 1,       (1)
       0,       if x > 1,

where 1 is the vector with all entries equal to one, 0 is the zero vector, and the inequalities are meant componentwise. Clearly, the solution vector of the above example is (1, 1, ..., 1)^T. Let n = 2; the steps of the algorithm are given below:

Step 0: Set k = 0, x_0 = (−1, −1)^T, μ = 0.01, σ = 0.0001, ρ = 0.9, γ = 1 and Tol = 10^{−5}.

Step 1: F(x_0) = (−1, −1)^T, ||F(x_0)|| = sqrt(2) > 10^{−5}.

Step 2: d_0 = −F(x_0) = (1, 1)^T.

Step 3: Initialize i = 0 and let α = γρ^i = 0.9^i, so the first trial step is α = 1. Then −F(x_0 + α d_0)^T d_0 = 2 and σ α ||F(x_0 + α d_0)|| ||d_0||^2 = 2.8284 × 10^{−4}. This implies the line search condition holds, so we choose α_0 = 1.

Step 4: z_0 = x_0 + α_0 d_0 = (0, 0)^T and F(z_0) = (−1, −1)^T. Since ||F(z_0)|| > Tol, we compute the next iterate x_1 = P_E(x_0 − ζ_0 F(z_0)) = 10^{−15} × (0.222, 0.222)^T.

Step 5: Set k = 1 and repeat Steps 1–4 until the stopping condition is satisfied.
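The iteration above can also be sketched in code. This is a simplified stand-in, not the paper's MFRM method: it uses the steepest-descent-like direction d = −F(x) of the illustrated first step rather than the full conjugate gradient direction, and it assumes the piecewise form F(x) = max(x, 0) − 1 and the nonnegative orthant as the feasible set E.

```python
import numpy as np

def F(x):
    # Assumed piecewise-linear test function: monotone, Lipschitz continuous,
    # equal to -1 for x < 0 and to x - 1 for x >= 0 (componentwise),
    # with the all-ones vector as a solution.
    return np.maximum(x, 0.0) - 1.0

def projection_method(x, sigma=1e-4, rho=0.9, gamma=1.0, tol=1e-5, max_iter=100):
    """Sketch of a derivative-free hyperplane-projection scheme."""
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        d = -Fx  # simplified direction; the MFRM direction is more elaborate
        # Backtracking line search: -F(x + a*d)^T d >= sigma*a*||F(x + a*d)||*||d||^2
        a = gamma
        while -F(x + a * d) @ d < sigma * a * np.linalg.norm(F(x + a * d)) * (d @ d):
            a *= rho
        z = x + a * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:
            return z, k + 1
        # Hyperplane projection step, then projection onto E = {x >= 0} (assumed).
        zeta = Fz @ (x - z) / (Fz @ Fz)
        x = np.maximum(x - zeta * Fz, 0.0)
    return x, max_iter

x_star, iters = projection_method(np.array([-1.0, -1.0]))
```

Starting from x_0 = (−1, −1)^T as in the worked example, this sketch reaches the solution (1, 1)^T within a couple of iterations.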


Comment 5: The proposed method belongs to local root-finding methods that find a root from an initial guess. When looking for *all* roots within a domain, typically a box in R^m, subdivision solvers are frequently used. Several relevant references to be considered are: van Sosin, B., and Elber, G. (2017). Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Computer-Aided Design, 90, 37-47. Aizenshtein, M., Bartoň, M., and Elber, G. (2012). Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Computer Aided Geometric Design, 29(5), 265-279. Bartoň, M. (2011). Solving polynomial systems using no-root elimination blending schemes. Computer-Aided Design, 43(12), 1870-1878. All are subdivision solvers that, in the proximity of a root, e.g., when a single root is guaranteed inside a domain, use Newton's method. But subdivision solvers guarantee finding all roots within a prescribed numerical error. The proposed method is claimed to find 64 zeros in Fig. 4, but how is this achieved? How are the initial guesses set? This is not clear and should be explained.

Response: This is not what we mean. What we mean is that the original signal is in the form of a matrix which has 2^6 randomly placed nonzero elements.

Comment 6: The numerical results consider the error of 10^{−5}, which is far above the standard thresholds (e.g. double float, 10^{−16}). Why is higher precision not considered? Is it a limitation of the method? If not, why is the error set so big? This issue must be clarified.

Response: We used the error of 10^{−5} because both methods we compared with used the same error. As for limitations, we are not sure, because we have not used such an error. Thanks for the observation.

Comment 7: Another issue that deserves additional explanation is the choice of the initial guess. All these initial points were selected on the diagonal, why? What about initialization by points outside diagonal?

Response: For both the ACGD and PDY methods, the initial points were selected on the diagonal, so we feel it is fair to use similar initial points.

Comment 8: The results should be discussed better too; what are t and P(t) in Figs. 1 and 2? This is not said.

Response: The explanations of t and P(t) are provided as suggested by the reviewer. They can be found just below the list of test problems in the revised manuscript.
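For reference, t and P(t) in such comparison figures are presumably the performance-ratio threshold and profile value in the Dolan-Moré sense: P(t) is the fraction of test problems on which a solver's cost is within a factor t of the best solver's cost. A minimal sketch of how such a profile is computed (toy data; lower cost assumed better):

```python
import numpy as np

def performance_profile(T, ts):
    """Dolan-More performance profile.

    T[i, s] is solver s's cost (e.g., iterations or CPU time) on problem i;
    the result's [j, s] entry is P(ts[j]) for solver s: the fraction of
    problems where solver s is within a factor ts[j] of the best solver.
    """
    ratios = T / T.min(axis=1, keepdims=True)  # performance ratios per problem
    return np.array([[np.mean(ratios[:, s] <= t) for s in range(T.shape[1])]
                     for t in ts])

# Toy data: 3 problems, 2 solvers.
T = np.array([[10.0, 20.0],
              [30.0, 15.0],
              [12.0, 12.0]])
P = performance_profile(T, ts=[1.0, 2.0])
```

At t = 1 each solver is best (or tied) on 2 of the 3 toy problems, so both profiles start at 2/3; by t = 2 both reach 1.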

typos and minor issues
Comment 1: line 23: unless if empty → is

Response: This correction has been effected.
Comment 2: line 224: kernals → kernels
Response: The correction is done.
Comment 3: page 19, caption of fig.6: full stop missing.
Response: The full stop has been inserted as observed.
Comment 4: line 274: Levenberg-marquardt → Marquardt.

Response: This has been corrected as suggested.

Finally, we would like to thank the anonymous reviewers for their valuable comments, which helped improve the manuscript.


References

[1] Y. Bing and G. Lin. An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations. SIAM Journal on Optimization, 1(2):206–221, 1991. doi: 10.1137/0801015.

[2]  Yanyun Ding, Yunhai Xiao, and Jianwei Li. A class of conjugate gradient methods for convex constrained monotone equations. Optimization, 66(12):2309–2328, 2017.

[3]  Mário AT Figueiredo, Robert D Nowak, and Stephen J Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of selected topics in signal processing, 1(4):586–597, 2007.

[4] Ali Gholami and S. Mohammad Hosseini. A balanced combination of Tikhonov and total variation regularizations for reconstruction of piecewise-smooth signals. Signal Processing, 93(7):1945–1960, 2013. ISSN 0165-1684.

[5] W. La Cruz, J. Martínez, and M. Raydan. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Mathematics of Computation, 75(255):1429–1448, 2006.

[6]  Yunhai Xiao, Qiuyu Wang, and Qingjie Hu. Non-smooth equations based method for l1-norm problems with applications to compressed sensing. Nonlinear Analysis: Theory, Methods & Applications, 74(11):3570–3577, 2011.

[7]  Z. Yu, J. Lin, J. Sun, Y. H. Xiao, L. Y. Liu, and Z. H. Li. Spectral gradient projection method for monotone nonlinear equations with convex constraints. Applied Numerical Mathematics, 59(10): 2416–2423, 2009.

[8]  W. J. Zhou and D. H. Li. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Mathematics of Computation, 77(264):2231–2240, 2008.






Round 2

Reviewer 1 Report

I have not any comments about the revised version of the manuscript. The manuscript has been improved and now warrants publication in Mathematics.

Author Response

Title: A modified Fletcher-Reeves conjugate gradient method for monotone nonlinear equations with some applications

Manuscript Number: mathematics-544328

Authors: A.B. Abubakar, P. Kumam, H. Mohammad and A.M. Awwal

Submitted to Mathematics (Special Issue: Iterative Methods for Solving Nonlinear Equations and Systems), June 2019

Response and Corrections for Referee’s Report 1
Comment: I have no comments about the revised version of the manuscript. The manuscript has been improved and now warrants publication in Mathematics.

Response: Thank you for your comments, which helped substantially in improving the quality of the manuscript.

Finally, we would like to thank the anonymous reviewers for their valuable comments, which helped improve the manuscript.


Author Response File: Author Response.pdf

Reviewer 3 Report

This is a revision of a paper that deals with a conjugate gradient root-finding method for monotone nonlinear systems. Moderate/major revisions were required by several reviewers in the first round. Unfortunately, I do not think that comments have been well addressed and that the paper has been sufficiently improved for publication.
Reviewer #1 asked for an algorithm to compute the regularizers; however, this is only vaguely commented on in the response letter, and I do not see how it is addressed in the paper at all.
Another criticism that has not been sufficiently addressed is the numerical accuracy. The results still use 10^-5 accuracy, which is claimed, arguably enough, to be the standard threshold (also used in some other papers). I strongly disagree on this point and insist on an example showing the performance of the algorithm at double-float (10^-16) accuracy. The standard computational error in most C/C++ codes is double float, not 10^-5.
Several relevant papers on global non-linear solvers have been suggested, but all have been completely ignored. These subdivision-based solvers are *global* methods that guarantee to find *all* roots inside a domain and within a very fine double-float accuracy, while the proposed approach is only a *local* method that looks only for the closest root. This fact should be clearly discussed in the introduction and a proper link to subdivision solvers should be made.
Another point that has not been clarified is the monotonicity of the system (1), namely how this very restrictive assumption is validated in all the examples on image processing. Are all the systems monotone? And if not, what is the limitation of the method? These questions have not been answered.
Overall, I think that the revision has not addressed the issues raised in the first round and the paper needs another round of revision before being considered for publication again.


Author Response

Title: A modified Fletcher-Reeves conjugate gradient method for monotone nonlinear equations with some applications

Manuscript Number: mathematics-544328

Authors: A.B. Abubakar, P. Kumam, H. Mohammad and A.M. Awwal

Submitted to Mathematics (Special Issue: Iterative Methods for Solving Nonlinear Equations and Systems), June 2019

Response and Corrections for Referee’s Report 3

Comments and Suggestions for Authors

This is a revision of a paper that deals with a conjugate gradient root-finding method for monotone nonlinear systems. Moderate/major revisions were required by several reviewers in the first round. Unfortunately, I do not think that comments have been well addressed and that the paper has been sufficiently improved for publication.

Comment 1: Reviewer 1 asked for an algorithm to compute the regularizers; however, this is only vaguely commented on in the response letter, and I do not see how it is addressed in the paper at all.

Response: Reviewer 1 mentioned in his comments that we introduced the regularization parameters μ, σ, ρ. However, we are not the first to introduce such parameters. The parameters σ and ρ were introduced by Solodov and Svaiter [1]. As for μ, to the best of our knowledge, it was first introduced by Yuan and Zhang [3], and we have given reasons for choosing μ > 0. Basically, there is no algorithm for choosing the parameters. What is done is to keep changing the values of the parameters (within their domains) and running the algorithm until you get the best parameters for your algorithm. If there is any algorithm for choosing the parameters, we are not aware of it at present.
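The trial-and-error tuning described above can be sketched as a small grid sweep over the parameters. The solver below is a simplified, hypothetical stand-in (toy monotone F, steepest-descent direction), not the paper's MFRM algorithm; it only illustrates the tuning loop.

```python
import itertools
import numpy as np

def F(x):
    # Hypothetical monotone, Lipschitz test function with the all-ones
    # vector as a solution: F(x) = -1 for x < 0, x - 1 for x >= 0.
    return np.maximum(x, 0.0) - 1.0

def run(sigma, rho, tol=1e-5, max_iter=200):
    """Simplified projection scheme; returns the iteration count used."""
    x = np.array([-1.0, -1.0])
    for k in range(max_iter):
        if np.linalg.norm(F(x)) <= tol:
            return k
        d = -F(x)
        a = 1.0  # backtracking line search governed by sigma and rho
        while -F(x + a * d) @ d < sigma * a * np.linalg.norm(F(x + a * d)) * (d @ d):
            a *= rho
        z = x + a * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:
            return k + 1
        x = np.maximum(x - (Fz @ (x - z) / (Fz @ Fz)) * Fz, 0.0)
    return max_iter  # treated as failure

# Sweep a small grid of (sigma, rho) values and keep the cheapest setting.
grid = list(itertools.product([1e-4, 1e-2], [0.5, 0.9]))
best_sigma, best_rho = min(grid, key=lambda p: run(*p))
```

In practice the grid would be swept over a whole test set (and over μ as well), keeping the setting with the best aggregate performance.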

Comment 2: Another criticism that has not been sufficiently addressed is the numerical accuracy. The results still use 10^{−5} accuracy, which is claimed, arguably enough, to be the standard threshold (also used in some other papers). I strongly disagree on this point and insist on an example showing the performance of the algorithm at double-float (10^{−16}) accuracy. The standard computational error in most C/C++ codes is double float, not 10^{−5}.

Response: An example showing the performance of the algorithm at double-float (10^{−16}) accuracy is presented in Table 9, as suggested by the reviewer.

Comment 3: Several relevant papers on global non-linear solvers have been suggested, but all have been completely ignored. These subdivision-based solvers are *global* methods that guarantee to find *all* roots inside a domain and within a very fine double-float accuracy, while the proposed approach is only a *local* method that looks only for the closest root. This fact should be clearly discussed in the introduction and a proper link to subdivision solvers should be made.

Response: We have clearly discussed the reviewer's suggestion in the introduction, together with a proper link to subdivision solvers.

Comment 4: Another point that has not been clarified is the monotonicity of the system (1), namely how this very restrictive assumption is validated in all the examples on image processing. Are all the systems monotone? And if not, what is the limitation of the method? These questions have not been answered. Overall, I think that the revision has not addressed the issues raised in the first round and the paper needs another round of revision before being considered for publication again.

Response: Not all systems are monotone (response to "Are all the systems monotone?"), but our algorithm can only handle those that are monotone (response to "what is the limitation of the method?"). We have mentioned in the introduction that our proposed algorithm deals with monotone nonlinear equations; furthermore, in Subsection 4.1 of the manuscript, we adopted the idea of [2] and transformed the l1-regularization problems into an equivalent monotone nonlinear equation so that our proposed algorithm can handle such problems.

Finally, we would like to thank the anonymous reviewer for his valuable comments, which helped improve the manuscript.

References

[1] Michael V. Solodov and Benar F. Svaiter. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pages 355–369. Springer, 1998.

[2] Yunhai Xiao and Hong Zhu. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. Journal of Mathematical Analysis and Applications, 405(1):310–319, 2013.

[3] Gonglin Yuan and Maojun Zhang. A three-term Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations. Journal of Computational and Applied Mathematics, 286:186–195, 2015.


Author Response File: Author Response.pdf

Round 3

Reviewer 3 Report

The authors have addressed/clarified most of the issues raised in the previous rounds. Only very minor issues remain to be fixed. The term "norm" used in the tables refers to the error between the approximate and the exact solution. Does it refer to the Euclidean or l_1 norm? This is not clearly said. Typos: page 9: we haven -> have.

Author Response

Response and Corrections for Referee’s Report 3

Comments and Suggestions for Authors: The authors have addressed/clarified most of the issues raised in the previous rounds. Only very minor issues remain to be fixed.

Comment 1: The term "norm" used in the tables refers to the error between the approximate and the exact solution. Does it refer to the Euclidean or l1 norm? This is not clearly said.

Response: Yes, it refers to the Euclidean norm. This is clearly mentioned in Section 2 of the revised manuscript.

Comment 2: typos: page 9: we haven → have.

Response: This has been corrected. Thanks for the correction.

Finally, we would like to thank the anonymous reviewer for his valuable comments, which helped improve the manuscript.

Author Response File: Author Response.pdf
