Article

A Family of Developed Hybrid Four-Term Conjugate Gradient Algorithms for Unconstrained Optimization with Applications in Image Restoration

1 Department of Mathematics, College of Science and Arts Sharourah, Najran University, P.O. Box 1988, Najran 68341, Saudi Arabia
2 Department of Mathematics & Computer Science, Faculty of Science, Alexandria University, Alexandria 5424041, Egypt
3 Educational Research and Development Center Sanaa, Sanaa 31220, Yemen
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(6), 1203; https://doi.org/10.3390/sym15061203
Submission received: 8 May 2023 / Revised: 31 May 2023 / Accepted: 1 June 2023 / Published: 4 June 2023
(This article belongs to the Section Mathematics)

Abstract: The most important advantages of conjugate gradient (CG) methods are their low memory requirements and fast convergence. This paper contains two main parts that deal with two application problems, as follows. In the first part, three new parameters of the CG methods are designed and then combined by employing a convex combination. The search direction is a four-term hybrid form of modified classical CG methods with some newly proposed parameters. The result of this hybridization is a newly developed hybrid CG method, called CGCG, containing four terms. The proposed CGCG method satisfies the sufficient descent property. The convergence analysis of the proposed method is considered under some reasonable conditions. A numerical investigation is carried out for an unconstrained optimization problem. The comparison between the newly suggested algorithm (CGCG) and five other classical CG algorithms shows that the new method is competitive with, and in all cases superior to, the five methods in terms of efficiency, reliability, and effectiveness in solving large-scale, unconstrained optimization problems. The second main part of this paper discusses the image restoration problem. By using the adaptive median filter method, the noise in an image is detected, and then the corrupted pixels of the image are restored by using a new family of modified hybrid CG methods. This new family has four terms: the first is the negative gradient; the second consists of either the HS-CG method or the HZ-CG method; and the third and fourth terms are taken from our proposed CGCG method. Additionally, a change in the size of the filter window, made according to the noise level, plays a key role in improving the performance of this family of CG methods. Four famous images (test problems) are used to examine the performance of the new family of modified hybrid CG methods. The outstanding clearness of the restored images indicates that the new family of modified hybrid CG methods has reliable efficiency and effectiveness in dealing with image restoration problems.

1. Introduction

CG parameters are widely utilized for dealing with a great variety of different optimization problems.
Many researchers have focused on modifying CG parameters in order to augment the corresponding CG strategies. Those newly proposed approaches have enhanced the performance of the conventional CG methods.
The CG algorithm has been applied in numerous areas, such as neural networks, image processing, medical science, operational research, engineering problems, etc.
Thus, the different problems generated in these applications can be formulated in various forms, such as unconstrained, constrained, multi-objective optimization problems, nonlinear systems, etc.
Therefore, this study concentrates on an unconstrained optimization problem defined as
$\min_{x \in \mathbb{R}^n} f(x),$
where the function $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. CG methods have very powerful convergence properties and modest storage requirements.
Among the most important advantages of the CG method are its low memory requirements and convergence speed [1].
Thus, many authors have analyzed CG methods in order to solve large-scale minimization problems and other applications [2].
Accordingly, many authors have suggested several modifications of the classical methods that deal with optimization problems, e.g., the Newton method [3], the quasi-Newton method [4], the semi-gradient method [5], the hybrid gradient meta-heuristic algorithm [6,7], and the CG method [8,9,10,11].
Different versions of the Newton method are widely employed to solve multiple different optimization problems because of their fast convergence rates, e.g., [12,13,14,15,16,17].
However, these methods are very expensive since they require a calculation of the exact or approximate Jacobian matrix for each iteration. Therefore, it is best to avoid utilizing these methods when solving large-scale optimization problems.
Hence, the use of the CG technique has been more widespread compared to other conventional methods because of its simplicity, lower storage requirements, efficiency, and optimal convergence features [1,18,19].
Accordingly, the choice of this topic for analysis rests on the following considerations.
Conjugate gradient methods (CGs) are associated with a very strong global convergence theory for a local minimizer and they have low memory requirements. Moreover, in practice, combining the CG method with a line search strategy showed merit in dealing with an unconstrained minimization problem [20,21,22,23,24,25].
Additionally, CG parameters have shown remarkable superiority in solving problems involving systems of nonlinear equations (see, for example, [26,27,28,29,30,31,32,33,34,35]). According to previous successful uses of CG techniques to solve different application problems, many authors have adapted CG methods such that they are capable of dealing with image restoration problems (see, for example, [25,35,36,37,38,39,40,41,42,43]).
Many researchers have shown that the CG method can also be used as a mathematical tool for training deep neural networks (see, for example, [44,45,46,47,48]).
Generally, the sequence of candidate solutions of Problem (1) is generated using an iterative formula of the following form:
$x_{k+1} = x_k + \alpha_k d_k,$
where $x_k$ is the current point, $\alpha_k > 0$ is the step size obtained using a line search technique, and $d_k$ is the search direction determined by the conjugate parameter $\beta_k$.
The sequence of the step size α k and the search direction d k can be generated through various approaches. These approaches depend on many concepts that have been implemented for designing different formulas of α k , d k , and β k .
The short and simple formula used to determine the search direction is defined as follows:
$d_k = -g_k + \beta_k d_{k-1}, \quad d_0 = -g_0,$
where $g_k = g(x_k)$, $\beta_k$ is known as the CG parameter, and $g(x_k)$ represents the gradient vector of the function $f$ at $x_k$.
A core difference between all the proposed CG methods is the form of the parameter β k .
The proposal of new CG parameters of the classical CG methods has led to improvements in said methods’ performance in dealing with many problems in different applications.
The classical CG methods include the HS-CG method proposed by the authors of [9]; the FR-CG parameter proposed by the authors of [8]; the PRP-CG method proposed by Polak and Ribière [11] and Polyak [49]; the LS-CG method proposed by Liu and Storey [10]; and the DY-CG method proposed by Dai and Yuan [50]. The $\beta_k$ of these CG algorithms contains a single term.
These CG parameters are listed in the following formulas:
$\beta_k^{HS} = \dfrac{y_k^T g_{k+1}}{d_k^T y_k},$
where $y_k = g_{k+1} - g_k$.
$\beta_k^{FR} = \dfrac{\|g_{k+1}\|^2}{\|g_k\|^2},$
$\beta_k^{PRP} = \dfrac{y_k^T g_{k+1}}{\|g_k\|^2},$
$\beta_k^{LS} = -\dfrac{g_{k+1}^T y_k}{d_k^T g_k},$
$\beta_k^{DY} = \dfrac{\|g_{k+1}\|^2}{y_k^T d_k},$
where $\|\cdot\|$ is the Euclidean norm and $y_k = g_{k+1} - g_k$.
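To make the notation concrete, the following Python sketch (not part of the original paper; function and variable names are illustrative) evaluates the classical coefficients (4)–(8) and the corresponding two-term direction from the current gradient, the previous gradient, and the previous direction:

```python
import numpy as np

def classical_beta(name, g_new, g_old, d_old):
    """Classical CG coefficients (4)-(8); g_new = g_{k+1}, g_old = g_k, d_old = d_k.

    A minimal sketch: safeguards against zero denominators are omitted.
    """
    y = g_new - g_old                       # y_k = g_{k+1} - g_k
    if name == "HS":                        # Hestenes-Stiefel
        return (y @ g_new) / (d_old @ y)
    if name == "FR":                        # Fletcher-Reeves
        return (g_new @ g_new) / (g_old @ g_old)
    if name == "PRP":                       # Polak-Ribiere-Polyak
        return (y @ g_new) / (g_old @ g_old)
    if name == "LS":                        # Liu-Storey
        return -(g_new @ y) / (d_old @ g_old)
    if name == "DY":                        # Dai-Yuan
        return (g_new @ g_new) / (y @ d_old)
    raise ValueError(f"unknown CG parameter: {name}")

def cg_direction(g_new, g_old, d_old, name="PRP"):
    """Two-term CG search direction d_{k+1} = -g_{k+1} + beta_k d_k."""
    return -g_new + classical_beta(name, g_new, g_old, d_old) * d_old
```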
Recently, many novel formulas for determining the parameter $\beta_k$ have been suggested, with those corresponding to CG methods including two or three terms (see [21,22,25,51,52]).
Several papers have reported convergence analyses of such CG methods (see [42,53]).
For example, Hager and Zhang [54] presented the following CG-method, which contains three terms:
$\beta_k^{HZ*} = \max\{\beta_k^{HZ}, r_k\},$
where $\beta_k^{HZ} = \dfrac{(y_k^T g_k)(d_{k-1}^T y_k) - 2\|y_k\|^2 (d_{k-1}^T g_k)}{(d_{k-1}^T y_k)^2}$ and $r_k = \dfrac{-1}{\|d_{k-1}\| \min\{r, \|g_{k-1}\|\}}$.
In their numerical experiments, the authors of [54] set $r = 0.01$.
If employed appropriately, remarkable results can be obtained when using the CG method to solve the many different problems posed in several applications.
Hence, the modifications and additions to and recommendations for conventional CG techniques are undertaken to develop an updated version of the CG method or a novel technique with widespread applications.
These propositions and modifications have either one term or multiple terms.
For example, Jiang et al. [25] recently proposed a family of combination 3-term CG techniques for solving nonlinear optimization problems and aiding image restoration.
The authors of [25] combined the parameter β k D Y with each parameter in the set { β k H S , β k P R P , β k L S } to obtain a family of CG methods. They define the direction as follows:
$d_k = \begin{cases} -g_k & \text{for } k = 1, \\ -g_{k+1} + (1-\lambda_k)\beta_k^{new} d_k - \gamma\lambda_k \dfrac{g_{k+1}^T g_k}{\|g_k\|^2} g_k & \text{otherwise}, \end{cases}$
where $0 < \gamma < 1$ and $\lambda_k = \dfrac{|g_{k+1}^T d_k|}{\|g_{k+1}\|\,\|d_k\|}$. The authors of [25] used the convex combination technique developed by [35]. In addition to their proposal in Formula (10), they suggested a new CG parameter, which was defined as follows:
$\beta_k^{N} = \dfrac{g_{k+1}^T y_k}{\|g_k\|^2 - \xi (g_k^T d_k)},$
where $0 < \xi < 1$. Then, they combined their new parameter $\beta_k^N$ with the parameter $\beta_k^{DY}$ to obtain a new algorithm that solves Problem (1) and can be used in image restoration; furthermore, they performed a convergence analysis of this family of combination 3-term CG methods.
Huo et al. [55] proposed a new CG method containing three parameters in order to solve Problem (1). Moreover, under reasonable assumptions, the author of [55] established the global convergence of their method.
Tian et al. [56] suggested a new hybrid three-term CG approach without using a line search method. The authors of [56] designed the new hybrid descent three-term direction in the following form:
$d_{k+1}^{N} = -g_{k+1} + \beta_{k+1}^{N} s_k - \sigma_{k+1}^{N} y_k, \quad d_0 = -g_0,$
$\beta_{k+1}^{N} = \dfrac{g_{k+1}^T (y_k - \gamma s_k)}{\max\{y_k^T s_k, \|g_k\|^2\}},$
$\sigma_{k+1}^{N} = \dfrac{g_{k+1}^T s_k}{\max\{y_k^T s_k, \|g_k\|^2\}},$
where, when $y_k^T s_k = \|g_k\|^2 = 0$, the stopping criterion is satisfied and the optimal solution $x^* = x_k$ is obtained. They proceeded to provide some remarks regarding their proposed directions. In addition, the authors of [56] successfully performed a convergence analysis of their newly proposed CG method (under certain conditions). The numerical outcomes demonstrate that the three-term CG method is more efficient and reliable in terms of solving large-scale unconstrained optimization problems compared to other methods.
Jian et al. [57] proposed a new spectral SCGM approach, i.e., the core study of the authors in [57], which consisted of designing a spectral parameter and a conjugate parameter using the SCGM method. They proved the global convergence of the suggested SCGM method. The approach proposed by the authors of [57] for yielding the spectral parameter is as follows:
$\phi_k^{JYJLL} = 1 + \dfrac{|g_k^T d_{k-1}|}{-g_{k-1}^T d_{k-1}}.$
The authors clearly showed that $\phi_k^{JYJLL} \ge 1$ when the prior direction $d_{k-1}$ is a descent direction.
In addition, they proposed a new conjugate parameter, which was defined as follows:
$\beta_k^{JYJLL} = \dfrac{\|g_k\|^2 - (g_k^T d_{k-1})^2}{\max\{\|g_{k-1}\|^2,\ d_{k-1}^T(g_k - g_{k-1})\}}.$
Subsequently, they presented a convergence-analysis-based proof of the SCGM method. Using the SCGM method, the authors of [57] solved an unconstrained optimization problem with many dimensions.
There are other studies that employ the same convex combination concept with different combination parameters (see, for example, [58]).
Yuan et al. [52] proposed a search direction for the CG method given by the following formula:
$d_k = -g_k + \beta_k^{MHZ} d_{k-1}, \quad d_0 = -g_0,$
where the authors in [52] defined the parameter β k M H Z as follows:
$\beta_k^{MHZ} = \dfrac{(y_k^T g_k)(d_{k-1}^T y_k) - 2\|y_k\|^2 (d_{k-1}^T g_k)}{\max\{\sigma \|y_k\|^2 \|d_k\|^2,\ (d_{k-1}^T y_k)^2\}},$
where σ > 0.5 is a constant. The authors of [52] presented a convergence analysis proof of β k M H Z with some conditions.
Abubakar et al. [20] proposed the following CG-method (a new modified LS method (NMLS)):
$d_0 = -g_0, \qquad d_k = \begin{cases} -g_k & \text{if } g_k^T y_{k-1} \le 0, \text{ for } k > 0, \\ \hat d_k & \text{otherwise, for } k > 0, \end{cases}$
where $\hat d_k$ is defined as follows:
$\hat d_k = \begin{cases} -\gamma_k g_k + \beta_k^{MLS} d_{k-1} & \text{if } g_k^T d_{k-1} > 0, \text{ for } k > 0, \\ -g_k + \beta_k^{LS} d_{k-1} & \text{otherwise, for } k > 0, \end{cases}$
where $y_{k-1} = g_k - g_{k-1}$, $\gamma_k = 1 + \beta_k^{LS}\dfrac{g_k^T d_{k-1}}{\|g_k\|^2}$, the parameter $\beta_k^{LS}$ is defined by (7), and $\beta_k^{MLS}$ is defined by
$\beta_k^{MLS} = \left(1 - \dfrac{g_k^T s_{k-1}}{g_k^T y_{k-1}}\right)\beta_k^{LS} - t\,\dfrac{\|y_{k-1}\|^2}{g_k^T y_{k-1}}, \quad t \ge 0.$
The convergence analysis of this method was carried out by Abubakar et al. [20]. In addition, numerical results were obtained by solving unconstrained minimization problems arising in applications in motion control.
Alhawarat et al. [59] proposed a new CG method that depends on a convex combination of two different search directions of the CG method; then, they employed this method to solve an unconstrained optimization problem involving image restoration. Their proposed method produced remarkable results. The numerical results of the method proposed in [59] show that the convex combination allows the new CG method to provide more efficient results than the other compared methods.
Therefore, the convex combination technique plays a critical role in improving the performance of any new version of a CG method. The above arguments indicate that merging two or more parameters of the classical CG methods offers promising results.
Accordingly, such promising results encouraged us to continue this pattern of improving the performance of the classical CG techniques.
Many modifications and performance enhancements of the CG parameters have recently been proposed by numerous authors. Such recently developed CG parameters include two terms: the first one denotes the negative gradient vector, while the second term indicates the parameter proposed by the cited authors (see, for example, [21,22,25]).
A brief review of the ideas presented in the literature has inspired us to design a new convex hybrid CG technique. This new convex combined technique has four terms, the first of which is the negative gradient vector, while the other three terms are the proposed parameters multiplied by $d_k$.
Therefore, the major contributions of this paper include the following aspects.
  • The presentation of some suggestions for and modifications to some classical CG parameters with new parameters;
  • Three new parameters $\beta_k^{Nj}$ for CG techniques, which are combined together, where $j = 1, 2, 3$;
  • Through the above procedures, a newly suggested CG approach has been designed, which we dubbed the “Convex Group Conjugated Gradient” method, shortened to CGCG;
  • The performance of a convergence analysis of the new CGCG approach;
  • Numerical investigations are offered, which were executed by solving a set of test minimization problems;
  • The adaptation of the CGCG algorithm yielded a family of modified CG methods that can be used to deal with image restoration problems;
  • By applying the adaptive median filter method, the noise in an image is detected;
  • According to the noise level in the corrupted image, a change in the size of the filter window is considered for improving the performance of this family of CG methods;
  • Four corrupted images (test problems) with salt-and-pepper noise (at 30–90% levels) are used to examine the performance of the new family of modified hybrid CG methods.
The rest of the current study is arranged as follows. In the next section, the CGCG method and convergence analysis proof are presented. Section 3 presents the numerical experiments concerning the set of optimization test problems solved using the CGCG method and five other traditional CG methods.
Section 4 offers a brief discussion about image processing, including some applications of the adaptive CGCG method (a member of the family of CG methods) in image restoration. Section 5 presents the numerical experiments conducted with the proposed family of CG methods, which concern the solution to an image restoration problem. The last section provides some concluding remarks.

2. CGCG Method

The proposed CGCG method involves modifications to and suggestions for several classical CG-methods, which are listed in (4)–(8). In addition, this CGCG method contains new parameters, which are presented in this paper.
Therefore, the details of the newly proposed CGCG-method are listed in Algorithm 1 and used to solve Problem (1). The convergence analysis proof of the CGCG algorithm is also presented in this section. In addition, the CGCG method is adapted and modified in order to yield the family of CG-methods listed in Algorithms 2–4. These algorithms can be used in image restoration problems, as we will see in Section 4.
Algorithm 1 A Convex Group Conjugated Gradient Method (CGCG).
Input:  $f: \mathbb{R}^n \to \mathbb{R}$, $f \in C^1$, $\delta \in (0, 0.5)$ and $\sigma \in (\delta, 1)$, $k = 0$, an initial point $x_k \in \mathbb{R}^n$, and $\varepsilon > 0$.
Output:  $x^*$ is the optimal solution.
  1: Set $d_0 = -g_0$ and $k := 0$.
  2: while $\|g_{k+1}\| > \varepsilon$ do
  3:    Compute $\alpha_k$ to fulfill (31) and (32).
  4:    Calculate a new point $x_{k+1} = x_k + \alpha_k d_k$.
  5:    Compute $f_k = f(x_{k+1})$, $g_{k+1} = g(x_{k+1})$, and the direction $d_{k+1}$ by (30).
  6:    Set $k = k + 1$.
  7: end while
  8: return $x^*$, the optimal solution.
Algorithm 2 CGCG-HS2 Algorithm
Step 1: Inputs: Original image (O_img), Noise Ratio (N.R) = $\{30\%, 50\%, 70\%, 90\%\}$, $\alpha = 100$, and W_max = $\{3 \times 3, 5 \times 5, 7 \times 7, 9 \times 9\}$.
Step 2: Corrupt the original image (O_img) with salt-and-pepper noise to obtain a noised image (N_img).
Step 3: Apply the adaptive median filter algorithm (A.M.F.A) for each noise level and W_max.
Step 4: Detect the noisy pixels of the noised image (N_img).
Step 5: Use Formulas (53) and (55) to remove the noise from the corrupted pixels.
Step 6: Output: Repaired Image (R_img) ≈ (O_img)
Algorithm 3 CGCG-HS1 Algorithm
Step 1: Inputs: Original image (O_img), Noise Ratio (N.R) = $\{30\%, 50\%, 70\%, 90\%\}$, $\alpha = \{500, 350\}$, and W_max = $5 \times 5$.
Step 2: Corrupt the original image (O_img) with a high level of salt-and-pepper noise to obtain a noised image (N_img).
Step 2a: If N.R < 60%:
Step 2b: Set $\alpha = 500$.
Step 2c: Otherwise, set $\alpha = 350$.
Step 3: Apply the adaptive median filter algorithm (A.M.F.A) for each noise level with W_max = $5 \times 5$.
Step 4: Detect the noisy pixels in the noised image (N_img).
Step 5: Use Formulas (53) and (55) to remove the noise from the corrupted pixels.
Step 6: Output: Repaired Image (R_img) ≈ (O_img)
Algorithm 4 CGCG-HZ Algorithm
Step 1: Inputs: Original image (O_img), Noise Ratio (N.R) = $\{30\%, 50\%, 70\%, 90\%\}$, $\alpha = \{500, 350\}$, and W_max = $5 \times 5$.
Step 2: Corrupt the original image (O_img) with a high level of salt-and-pepper noise to obtain a noised image (N_img).
Step 2a: If N.R < 60%:
Step 2b: Set $\alpha = 500$.
Step 2c: Otherwise, set $\alpha = 350$.
Step 3: Apply the adaptive median filter algorithm (A.M.F.A) for each noise level with W_max = $5 \times 5$.
Step 4: Detect the noisy pixels of the noised image (N_img).
Step 5: Use Formulas (53) and (56) to remove the noise from the corrupted pixels.
Step 6: Output: Repaired Image (R_img) ≈ (O_img)
Thus, to solve Problem (1), the next iterative form is utilized to generate a new candidate solution:
$x_{k+1} = x_k + \alpha_k d_k,$
where x k is the current point and α k is the step size in the search direction d k .
The new proposed CG approach is defined in the following paragraph.
To achieve the global convergence of a hybrid CG technique, one must choose the step length α k carefully. Additionally, the selection of a suitable search direction d k must be considered.
The purpose of this procedure is to guarantee that the following sufficient descent condition is satisfied:
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
for $C \ge 0$.
Consequently, the following fundamental property, known as the angle property, together with the other proposed parameters, is critical in developing the proposed method:
$\cos\theta_k = \dfrac{g_{k+1}^T d_k}{\|g_{k+1}\|\,\|d_k\|},$
where $\theta_k$ is the angle between $d_k$ and $g_{k+1}$.
Hence, we benefit by obtaining the value of θ k , which will be used to design a new mixed CG method containing four terms. The three terms that represent the core of our proposed method are inspired by the classical conjugate gradient parameters, which are defined in Formulas (4)–(8).
Therefore, we connect the three terms using the following parameter:
$\lambda_k = \dfrac{|g_{k+1}^T d_k|}{\|g_{k+1}\|\,\|d_k\|}.$
In addition, we define the following parameter to represent an option for the CG parameter.
$\chi_k = \dfrac{|\hat\tau \lambda_k - \tau|}{2^{\bar n}},$
where $\tau$ and $\hat\tau$ are fixed real numbers such that $0 < \hat\tau < \tau < 1$ and $\bar n \ge 2$ is an integer.
Then, the three CG parameters are defined as follows:
$\beta_k^{N1} = \max\{0, La_k\},$
where
$La_k = \dfrac{(y_k^T g_{k+1})(d_k^T y_k) - 2\|y_k\|^2 (d_k^T g_{k+1})}{(d_k^T y_k)^2},$
$\beta_k^{N2} = \max\left\{\dfrac{\chi_k \|g_{k+1}\|^2}{g_{k+1}^T d_k},\ Lb_k\right\},$
where
$Lb_k = \dfrac{(y_k^T g_{k+1})(d_k^T y_k) - \|y_k\|^2 (g_{k+1}^T d_k)}{r \|d_k\|^2 \|y_k\|^2},$
where $0.5 \le r < \infty$ is a fixed number,
$\beta_k^{N3} = \max\{\bar\chi_k, Lc_k\},$
where
$\bar\chi_k = \begin{cases} 0 & \text{if } g_{k+1}^T d_k < 0, \\ \chi_k & \text{otherwise}, \end{cases}$
$Lc_k = \dfrac{2\|y_k\|^2 (g_{k+1}^T d_k)\,\|s_{k+1}\|}{n(\Delta f + \varepsilon)(d_k^T y_k)^2},$
where $\Delta f = |f_{k+1} - f_k|$, $0 < \varepsilon < 1$, $s_{k+1} = x_{k+1} - x_k$ at iteration $k$, and $n$ indicates the number of variables of a test problem.
According to the above relations and formulas, the new search direction is defined as follows:
$d_{k+1} = -g_{k+1} + \beta_k^{N1} d_k + (1-\lambda_k)\beta_k^{N2} d_k - \lambda_k \beta_k^{N3} d_k, \quad d_0 = -g_0,$
Thus, the newly proposed CG approach includes four terms; we name this suggested method “Convex Group Conjugated Gradient”, which is abbreviated as (CGCG).
To render the CGCG method globally convergent, we use the following line search technique:
$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k,$
and
$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k,$
where $\delta \in (0, 0.5)$ and $\sigma \in (\delta, 1)$ are constants.
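Putting the pieces together, the Python sketch below assembles the CGCG direction from Formulas (21)–(30) and wraps it in a simple loop in the spirit of Algorithm 1. It is only an illustration: the constant values, the zero-denominator safeguard, and the use of SciPy's strong-Wolfe line search in place of the weak Wolfe conditions (31) and (32) are assumptions of this sketch, not choices reported in the paper.

```python
import numpy as np
from scipy.optimize import line_search   # strong-Wolfe search, a stand-in for (31)-(32)

def cgcg_direction(g_new, g_old, d_old, f_new, f_old, s_new, n,
                   tau_hat=0.1, tau=0.9, n_bar=2, r=1.0, eps=1e-3):
    """CGCG direction (30) built from lambda_k (21), chi_k (22), and beta^{N1..N3} (23)-(29).

    The parameter values are illustrative; the restart safeguard below is an
    addition of this sketch, not part of the paper.
    """
    y = g_new - g_old
    gd = float(g_new @ d_old)                       # g_{k+1}^T d_k
    dy = float(d_old @ y)                           # d_k^T y_k
    if abs(dy) < 1e-12 or abs(gd) < 1e-12:          # safeguard: restart with steepest descent
        return -g_new
    lam = abs(gd) / (np.linalg.norm(g_new) * np.linalg.norm(d_old))     # (21)
    chi = abs(tau_hat * lam - tau) / 2.0 ** n_bar                       # (22)

    La = ((y @ g_new) * dy - 2.0 * (y @ y) * gd) / dy ** 2              # (24)
    beta1 = max(0.0, La)                                                # (23)

    Lb = ((y @ g_new) * dy - (y @ y) * gd) / (r * (d_old @ d_old) * (y @ y))   # (26)
    beta2 = max(chi * (g_new @ g_new) / gd, Lb)                         # (25)

    chi_bar = 0.0 if gd < 0.0 else chi                                  # (28)
    Lc = 2.0 * (y @ y) * gd * np.linalg.norm(s_new) / (n * (abs(f_new - f_old) + eps) * dy ** 2)  # (29)
    beta3 = max(chi_bar, Lc)                                            # (27)

    return -g_new + beta1 * d_old + (1.0 - lam) * beta2 * d_old - lam * beta3 * d_old   # (30)

def cgcg(f, grad, x0, tol=1e-6, max_iter=3000):
    """A minimal driver in the spirit of Algorithm 1."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    fx = f(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, old_fval=fx)[0]
        if alpha is None:                     # line search failed: take a small fallback step
            alpha = 1e-4
        x_new = x + alpha * d
        g_new, f_new = grad(x_new), f(x_new)
        d = cgcg_direction(g_new, g, d, f_new, fx, x_new - x, x.size)
        x, g, fx = x_new, g_new, f_new
    return x
```

Here, `tau_hat`, `tau`, `n_bar`, `r`, and `eps` play the roles of $\hat\tau$, $\tau$, $\bar n$, $r$, and $\varepsilon$ above.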
According to the search direction (30) and the Wolfe conditions (31) and (32), we present the explicit steps of the (CGCG) technique, as follows.
Now, we present some numerical facts relating to the above parameters, allowing us to discuss the global convergence and descent properties of the CGCG method.
The purpose of the following remark is to facilitate the convergence analysis proof of the CGCG method.
Remark 1. 
When both sides of (30) are multiplied by $g_{k+1}^T$, we obtain the following:
$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_k^{N1}(g_{k+1}^T d_k) + (1-\lambda_k)\beta_k^{N2}(g_{k+1}^T d_k) - \lambda_k\beta_k^{N3}(g_{k+1}^T d_k).$
Then, the fourth term of (33) satisfies the following inequality for each iteration $k$:
$-\lambda_k \beta_k^{N3}(g_{k+1}^T d_k) \le 0.$
Since $0 \le \lambda_k \le 1$, and if the first branch of (28) is satisfied, then $\beta_k^{N3}(g_{k+1}^T d_k) = 0$; otherwise, $\beta_k^{N3}(g_{k+1}^T d_k) \ge 0$, since $\beta_k^{N3} \ge \chi_k$ and it is clear that $0 < \chi_k < \frac{1}{4}$.
The third term of (33) can be reformulated as follows:
$(1-\lambda_k)\beta_k^{N2}(g_{k+1}^T d_k) = \begin{cases} (1-\lambda_k)\chi_k\|g_{k+1}\|^2 & \text{if } \dfrac{\chi_k\|g_{k+1}\|^2}{g_{k+1}^T d_k} \ge Lb_k, \\[2mm] (1-\lambda_k)\dfrac{(y_k^T g_{k+1})(d_k^T y_k)(g_{k+1}^T d_k) - \|y_k\|^2 (g_{k+1}^T d_k)^2}{r\|d_k\|^2\|y_k\|^2} & \text{otherwise}, \end{cases}$
The second term of (33) can be reformulated as follows:
$\beta_k^{N1}(g_{k+1}^T d_k) = \begin{cases} 0 & \text{if } La_k \le 0, \\[2mm] \dfrac{(y_k^T g_{k+1})(d_k^T y_k)(g_{k+1}^T d_k) - 2\|y_k\|^2 (g_{k+1}^T d_k)^2}{(d_k^T y_k)^2} & \text{otherwise}, \end{cases}$
We set the second branch of Formula (35) as follows:
$M_u = (1-\lambda_k)\dfrac{(y_k^T g_{k+1})(d_k^T y_k)(g_{k+1}^T d_k) - \|y_k\|^2 (g_{k+1}^T d_k)^2}{r\|d_k\|^2\|y_k\|^2}.$
The inequality $u^T v \le \frac{1}{2}(\|u\|^2 + \|v\|^2)$ is applied to the first expression of the $M_u$ numerator, with $u = d_k (g_{k+1}^T y_k)$ and $v = y_k (g_{k+1}^T d_k)$.
Then,
$M_u \le (1-\lambda_k)\dfrac{\frac{1}{2}\left(\|d_k\|^2\|g_{k+1}\|^2\|y_k\|^2 + \|y_k\|^2(g_{k+1}^T d_k)^2\right) - \|y_k\|^2(g_{k+1}^T d_k)^2}{r\|d_k\|^2\|y_k\|^2} \le (1-\lambda_k)\dfrac{\|g_{k+1}\|^2}{2r};$
then,
$M_u \le (1-\lambda_k)\dfrac{\|g_{k+1}\|^2}{2r}.$
Similarly, the second branch of Formula (36) is rewritten as follows:
$M_{\hat u} = \dfrac{(y_k^T g_{k+1})(d_k^T y_k)(g_{k+1}^T d_k) - 2\|y_k\|^2(g_{k+1}^T d_k)^2}{(d_k^T y_k)^2} \le \dfrac{\frac{1}{2}\|g_{k+1}\|^2 (d_k^T y_k)^2 - \frac{3}{2}\|y_k\|^2(g_{k+1}^T d_k)^2}{(d_k^T y_k)^2} \le \dfrac{1}{2}\|g_{k+1}\|^2.$
Therefore,
$M_{\hat u} \le \dfrac{1}{2}\|g_{k+1}\|^2.$
Note: As mentioned above, the results listed in Formulas (34)–(38) of Remark 1 are applied in the convergence analysis of the suggested method presented in Section 2.1.
The convergence analysis and descent property of the CGCG technique are presented in the following section.

2.1. Convergence Analysis

The convergence analysis of Algorithm 1 is presented as follows. First, two useful hypotheses are listed as follows.
Hypothesis 1. 
Let a function f ( x ) be continuously differentiable.
Hypothesis 2. 
In some neighborhood $N$ of the level set $\mathcal{L} = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$, the gradient vector $g(x)$ of the function $f(x)$ is Lipschitz-continuous.
Therefore, there is a constant $0 < L < \infty$ that satisfies the following inequality: $\|g(x) - g(y)\| \le L\|x - y\|$, $\forall x, y \in N$.
The following lemma shows that the search direction d k + 1 defined by Formula (30) is a descent direction.
Lemma 1. 
Let $\{x_k\}$ be the sequence obtained using Algorithm 1. If $d_k^T y_k \ne 0$, then
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
where $0 \le C < 1$.
Proof. 
If $k = 0$, it follows from (30) that $g_1^T d_1 = -\|g_1\|^2 < 0$; for $k \ge 1$, we prove that (39) is true as follows.
According to (34) and Remark 1, the following cases exist that can be used to prove that Formula (39) is true.
Case I: When $\beta_k^{N1} = 0$ and $\beta_k^{N2} = Lb_k$, based on (37), Formula (33) gives
$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + (1-\lambda_k)Lb_k(g_{k+1}^T d_k) \le -\|g_{k+1}\|^2 + (1-\lambda_k)\dfrac{\|g_{k+1}\|^2}{2r} \le -\|g_{k+1}\|^2\left(1 - \dfrac{1}{2r}\right),$
and we set $C = 1 - \dfrac{1}{2r}$.
Then,
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
where $0 \le C < 1$.
Therefore, (39) is met.
Case II: When $\beta_k^{N1} = 0$ and $\beta_k^{N2} = \dfrac{\chi_k\|g_{k+1}\|^2}{g_{k+1}^T d_k}$, Formula (33) gives
$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + (1-\lambda_k)\chi_k\|g_{k+1}\|^2 \le -\|g_{k+1}\|^2(1 - \chi_k),$
and we set $C = 1 - \chi_k$, with $0 < \chi_k < \frac{1}{4}$. Then,
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
where $0 < C < 1$. Therefore, (39) is met.
Case III: When $\beta_k^{N1} = La_k$ and $\beta_k^{N2} = \dfrac{\chi_k\|g_{k+1}\|^2}{g_{k+1}^T d_k}$, based on (38), Formula (33) gives
$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + \dfrac{1}{2}\|g_{k+1}\|^2 + (1-\lambda_k)\chi_k\|g_{k+1}\|^2 \le -\|g_{k+1}\|^2\left(\dfrac{1}{2} - \chi_k\right),$
and we set $C = \dfrac{1}{2} - \chi_k$, where $0 < \chi_k < \frac{1}{4}$.
Then,
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
where $0 < C < 1$; thus, (39) is met.
Case IV: When $\beta_k^{N1} = La_k$ and $\beta_k^{N2} = Lb_k$, based on Formulas (37) and (38), Formula (33) gives
$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + \dfrac{1}{2}\|g_{k+1}\|^2 + \dfrac{1}{2r}(1-\lambda_k)\|g_{k+1}\|^2 \le -\|g_{k+1}\|^2\left(\dfrac{1}{2} - \dfrac{1}{2r}\right),$
and we set $C = \dfrac{1}{2} - \dfrac{1}{2r}$.
Then,
$g_{k+1}^T d_{k+1} \le -C\|g_{k+1}\|^2,$
where $0 \le C < 1$; thus, (39) is met.
Hence, the above four cases show that (39) is true. □
Below, some hypotheses and a useful lemma are presented, where the obtained result was essentially proven by the author of [60] and the author of [61,62].
Lemma 2. 
Let $x_0$ be a starting point that satisfies Hypothesis 1. Consider any algorithm of Form (18), such that $d_k$ satisfies (19) and $\alpha_k$ satisfies conditions (31) and (32).
Hence, the following inequality is met.
$\sum_{k=0}^{\infty} \dfrac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.$
Theorem 1. 
Suppose that Hypotheses 1 and 2 are met and that the following is satisfied: $\sum_{k=0}^{\infty} \dfrac{1}{\|d_k\|^2} = +\infty$. Then, the sequence $\{g_{k+1}\}$ generated using the CGCG method satisfies the following result:
$\liminf_{k\to\infty} \|g_{k+1}\| = 0.$
Proof. 
Proof by contradiction: assume that (45) does not hold. Hence, for some $\xi > 0$, the following holds:
$\|g_{k+1}\| \ge \xi.$
According to (46) and (19), the following result is obtained:
$-g_{k+1}^T d_{k+1} \ge C\|g_{k+1}\|^2 \ge C\xi^2,$
and then
$\dfrac{-(g_{k+1}^T d_{k+1})}{\|d_{k+1}\|} \ge \dfrac{C\xi^2}{\|d_{k+1}\|};$
$\dfrac{(g_{k+1}^T d_{k+1})^2}{\|d_{k+1}\|^2} \ge \dfrac{C^2\xi^4}{\|d_{k+1}\|^2}.$
By summing the final expression, we obtain
$\sum_{k=0}^{\infty} \dfrac{(g_{k+1}^T d_{k+1})^2}{\|d_{k+1}\|^2} \ge \sum_{k=0}^{\infty} \dfrac{C^2\xi^4}{\|d_{k+1}\|^2} = \infty.$
Since (48) contradicts (44), (45) is true as $k \to \infty$. □

Computational Cost Analysis of CGCG Method

In general, any modified version of the conjugate gradient method has low memory requirements [1] and converges much faster than the steepest descent method [19,63].
Since quasi-Newton methods need to keep an $n \times n$ matrix $H_k$ (the inverse of the approximate Hessian $B_k$) or $L_k$ (the Cholesky factor of $B_k$) in a computer's memory, a quasi-Newton method needs $O(n^2)$ data units [64].
On the other hand, the CGCG method only requires the values of the gradient vector; therefore, the CGCG method has an $O(n)$ memory requirement, i.e., $O(n)$ complexity per iteration.

3. Numerical Investigations

Numerical investigations are carried out to test the effectiveness and robustness of the CGCG approach by solving a set of unconstrained optimization problems.
Therefore, the performance of Algorithm 1 (CGCG) is tested by solving 76 benchmark test problems with dimensions varying between 100 and 800,000. These test problems were taken from [65,66].
These test problems have recently been used by many authors to test the efficiency and effectiveness of their proposed algorithms (see, for example, [25,58]).
These test problems were run on a personal computer (a laptop) with an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 4.00 GB of RAM using the Windows 10 operating system.
The CGCG method listed in Algorithm 1, whose search direction contains four terms, is also used later to derive a new family of CG methods.
The performance of the CGCG method is examined in this section. Therefore, 76 test problems are used to complete this task. In addition, five classical CG methods are used to solve these 76 test problems. These five classical CG methods are the HS-CG method defined by (4), the FR-CG method defined by (5), the LS-CG method defined by (7), the DY-CG method defined by (8), and the HZ-CG method defined by (9).
The numerical results of all six CG methods are documented in Table 1, Table 2 and Table 3.
Next, the performance of all six algorithms is compared. The six methods are programmed in the MATLAB language (version 8.5.0.197613 (R2015a)).
The numerical comparisons present the advantages and disadvantages of each of the six algorithms. Therefore, three criteria are used to evaluate the performance of all six algorithms. These three criteria are “Itr”, denoting the number of iterations; “FEs”, denoting the number of function evaluations; and “CPUT”, denoting the CPU time.
We use the following termination form when running the six algorithms
$\varepsilon = 10.00 \times 10^{-6},$
or Itr > 3000 , i.e., if the number of iterations exceeds 3000.
Accordingly, when $\|g(x_k)\|_2 \le \varepsilon$ or Itr > 3000, the algorithm stops. The standard for determining the success of the algorithm in solving a test problem is as follows. If Itr $\le$ 3000 and $\|g(x_k)\|_2 \le \varepsilon$, the algorithm has succeeded in solving the problem; otherwise, it has failed to solve the problem, which is denoted by “F”, as shown in Table 1, Table 2 and Table 3.
Moreover, to display the test results clearly, we use a performance profile tool developed by Dolan and Moré [67]. More details about the performance profile and how it is used can be found in [67,68,69,70].
The significant feature of the performance profile is that it presents all the results listed in one table in a single figure by plotting a cumulative distribution function $\rho_s(\tau)$ for the compared algorithms.
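As an illustration of how such profiles can be produced (a sketch under the usual Dolan–Moré definition, with made-up data; this is not the script used to generate the figures):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile rho_s(tau).

    T is an (n_problems x n_solvers) array of a cost measure (Itr, FEs, or CPU
    time), with failures encoded as np.inf; it is assumed that at least one
    solver succeeds on every problem.  The result has one row per tau value and
    one column per solver, giving the fraction of problems solved within a
    factor tau of the best solver.
    """
    T = np.asarray(T, dtype=float)
    ratios = T / T.min(axis=1, keepdims=True)        # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical usage with three solvers on four problems (the numbers are made up):
# T = np.array([[12, 15, 40], [7, 7, 9], [100, 80, np.inf], [5, 20, 6]])
# rho = performance_profile(T, taus=np.arange(1, 11))
```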
The numerical outcomes recorded in Table 1, Table 2 and Table 3 and all three graphs, which are depicted in Figure 1 and Figure 2, display the performance of all six algorithms.
The results in Table 1 present the number of iterations (Itr) for all six CG parameters. It is clear that the CGCG algorithm (Algorithm 1) was capable of solving all the test problems (76 benchmark test problems).
Hence, the CGCG method satisfies the following success criterion: Itr < 3000 and $\|g(x_k)\|_2 \le \varepsilon$, for all test problems. The second-best performance was exhibited by the HZ method.
According to the graph on the left of Figure 1 with τ = 10 , the success rates of the six algorithms are ordered as follows: 100 % , 97 % , 68 % , 89 % , 54 % , and 74 % , for CGCG, HZ, DY, LS, FR, and HS, respectively.
The same observation holds for the remaining Table 2 and Table 3, which report the FEs and the CPU time, respectively.
The performance of the six algorithms examined via performance profiles is shown in Figure 1 and Figure 2. By utilizing the performance profile methodology, we generated three performance profiles, which are shown in Figure 1 and Figure 2.
These two figures are based on the results listed in Table 1, Table 2 and Table 3 for the Itr, FEs, and CPU time for all 76 test problems, respectively.
It is clear from the two figures that the CGCG algorithm has the characteristics of efficiency, reliability, and effectiveness in solving the 76 test problems compared to the other five methods.
The performance of the six algorithms can be summarized by referencing Figure 1 and Figure 2, as follows. The performance of the six methods with respect to the Itr criterion is shown as follows.
At τ = 1 , the success rates of the 6 algorithms are arranged as follows: 75 % , 41 % , 34 % , 26 % , 14 % , and 13 % for the LS, CGCG, HZ, HS, FR, and DY methods, respectively, corresponding to the left graph in Figure 1.
At τ = 2 , the success rates of the six algorithms are ordered as follows: 92 % , 83 % , 83 % , 58 % , 49 % , and 28 % for the CGCG, HZ, LS, HS, DY, and FR, respectively, as shown in the left graph in Figure 1.
At τ = 3 , the success rates of the six algorithms are ordered as follows: 96 % , 89 % , 84 % , 64 % , 58 % and 39 % for the CGCG, HZ, LS, HS, DY, and FR, respectively, as shown in the left graph in Figure 1.
At τ = 4 , the success rates of the six algorithms are ordered as follows: 97 % , 93 % , 86 % , 68 % , 61 % , and 45 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, shown in the left graph in Figure 1.
At τ = 5 , the success rates of the six algorithms are ordered as follows: 99 % , 97 % , 87 % , 70 % , 62 % , and 47 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph of Figure 1.
At τ = 6 , the success rates of the six algorithms are ordered as follows: 99 % , 97 % , 88 % , 71 % , 66 % and 47 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph in Figure 1.
At τ = 7 , the success rates of the six algorithms are ordered as follows: 99 % , 97 % , 88 % , 72 % , 67 % , and 50 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph in Figure 1.
At τ = 8 , the success rates of the six algorithms are ordered as follows: 99 % , 97 % , 88 % , 74 % , 67 % , and 51 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph in Figure 1.
At τ = 9 , the success rates of the six algorithms are ordered as follows: 100 % , 97 % , 89 % , 74 % , 68 % , and 53 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph in Figure 1.
At τ = 10 , the success rates of the six algorithms are ordered as follows: 100 % , 97 % , 89 % , 74 % , 68 % , and 54 % for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the left graph in Figure 1.
The performance of the six methods with respect to the FEs criterion is shown as follows.
At τ = 1 , the success rates of the six algorithms are ordered as follows: 36 % , 30 % , 28 % , 25 % , 8 % , and 8 % , for the CGCG, HS, HZ, DY, LS, and FR, methods, respectively, as shown in the right graph of Figure 1.
At τ = 2 , the success rates of the six algorithms are ordered as follows: 84 % , 79 % , 55 % , 54 % , 33 % , and 20 % , for the CGCG, HZ, DY, HS, FR, and LS methods, respectively, in the right-hand graph in Figure 1.
At τ = 3, the success rates of the six algorithms are ordered as follows: 92%, 92%, 64%, 61%, 39%, and 38% for the CGCG, HZ, HS, DY, FR, and LS methods, respectively, as shown in the right graph of Figure 1.
At τ = 4, the success rates of the six algorithms are ordered as follows: 99%, 96%, 66%, 63%, 61%, and 45% for the CGCG, HZ, HS, DY, LS, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 5, the success rates of the six algorithms are ordered as follows: 99%, 96%, 68%, 67%, 64%, and 45% for the CGCG, HZ, HS, LS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 6, the success rates of the six algorithms are ordered as follows: 99%, 97%, 74%, 70%, 64%, and 47% for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 7, the success rates of the six algorithms are ordered as follows: 99%, 97%, 76%, 70%, 66%, and 51% for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 8, the success rates of the six algorithms are ordered as follows: 100%, 97%, 82%, 72%, 66%, and 53% for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 9, the success rates of the six algorithms are ordered as follows: 100%, 97%, 83%, 72%, 66%, and 53% for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
At τ = 10, the success rates of the six algorithms are ordered as follows: 100%, 97%, 83%, 72%, 67%, and 53% for the CGCG, HZ, LS, HS, DY, and FR methods, respectively, as shown in the right graph of Figure 1.
In addition, for Figure 2 and at τ = 10 , the success rates of the six algorithms are ordered as follows 100 % , 97 % , 68 % , 84 % , 55 % and 72 % for the CGCG, HZ, DY, LS, FR, and HS methods, respectively.
In general, all the graphs show that the curve of the function ρ s ( τ ) is almost maximal for the CGCG method. Therefore, the comparison results indicate that the CGCG approach is competitive with, and in all cases superior to, the five other CG methods in terms of efficiency, reliability, and effectiveness with regard to solving the set of test problems.

4. Image Processing

This section constitutes the second part of our numerical experiments. This section aims to render the CGCG algorithm capable of dealing with image-processing problems.
Images are often corrupted by various sources (factors) that may be responsible for the introduction of an artifact in a photo.
Therefore, the number of degraded pixels in a photo corresponds to the amount of noise that has been introduced into the image.
The main sources of noise in a digital image are as follows [71,72]:
  • Some environmental conditions may impact the efficiency of an imaging sensor during image acquisition.
  • Noise may be introduced into an image through an inappropriate sensor temperature and low light levels.
  • In addition, an image can be deteriorated (corrupted) due to interference in the transmission channel.
  • Similarly, the image may be deteriorated (corrupted) due to dust particles that may exist on the scanner screen.
Images are often deteriorated by impulse noise due to noisy sensors or transmission channels, which leads to the corruption of a number of pixels in the picture.
Impulse noise is one of the most common noise models, in which only a portion of the pixels is degraded, i.e., the information in those pixels is entirely lost.
Generally, different photos in several applications require treatment with a fine noise-filtering technique to obtain credible results and thus restore the original picture.
To restore an original photo that has been deteriorated by impulse noise, a two-phase scheme is used, in which the first phase entails determining the noise-affected pixels in the corrupted photo; this is executed using the adaptive median filter algorithm [73,74,75,76,77].
In the first stage, the adaptive median filter algorithm determines the noise in the corrupted image as follows. The filter compares each pixel in the distorted image to the surrounding pixels. If a pixel value varies significantly from the majority of the surrounding pixels, the pixel is treated as noise. More details about the adaptive median filter algorithm and salt-and-pepper noise removal by median-type filters are available in [73,74,75,76,78,79].
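A simplified sketch of this first (detection) phase is given below. It only mimics the behaviour described above for salt-and-pepper noise (extreme values 0 and 255) and is not the exact algorithm of [76]; the window-growing rule and the bounds are illustrative assumptions of this sketch.

```python
import numpy as np

def detect_salt_pepper(img, w_max=39):
    """Simplified adaptive-median-filter detector for salt-and-pepper noise.

    img is a 2-D uint8 array.  For each pixel, the window grows from 3x3 up to
    w_max x w_max until the window median is not an extreme value (0 or 255);
    the pixel is flagged as noisy if its own value is an extreme value that
    differs from that median.
    """
    img = np.asarray(img)
    rows, cols = img.shape
    noisy = np.zeros_like(img, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            if img[i, j] not in (0, 255):     # only extreme values can be salt or pepper
                continue
            w = 3
            while w <= w_max:
                h = w // 2
                window = img[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
                med = np.median(window)
                if 0 < med < 255:             # median is a plausible clean value
                    noisy[i, j] = img[i, j] != med
                    break
                w += 2                        # grow the window and try again
            else:
                noisy[i, j] = True            # window exhausted: treat as noise
    return noisy
```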
The second phase of the scheme involves restoring the original image by using any algorithm that solves optimization problems.
Chan et al. [73] have applied the two-phase scheme to restore a corrupted image.
Many authors have used these two phases to render CG algorithms capable of restoring images corrupted by impulse noise (see, for example, [25,42,43]).
The two-phase scheme can be briefly described as follows.
By applying the first stage, we use the adaptive median filter to select the corrupted pixels [73].
In the second phase, assume that the corrupted photo, denoted by $\delta$, has a size of $\tau \times \varrho$, and let $\delta_I = \{1, 2, \ldots, \tau\} \times \{1, 2, \ldots, \varrho\}$ be the index set of the photo $\delta$.
The set $\mathcal{N} \subseteq \delta_I$ denotes the set of indices of the noisy pixels detected in the first phase, and $|\mathcal{N}|$ is the number of elements of $\mathcal{N}$.
Let $V_{i,j}$ be the set of the four closest neighbors of the pixel at location $(i,j) \in \delta_I$, i.e., let $V_{i,j} = \{(i, j-1), (i, j+1), (i-1, j), (i+1, j)\}$, and let $y_{i,j}$ be the observed pixel value (gray level) of the photo at location $(i,j)$.
Therefore, in the second stage, the noise from the corrupted pixels is removed by solving the following non-smooth problem:
$\min_{u} \sum_{(i,j)\in\mathcal{N}} \left[\, |u_{i,j} - y_{i,j}| + \dfrac{\beta}{2}\left(S_{i,j} + \tilde S_{i,j}\right) \right],$
where $S_{i,j} = \sum_{(n,m)\in V_{i,j}\setminus\mathcal{N}} \varphi_\alpha(u_{i,j} - y_{n,m})$, $\tilde S_{i,j} = \sum_{(n,m)\in V_{i,j}} \varphi_\alpha(u_{i,j} - u_{n,m})$, and $\varphi_\alpha$ is an edge-preserving potential function.
Some of these functions were defined in [73], as follows.
$\varphi_\alpha(t) = \begin{cases} |t|^{\alpha} & \text{for } 1 < \alpha \le 2, \\ \sqrt{\alpha + t^2} & \text{for } \alpha > 0. \end{cases}$
In this paper, we use the second branch of (51), employing different values of $\alpha$, and $u = [u_{i,j}]_{(i,j)\in\mathcal{N}}$ is a column vector of length $|\mathcal{N}|$ ordered lexicographically.
However, it is time-consuming and costly to determine the minimizer of the non-smooth problem (50) exactly.
The authors of [80] removed the non-smooth term and presented the following smooth unconstrained optimization problem:
$\min_{u} F_\alpha(u) := \sum_{(i,j)\in\mathcal{N}} \left[\, 2\sum_{(m,n)\in V_{i,j}\setminus\mathcal{N}} \varphi_\alpha(u_{i,j} - y_{m,n}) + \sum_{(m,n)\in V_{i,j}\cap\mathcal{N}} \varphi_\alpha(u_{i,j} - u_{m,n}) \right].$
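To make the second phase concrete, the sketch below builds $F_\alpha$ and its gradient for the reconstruction of (52) given above, using the second branch of (51), $\varphi_\alpha(t) = \sqrt{\alpha + t^2}$. It is a sketch under that reading of (52), not the authors' implementation; any CG routine (for example, the CGCG sketch in Section 2) can then be applied to the returned pair of functions.

```python
import numpy as np

def phi(t, alpha):            # edge-preserving potential, second branch of (51)
    return np.sqrt(alpha + t ** 2)

def phi_prime(t, alpha):
    return t / np.sqrt(alpha + t ** 2)

def make_objective(y, noisy, alpha):
    """Build F_alpha(u) and its gradient for the smooth restoration model (52).

    y     : 2-D array of observed pixel values,
    noisy : boolean mask of noise-candidate pixels from the first phase,
    u     : 1-D vector of unknowns, one per noisy pixel (lexicographic order).
    """
    idx = np.argwhere(noisy)                          # noisy pixel coordinates
    pos = {tuple(p): k for k, p in enumerate(idx)}    # (i, j) -> position in u
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rows, cols = y.shape

    def split_neighbours(i, j):
        clean, noise = [], []
        for di, dj in nbrs:
            m, n = i + di, j + dj
            if 0 <= m < rows and 0 <= n < cols:
                (noise if noisy[m, n] else clean).append((m, n))
        return clean, noise

    def F(u):
        val = 0.0
        for k, (i, j) in enumerate(idx):
            clean, noise = split_neighbours(i, j)
            val += 2.0 * sum(phi(u[k] - y[m, n], alpha) for m, n in clean)
            val += sum(phi(u[k] - u[pos[(m, n)]], alpha) for m, n in noise)
        return val

    def gradF(u):
        g = np.zeros_like(u)
        for k, (i, j) in enumerate(idx):
            clean, noise = split_neighbours(i, j)
            g[k] = 2.0 * sum(phi_prime(u[k] - y[m, n], alpha) for m, n in clean)
            # noisy-noisy pairs appear twice in F (once from each endpoint)
            g[k] += 2.0 * sum(phi_prime(u[k] - u[pos[(m, n)]], alpha) for m, n in noise)
        return g

    return F, gradF
```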
Clearly, the greater the noise ratio, the larger the size of Problem (52).
Cai et al. [80] revealed that degraded pictures may be repaired efficiently by utilizing CG parameters to find the minimizer of Problem (52), even when the noise ratio is high, reaching up to 90%.
Newly proposed algorithms of CG parameters for solving this problem (image restoration) can be found in [25,42,81].
Now, we concentrate on using the two-stage procedure to remove salt-and-pepper noise, which represents a particular case of impulse noise.
The adaptive median filter approach described by the authors of [76] is used in the first stage to detect noisy pixels.
Therefore, the CGCG method is modified to obtain three CGCG algorithms, which are adapted to solve Problem (52); this constitutes the second stage.
The following iterative form is used to generate the candidate solutions of Problem (52).
$u_{k+1} = u_k + \alpha_k d_k,$
where α k represents the step size computed by a line search method, and d k is the search direction, which is defined as follows.
$d_{k+1} = -g_{k+1} + \beta_k d_k + (1-\lambda_k)\beta_k^{N2} d_k - \lambda_k \beta_k^{N3} d_k,$
where $\beta_k \in \{\beta_k^{HS}, \beta_k^{HZ*}\}$.
This combination allows for the acquisition of integrated algorithms that inherit the features of the parameters defined in Formulas (4), (9), (25), and (27).
Therefore, the two Formulas (53) and (54) can be run iteratively through one of the following cases.
Case 1: If $\beta_k = \beta_k^{HS}$, then the CGCG-HS algorithm is used.
Case 2: If $\beta_k = \beta_k^{HZ*}$, then the CGCG-HZ algorithm is used.
The selected size of the filter window for the adaptive median filter method plays a key role in the phase that uncovers the noisy pixels in the corrupted image.
Chan et al. [73] presented several sizes of the filter window (W) according to the noise level, as follows:
When the noise level is less than 25%, the maximum window size is $W_{max} \times W_{max} = 5 \times 5$. When the noise level is between 25% and 40%, $W_{max} \times W_{max} = 7 \times 7$; if the noise level is between 40% and 60%, $W_{max} \times W_{max} = 9 \times 9$; if the noise level is between 60% and 70%, $W_{max} \times W_{max} = 13 \times 13$; if the noise level is between 70% and 80%, $W_{max} \times W_{max} = 17 \times 17$; if the noise level is between 80% and 85%, $W_{max} \times W_{max} = 25 \times 25$; and if the noise level is between 85% and 90%, $W_{max} \times W_{max} = 39 \times 39$.
In addition, the value of α in Problem (52) may be significant for finding the minimizer of Problem (52). Cai et al. [80] set α = 100 .
Accordingly, we suggest two different settings of the filter window (W) and the parameter $\alpha$, as follows.
Procedure A: We set $W_{max} \times W_{max} = 5 \times 5$, and if the noise ratio is less than or equal to 60%, then we set $\alpha = 500$; otherwise, we set $\alpha = 350$.
Consequently, by using Procedure A together with Case 1 and Case 2, we design two algorithms for solving Problem (52). These two algorithms are the CGCG-HS1 algorithm and the CGCG-HZ algorithm; the search directions of the two algorithms are defined by
$d_{k+1} = -g_{k+1} + \beta_k^{HS} d_k + (1-\lambda_k)\beta_k^{N2} d_k - \lambda_k \beta_k^{N3} d_k,$
and
$d_{k+1} = -g_{k+1} + \beta_k^{HZ*} d_k + (1-\lambda_k)\beta_k^{N2} d_k - \lambda_k \beta_k^{N3} d_k,$
respectively.
Procedure B: If the noise ratio is equal to 30%, we set $W_{max} \times W_{max} = 3 \times 3$; if the noise ratio equals 50%, we set $W_{max} \times W_{max} = 5 \times 5$; if the noise ratio is equal to 70%, we set $W_{max} \times W_{max} = 7 \times 7$; and if the noise ratio is equal to 90%, we set $W_{max} \times W_{max} = 9 \times 9$. Additionally, we set $\alpha = 100$ for any noise level.
According to Procedure B, we used Formula (55) to solve all image problems; this algorithm is abbreviated as CGCG-HS2.
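The two procedures reduce to a simple parameter-selection rule; a sketch is given below (it follows Procedure A's "less than or equal to 60%" threshold and assumes the noise ratio is passed as a fraction, both of which are conventions of this sketch):

```python
def procedure_a(noise_ratio):
    """Procedure A: fixed 5x5 window; alpha depends on the noise ratio (a fraction in [0, 1])."""
    w_max = 5
    alpha = 500 if noise_ratio <= 0.60 else 350
    return w_max, alpha

def procedure_b(noise_ratio):
    """Procedure B: fixed alpha = 100; the window size grows with the noise ratio."""
    w_max = {0.30: 3, 0.50: 5, 0.70: 7, 0.90: 9}[noise_ratio]
    return w_max, 100
```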
We assess the performance of these three incorporated algorithms in solving image restoration problems by utilizing the peak signal-to-noise ratio [82].
The peak signal-to-noise ratio (PSNR) is defined as follows:
$PSNR = 10\log_{10}\dfrac{255^2}{\frac{1}{NM}\sum_{(i,j)}\left(x_{(i,j)}^{res} - x_{(i,j)}^{ori}\right)^2},$
where $x_{(i,j)}^{res}$ and $x_{(i,j)}^{ori}$ are the pixel values of the restored picture and the original one, respectively.
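In code, the PSNR of a restored image against the original can be computed as follows (a straightforward sketch for 8-bit grayscale images):

```python
import numpy as np

def psnr(restored, original):
    """Peak signal-to-noise ratio of a restored image against the original (8-bit images)."""
    restored = np.asarray(restored, dtype=float)
    original = np.asarray(original, dtype=float)
    mse = np.mean((restored - original) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```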
The examined pictures are Lena ( 512 × 512 ), Hill ( 512 × 512 ), Man ( 512 × 512 ), and Boat ( 512 × 512 ).
The stopping criterion used is defined by the following conditions.
The algorithm stops if one of the two conditions is met: Itr > 200 or
$\dfrac{|F_\alpha(u_k) - F_\alpha(u_{k-1})|}{F_\alpha(u_k)} \le 10^{-4}.$
The above procedures can be summarized as the following algorithms.
For further clarification of the working mechanism of the three proposed algorithms listed in Algorithms 2–4, the steps applied are shown in Figure 3.
In addition, Figure 4 depicts a graphical abstract of the operational scheme of the three proposed algorithms with real examples.

5. Numerical Results

In this section, we use the same operating environment that is used in Section 3.
Therefore, three criteria are used in this section; these criteria are the Itr, the CPU time (Tcpu), and the PSNR values for the restored images listed in Table 4.
In addition, the graphs of the original, noisy, and repaired pictures are shown in Figure 5, Figure 6, Figure 7 and Figure 8.
The noise levels of the salt-and-pepper noise are as follows: 30 % , 50 % , 70 % , and 90 % .
The fifth row of Table 4 offers the totals of the three standards, i.e., Itr, Tcpu, and PSNR, for the repaired pictures. These pictures were processed by three algorithms: CGCG-HS1, CGCG-HZ, and CGCG-HS2.
These three criteria show that the CGCG-HS1 algorithm has achieved the best results for the PSNR compared to the CGCG-HS2 and CGCG-HZ algorithms.
With respect to time (Tcpu) and the number of iterations (Itr), the CGCG-HZ algorithm is the best, as it restores all corrupted images at all noise levels within 441.33 s and 288 iterations, versus 525.62 s and 324 iterations for the CGCG-HS1 algorithm and 525.738 s and 332 iterations for the CGCG-HS2 method. However, the CGCG-HS1 algorithm is the best in terms of PSNR, as its total PSNR is 479.58 against 479.02 and 479.35 for CGCG-HZ and CGCG-HS2, respectively.
In general, the performance of the CGCG-HS1 and CGCG-HZ algorithms is better than that of the CGCG-HS2 algorithm. This means that the value of the parameter $\alpha$ plays a key role in minimizing the objective function $F_\alpha(u)$ in (52); additionally, the size of the filter window in the adaptive median filter method is very significant when scanning a deteriorated image. The outstanding clarity of the restored images proves that the new family of modified hybrid CG methods can be used to solve many problems in different applications.

6. Conclusions and Future Work

A novel four-term CG parameter has been designed and tested in this study. The newly proposed algorithm is abbreviated as CGCG, which stands for “Convex Group Conjugated Gradient”.
The CGCG algorithm solved an unconstrained optimization problem.
The convergence analysis proof of the CGCG approach has been shown. Through solving a set of 76 benchmark test problems using the six algorithms, the numerical results indicate that the CGCG algorithm outperforms the five classical CG algorithms. The CGCG algorithm has been adapted into a new family of modified CG methods used to deal with image restoration problems. The adaptive median filter method was first applied to detect the noise in a corrupted image. According to the noise level in the corrupted image, we changed the size of the filter window to improve the performance of this proposed family of CG methods. Four well-known images (test problems) were corrupted with salt-and-pepper noise (at levels from 30% to 90%) to examine the performance of the new family of modified hybrid CG methods.
The superior clarity of the restored images indicates that the new family of modified hybrid CG methods has efficiency, reliability, and effectiveness in dealing with image restoration problems.
Therefore, this new family of modified hybrid CG methods can be adapted to deal with other problems in different applications. In future research, it will be valuable to combine the CGCG method with a meta-heuristic technique in order to possess the features of both techniques. Through this hybridization, we will obtain a hybrid CG meta-heuristic algorithm for dealing with global optimization problems, including unconstrained, constrained, and multi-objective problems.
It is also useful to develop the CGCG approach to deal with a nonlinear system of monotone equations.
Furthermore, the CGCG approach could be modified to deal with symmetric systems of nonlinear equations, image restoration, and the training of deep neural networks.

7. List of Test Problems

These test problems were taken from [65,66].
COSINE function (CUTE): Pr = 1–3
$f(x) = \sum_{i=1}^{n-1}\cos(x_i^2 - 0.5x_{i+1}), \quad x_0 = [1, \ldots, 1].$
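As an example of how these test problems are supplied to a solver, the following sketch implements the COSINE function (as reconstructed above) and its gradient; the CGCG sketch in Section 2 can then be called with these two functions and a starting point of all ones.

```python
import numpy as np

def cosine_f(x):
    """COSINE test function (Pr. 1-3): f(x) = sum_{i=1}^{n-1} cos(x_i^2 - 0.5 x_{i+1})."""
    t = x[:-1] ** 2 - 0.5 * x[1:]
    return np.sum(np.cos(t))

def cosine_grad(x):
    g = np.zeros_like(x)
    t = x[:-1] ** 2 - 0.5 * x[1:]
    s = np.sin(t)
    g[:-1] += -2.0 * x[:-1] * s      # derivative through x_i^2
    g[1:]  += 0.5 * s                # derivative through -0.5 x_{i+1}
    return g

# Example: x0 = np.ones(1000); x_star = cgcg(cosine_f, cosine_grad, x0)
```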
DIXMAAN function (CUTE): Pr = 4–26
$f(x) = 1 + \sum_{i=1}^{n}\alpha x_i^2\left(\tfrac{i}{n}\right)^{k_1} + \sum_{i=1}^{n-1}\beta x_i^2(x_{i+1}+x_{i+1}^2)\left(\tfrac{i}{n}\right)^{k_2} + \sum_{i=1}^{2m}\gamma x_i^2 x_{i+m}^4\left(\tfrac{i}{n}\right)^{k_3} + \sum_{i=1}^{m}\delta x_i x_{i+2m}\left(\tfrac{i}{n}\right)^{k_4}, \quad m = n/3,$
     α    β        γ        δ        k1   k2   k3   k4
A    1    0        0.125    0.125    0    0    0    0
B    1    0.0625   0.0625   0.0625   0    0    0    1
C    1    0.125    0.125    0.125    0    0    0    0
D    1    0.26     0.26     0.26     0    0    0    0
E    1    0        0.125    0.125    1    0    0    1
F    1    0.0625   0.0625   0.0625   1    0    0    1
G    1    0.125    0.125    0.125    1    0    0    1
H    1    0.26     0.26     0.26     1    0    0    1
I    1    0        0.125    0.125    2    0    0    2
J    1    0.0625   0.0625   0.0625   2    0    0    2
K    1    0.125    0.125    0.125    2    0    0    2
L    1    0.26     0.26     0.26     2    0    0    2
where $x_0 = [2, \ldots, 2]$.
DIXON3DQ function (CUTE): Pr = 27
$f(x) = (x_1 - 1)^2 + \sum_{i=1}^{n-1}(x_i - x_{i+1})^2 + (x_n - 1)^2, \quad x_0 = [-1, \ldots, -1].$
DQDRTIC function (CUTE): Pr = 28–31
$f(x) = \sum_{i=1}^{n-2}(x_i^2 + c\,x_{i+1}^2 + d\,x_{i+2}^2), \quad c = 100,\ d = 100,\ x_0 = [3, \ldots, 3].$
EDENSCH function (CUTE): Pr = 32–34
$f(x) = 16 + \sum_{i=1}^{n-1}\left[(x_i - 2)^4 + (x_i x_{i+1} - 2x_{i+1})^2 + (x_{i+1} + 1)^2\right], \quad x_0 = [0, \ldots, 0].$
EG2 function (CUTE): Pr = 35
$f(x) = \sum_{i=1}^{n-1}\sin(x_1 + x_i^2 - 1) + 0.5\sin(x_n^2), \quad x_0 = [1, \ldots, 1].$
FLETCHCR function (CUTE): Pr = 36–38
$f(x) = \sum_{i=1}^{n-1} c\,(x_{i+1} - x_i + 1 - x_i^2)^2, \quad c = 100,\ x_0 = [0, \ldots, 0].$
HIMMELBG function (CUTE): Pr = 39–40
$f(x) = \sum_{i=1}^{n/2}(2x_{2i-1}^2 + 3x_{2i}^2)\exp(-x_{2i-1} - x_{2i}), \quad x_0 = [1.5, \ldots, 1.5].$
LIARWHD function (CUTE): Pr = 41–42
$f(x) = \sum_{i=1}^{n} 4(-x_1 + x_i^2)^2 + \sum_{i=1}^{n}(x_i - 1)^2, \quad x_0 = [4, \ldots, 4].$
Extended Penalty function: Pr = 43–44
$f(x) = \sum_{i=1}^{n-1}(x_i - 1)^2 + \left(\sum_{i=1}^{n} x_i^2 - 0.25\right)^2, \quad x_0 = [1, 2, \ldots, n].$
QUARTC function (CUTE): Pr = 45–47
$f(x) = \sum_{i=1}^{n}(x_i - 1)^4, \quad x_0 = [2, \ldots, 2].$
TRIDIA function (CUTE): Pr = 48–49
$f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i\,(2x_i - x_{i-1})^2, \quad x_0 = [1, \ldots, 1].$
Extended Wood function: Pr = 50
$f(x) = \sum_{i=1}^{n/4}\left[100(x_{4i-2} - x_{4i-3}^2)^2 + (1 - x_{4i-3})^2 + 90(x_{4i} - x_{4i-1}^2)^2 + (1 - x_{4i-1})^2 + 10(x_{4i-2} + x_{4i} - 2)^2 + 0.1(x_{4i-2} - x_{4i})^2\right],$
$x_0 = [-3, -1, -3, -1, \ldots, -3, -1].$
BDEXP function (CUTE): Pr = 51–53
$f(x) = \sum_{i=1}^{n-2}(x_i + x_{i+1})\exp(-x_{i+2}(x_i + x_{i+1})), \quad x_0 = [1, \ldots, 1].$
BIGGSB 1 function (CUTE): Pr = 54
$f(x) = (x_1 - 1)^2 + \sum_{i=1}^{n-1}(x_{i+1} - x_i)^2 + (1 - x_n)^2, \quad x_0 = [0, \ldots, 0].$
SINE function: Pr = 55–57
$f(x) = \sum_{i=1}^{n-1}\sin(x_i^2 - 0.5x_{i+1}), \quad x_0 = [1, \ldots, 1].$
FLETCBV3 function (CUTE): Pr = 58
$f(x) = \dfrac{p}{2}(x_1^2 + x_n^2) + \sum_{i=1}^{n-1}\dfrac{p}{2}(x_i - x_{i+1})^2 - \sum_{i=1}^{n}\left(\dfrac{p(h^2 + 2)}{h^2}x_i + \dfrac{c\,p}{h^2}\cos(x_i)\right),$
where $p = 1/10^8$, $h = 1/(1+n)$, $c = 1$, and $x_0 = [h, 2h, \ldots, nh]$.
NONSCOMP function (CUTE): Pr = 59–60
$f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} 4(x_i - x_{i-1}^2)^2, \quad x_0 = [3, 3, \ldots, 3].$
POWER function (CUTE): Pr = 61
$f(x) = \sum_{i=1}^{n}(i\,x_i)^2, \quad x_0 = [1, 1, \ldots, 1].$
Raydan 1 function: Pr = 62–63
$f(x) = \sum_{i=1}^{n}\dfrac{i}{10}\left(\exp(x_i) - x_i\right), \quad x_0 = [1, 1, \ldots, 1].$
Raydan 2 function: Pr = 64–66
$f(x) = \sum_{i=1}^{n}\left(\exp(x_i) - x_i\right), \quad x_0 = [1, 1, \ldots, 1].$
Diagonal 1 function: Pr = 67–68
$f(x) = \sum_{i=1}^{n}\left(\exp(x_i) - i\,x_i\right), \quad x_0 = [1/n, 1/n, \ldots, 1/n].$
Diagonal 2 function: Pr = 69–70
$f(x) = \sum_{i=1}^{n}\left(\exp(x_i) - \dfrac{x_i}{i}\right), \quad x_0 = [1/1, 1/2, \ldots, 1/n].$
Diagonal 3 function: Pr = 71–72
$f(x) = \sum_{i=1}^{n}\left(\exp(x_i) - i\sin(x_i)\right), \quad x_0 = [1, \ldots, 1].$
Extended Rosenbrock function: Pr = 73–74
$f(x) = \sum_{i=1}^{n/2}\left[100\left(x_{2i} - x_{2i-1}^2\right)^2 + (1 - x_{2i-1})^2\right], \quad x_0 = [-1.2, 1, \ldots, -1.2, 1].$
TRIDIA function (CUTE): Pr = 75–76
$f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i\,(2x_i - x_{i-1})^2, \quad x_0 = [1, \ldots, 1].$

Author Contributions

Conceptualization, E.A.; methodology, E.A. and S.M.; software, E.A. and S.M.; validation, E.A.; formal analysis, E.A.; investigation, E.A.; Funding acquisition, E.A.; data curation, E.A.; writing—original draft preparation, E.A. and S.M.; writing—review and editing E.A. and S.M.; visualization, E.A. and S.M.; supervision, E.A. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at Najran University (grant number NU/DRP/SERC/12/3).

Data Availability Statement

Not applicable.

Acknowledgments

The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Priorities and Najran Research funding program grant number (NU/DRP/SERC/12/3).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Curves of the function $\rho_s(\tau)$ for the six methods with respect to the Itr and FEs criteria.
Figure 2. Curves of the function $\rho_s(\tau)$ for the six methods with respect to the CPU time criterion.
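Figures 1 and 2 plot performance profiles: for each solver $s$, $\rho_s(\tau)$ is the fraction of test problems on which that solver's cost (Itr, FEs, or CPU time) is within a factor $\tau$ of the best cost achieved by any solver on the same problem. A minimal sketch of how such a curve can be computed is given below; the function name `performance_profile` and the convention of encoding failures as infinity are our assumptions, not part of the paper's code.

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s] is the cost of solver s on problem p (np.inf for a failure).
    Returns rho[s, k], the fraction of problems whose performance ratio
    for solver s does not exceed taus[k]."""
    best = np.min(T, axis=1, keepdims=True)       # best cost on each problem
    ratios = T / best                             # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Toy example: 3 problems, 2 solvers; the second solver fails on problem 2
T = np.array([[10.0, 12.0],
              [25.0, np.inf],
              [ 7.0,  7.0]])
rho = performance_profile(T, taus=np.linspace(1.0, 4.0, 50))
```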
Figure 3. Flowchart of the proposed algorithms CGCG-HS1, CGCG-HS2, and CGCG-HZ.
Figure 4. A graphical abstract of the working mechanism of CGCG-HS1, CGCG-HS2, and CGCG-HZ.
Figure 5. Original images (1st row), noisy images with 30% salt-and-pepper noise (2nd row), and images restored by CGCG-HS1 (3rd row), CGCG-HZ (4th row), and CGCG-HS2 (5th row).
Figure 6. Original images (1st row), noisy images with 50% salt-and-pepper noise (2nd row), and images restored by CGCG-HS1 (3rd row), CGCG-HZ (4th row), and CGCG-HS2 (5th row).
Figure 7. Original images (1st row), noisy images with 70% salt-and-pepper noise (2nd row), and images restored by CGCG-HS1 (3rd row), CGCG-HZ (4th row), and CGCG-HS2 (5th row).
Figure 8. Original images (1st row), noisy images with 90% salt-and-pepper noise (2nd row), and images restored by CGCG-HS1 (3rd row), CGCG-HZ (4th row), and CGCG-HS2 (5th row).
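The corrupted test images in Figures 5–8 carry salt-and-pepper noise at the stated ratios. A minimal sketch of how such corruption is commonly generated for 8-bit grayscale images follows; this is our illustration, not necessarily the exact procedure used in the experiments.

```python
import numpy as np

def add_salt_and_pepper(img, ratio, rng=None):
    """Set a fraction `ratio` of the pixels of a uint8 image to 0 (pepper)
    or 255 (salt), each with probability 1/2."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    corrupt = rng.random(img.shape) < ratio       # pixels selected for corruption
    salt = rng.random(img.shape) < 0.5            # salt vs. pepper split
    noisy[corrupt & salt] = 255
    noisy[corrupt & ~salt] = 0
    return noisy

# e.g., 30% noise level: noisy = add_salt_and_pepper(original, 0.30)
```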
Table 1. The Number of Iterations (Itr).
Pr | n | CGCG | HZ | DY | LS | FR | HS | Pr | n | CGCG | HZ | DY | LS | FR | HS
16000324027023833F3970,000222222
2100,0003979FFFF40240,000222222
3800,0002621F101F1014160008314052454830
460002221132722284230,0001972001668914745
590,0002419254926384340003593521462F1501
624,0001813183979284410,0001717205631757
748,000221818521629454000575865755457
827001818304337244680,00086134133123172111
927,00019172265232347500,000144151203190268176
1012,0002121542847424830063582311789771712623
1190,00027592060193649200021081450F2762F2374
12240044940956242978926450150,000356587F219F191
1348,0001178111118311977F121751500022222F
1415,000770709F1038F5345250,00022222F
1560,0006471089F1415F135453500,00022222F
1612,0005762007F6151161499543001663187811322159F1845
1790,000856591F1627F122655100,0003555F2629FF
186000502342F636F37856250,0005697F2030FF
19150,000442812F20582810150157500,000120149F523FF
203601616107915122486F1501581002047FF1326F2566
213000407412F553F41859500073193F942312401
2215,000529556442515F5626080,0005380F264233F
2312,000467762F533F445611502202270417452907F2345
24120,0006207211515921F58962500243241400235573168
252400296349F395F3066350009601169FFFF
2624,000459420F552F3906420001212126639F
271502002245513172848F11326520,0001010114812523
2890008781901181587266500,000171232548108F
2990,000108637112487916780063771920371916FF
3050006160555954676820001061749FFFF
31150,0001141151251521661246980006861025651709FF
32700040515458124687050,00017311983FFFF
3340,00043435369100172715005832911481641F2214
34500,00044431941672012997220001687857FFFF
351008621574133199170103735001621782976714648
3610007550113124F47741000112123103618943
3750,00099138164246FF75500645531610238783
38200,000239307350296FF768000959640111173877
Table 2. The Number of Function Evaluations (FEs).
Pr | n | CGCG | HZ | DY | LS | FR | HS | Pr | n | CGCG | HZ | DY | LS | FR | HS
16000921008781050103F3970,000151515151515
2100,000113201FFFF40240,000131313131313
3800,0009985F536F408416000256364203272193177
46000839070163811724230,000751670552647603237
590,0008782873679117943400026772134715,993F1528
6240007876812552161684410,000838385609684451
748,00085818131280152454000188174170331223170
827007775922921081294680,000308376311557624342
927,0008087864678613847500,000498475423815823468
1012,00081811321471352174830014061770124634723811990
1190,00095183863949517449200046773108F9941F3775
1224009698786211509172544450150,0009591427F1075F572
1348,0002547236518966952F18655150001111111111F
1415,00017411483F3753F8765250,0001616161616F
1560,00014282321F5006F209853500,0001212121212F
1612,00012524374F22612581784543003730396311817450F2904
1790,00018661273F5728F192655100,000134184F11,605FF
1860001157722F2305F63556250,000164268F9226FF
19150,00010101732F75276240244557500,000314399F2562FF
203603603230215668988F2360581004714FF3219F4587
213000955914F2053F697595000173409F46253081259
2215,000114311835161853F8916080,000154190F1156550F
2312,00010721651F1852F6996115047905752182610,182F3578
24120,0001358156415843292F932625005054874678461772299
252400659751F1550F49163500021462550FFFF
2624,000979942F1855F636642000636363728165F
271504519530913589937F17596520,000727278527293155
28900025024526671842929366500,000107772256011614F
2990,000302208203688222298678004542566618,07918,920FF
30500018718816426819919568200075344414FFFF
31150,000338333281651569371698000147821908812701FF
3270001212561492953603777050,00039674354FFFF
3340,000112150209425642894715004126106712,9065205F18,987
34500,000179160101212591040210772200013,3644084FFFF
3510025144234449116456539473500547495811460509258
361000367185793944F175741000378386366444296209
3750,000723115613712360FF75500134125372404813167
38200,0002238289530072885FF7680002282054634041622159
Table 3. The CPU time (CPUT).
Pr | n | CGCG | HZ | DY | LS | FR | HS | Pr | n | CGCG | HZ | DY | LS | FR | HS
160000.130.100.9910.930.13F3970,0000.210.240.290.190.300.21
2100,0002.222.76FFFF40240,0000.620.680.610.530.690.62
3800,00019.6512.15F80.67F60.774160000.030.090.110.060.190.02
460000.410.330.420.850.490.704230,0002.412.711.741.702.710.72
590,0004.644.314.5117.836.169.6343400017.4946.1618.48812.32F82.43
624,0001.301.341.243.944.252.574410,00027.2624.9722.131627.7826.26115.85
748,0002.772.452.419.223.004.474540000.390.350.390.520.470.31
827000.180.180.230.720.370.294680,00011.5813.5910.0615.6822.4011.88
927,0001.521.471.437.991.792.3747500,00097.81104.7796.01163.74183.6195.10
1012,0000.650.661.041.181.351.62483000.170.190.170.550.460.08
1190,0005.3710.184.4920.076.529.254920001.430.96F2.02F1.00
1224001.671.591.162.813.860.6850150,00021.7029.56F15.78F9.46
1348,00065.3968.5853.76204.81F52.205150000.030.030.100.070.27F
1415,00018.0913.91F37.72F8.695250,0000.340.370.390.280.34F
1560,00055.8080.29F182.95F74.4453500,0002.503.173.562.512.86F
1612,00010.7934.13F18.5123.956.03543000.420.380.210.440.600.24
1790,000103.8069.11F309.50359.16101.0755100,0002.203.19F151.62272.38F
1860005.012.99F10.0525.602.4856250,00010.0516.87F484.03983.93F
19150,00087.50164.52F694.51633.25228.2557500,00040.0549.90F266.802010.13F
203601.541.760.8214.20F1.12581000.82FF0.431.540.63
2130002.642.62F4.28F1.485950000.130.29F0.303.730.73
2215,00011.7112.914.4414.98F7.046080,0001.541.66F8.665.27F
2312,0009.5212.55F13.64F4.42611500.780.510.240.51F0.29
24120,000105.00115.81106.79208.11F60.85625000.120.060.110.110.270.07
2524001.271.55F2.17F0.716350001.661.27FFFF
2624,00016.3314.79F23.12F8.786420000.020.030.031.590.09F
271500.610.550.170.45F0.136520,0000.190.180.1910.160.240.31
2890000.300.220.220.480.340.2366500,0007.264.9213.53336.8436.76F
2990,0003.401.751.334.291.562.09678000.920.993.193.17FF
3050000.420.460.370.520.440.446820003.051.60FFFF
31150,00021.5322.6415.0937.3734.9123.846980002.743.281.453.68FF
3270000.461.020.511.261.111.437050,00041.4740.4927.65FFF
3340,0002.403.233.519.2011.5816.90715000.610.202.040.70F2.53
34500,00044.3442.78241.73257.40207.06462.337220006.511.95FFFF
351000.300.410.050.120.110.03735001.161.101.920.901.470.54
3610000.080.080.170.13F0.037410003.333.513.253.372.891.63
3750,0006.178.029.4713.19FF755000.280.300.850.712.140.42
38200,00069.7580.9680.4470.40FF76800067.6163.67107.70105.84560.6940.32
Table 4. Results regarding the performance of the CGCG-HS1, CGCG-HZ, and CGCG-HS2 algorithms for restoring corrupted images.
Image | Noise Ratio | CGCG-HS1 | | | CGCG-HZ | | | CGCG-HS2 | |
 | | Tcpu | Itr | PSNR | Tcpu | Itr | PSNR | Tcpu | Itr | PSNR
lena | 30% | 16.13 | 14 | 37.22 | 14.42 | 12 | 37.21 | 16.4649116 | 19 | 36.9968113
lena | 50% | 27.90 | 17 | 34.58 | 19.64 | 13 | 34.60 | 28.99923381 | 17 | 34.58358745
lena | 70% | 39.52 | 23 | 31.37 | 28.87 | 20 | 31.37 | 33.2962394 | 20 | 31.4504158
lena | 90% | 47.87 | 26 | 26.33 | 41.13 | 26 | 26.22 | 54.66880879 | 26 | 26.11239207
hill | 30% | 16.76 | 19 | 34.92 | 15.67 | 15 | 34.87 | 16.72475127 | 16 | 34.89737541
hill | 50% | 28.86 | 21 | 32.67 | 23.17 | 14 | 32.57 | 30.66004241 | 21 | 32.66618695
hill | 70% | 33.09 | 18 | 29.69 | 29.01 | 17 | 29.68 | 35.82486906 | 23 | 29.8987122
hill | 90% | 46.69 | 25 | 25.64 | 40.18 | 23 | 25.55 | 49.82943146 | 27 | 25.65882636
man | 30% | 21.91 | 15 | 31.69 | 16.64 | 12 | 31.63 | 14.54189568 | 16 | 31.5757247
man | 50% | 29.92 | 20 | 29.30 | 21.72 | 12 | 29.23 | 26.76981277 | 20 | 29.30118531
man | 70% | 38.22 | 24 | 26.45 | 33.67 | 20 | 26.38 | 35.66918655 | 22 | 26.46714801
man | 90% | 48.01 | 26 | 22.52 | 40.87 | 24 | 22.52 | 51.93510837 | 28 | 22.52534104
boat | 30% | 17.32 | 16 | 33.66 | 17.25 | 14 | 33.62 | 18.76662682 | 18 | 33.65937063
boat | 50% | 26.25 | 14 | 31.15 | 24.19 | 16 | 31.12 | 27.75435595 | 14 | 31.14532822
boat | 70% | 39.57 | 21 | 28.30 | 29.92 | 19 | 28.26 | 35.84634329 | 20 | 28.30099363
boat | 90% | 47.59 | 25 | 24.09 | 44.98 | 31 | 24.20 | 47.62695771 | 25 | 24.11558254
Total | | 525.62 | 324 | 479.58 | 441.33 | 288 | 479.02 | 525.38 | 332 | 479.35
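The PSNR values in Table 4 are peak signal-to-noise ratios; for 8-bit images the usual definition is $\mathrm{PSNR} = 10\log_{10}\!\big(255^2/\mathrm{MSE}\big)$, where MSE is the mean squared error between the restored and the original image. A minimal sketch of this computation, assuming a peak value of 255 (our code, not the authors'):

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two images of equal shape."""
    diff = restored.astype(np.float64) - original.astype(np.float64)
    mse = np.mean(diff ** 2)                      # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```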