Article

Improvement of Barzilai and Borwein Gradient Method Based on Neutrosophic Logic System with Application in Image Restoration

by Predrag S. Stanimirović 1, Branislav D. Ivanov 1, Marko Miladinović 1,* and Dragiša Stanujkić 2,3

1 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
2 Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, 19210 Bor, Serbia
3 College of Global Business, Korea University, Sejong 30019, Republic of Korea
* Author to whom correspondence should be addressed.
Axioms 2026, 15(1), 11; https://doi.org/10.3390/axioms15010011
Submission received: 7 November 2025 / Revised: 15 December 2025 / Accepted: 23 December 2025 / Published: 25 December 2025
(This article belongs to the Section Mathematical Analysis)

Abstract

An upgrade to the quasi-Newton (QN) family of methods for solving unconstrained optimization problems is proposed. This research focuses on a detailed investigation of the Barzilai and Borwein (BB) gradient methods. The upgrade involves the use of neutrosophic logic to determine an additional parameter that is incorporated into an appropriate step size for the BB iterations. Unlike previous research, which incorporated neutrosophic concepts into gradient methods by using only two objective-function values to calculate the input parameter during the neutrosophication phase, this study determines the input parameter using three consecutive objective-function values. The main idea is to use appropriately defined membership functions to perform neutrosophication and de-neutrosophication. The set of if–then rules is based on two or more successive values of the objective function. This strategy also directly influences the design of the newly proposed method. Numerical comparisons demonstrate the superior performance of the proposed methods with respect to Dolan–Moré performance profiles for the number of iterations, central processing unit (CPU) time, and number of function evaluations. Furthermore, experimental results confirm that the proposed algorithms can be effectively applied to image restoration tasks, particularly image denoising, where they achieve competitive reconstruction quality and stable convergence behavior.

1. Introduction, Preliminaries, and Motivation

The current research topic focuses on solving the unconstrained optimization problem

$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1}$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is a uniformly convex, twice continuously differentiable objective function. The most common iterative rule used to solve the multivariable unconstrained minimization problem (1) is the family of line search methods

$$x_{k+1} = x_k + t_k d_k. \tag{2}$$

In this equation, $x_{k+1}$ represents the new point, $x_k$ is the previous point, $t_k > 0$ is the step size, and $d_k$ is a search (descent) direction. In gradient descent (GD) methods, the direction is defined as $d_k = -g_k$, where $g_k = \nabla f(x_k)$ denotes the gradient of $f$ at the point $x_k$ [1].
The minimization problem outlined in (1) can be solved using the quasi-Newton (QN) method supported by line search

$$x_{k+1} = x_k + t_k d_k, \tag{3}$$

where the search direction is determined as

$$d_k = -H_k g_k, \tag{4}$$

under the assumption that $B_k$ is a positive definite symmetric approximation of the Hessian matrix $G_k = \nabla^2 f(x_k)$ and its inverse is denoted by $H_k = B_k^{-1}$ [2]. The matrix $H_k$ must be positive definite in order to satisfy the QN requirement

$$H_k y_{k-1} = s_{k-1}, \tag{5}$$

where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$ [1].
The storage requirement of QN iterations is at least $O(n^2)$, which poses challenges when solving large-scale models. Brezinski in [3] classified methods for updating $B_k$ into three common categories: scalar approximations (where $B_k = \lambda_k I$, with $I$ being an appropriate identity matrix), diagonal matrices ($B_k = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$), and proper full matrices. In this research, we utilize the scalar Hessian estimate

$$B_k = \lambda_k I \approx G_k, \quad \lambda_k > 0. \tag{6}$$

Such an approach was investigated in [4,5,6,7,8]. The practical implications of using the scalar Hessian estimate for solving large-scale models are minimal storage requirements and a significant reduction in computational complexity when calculating iterations compared to other classes of methods for updating $B_k$.
Barzilai and Borwein, in their work [9], proposed the two-point step-size gradient method, known as the BB method. This method follows the iterative pattern

$$x_{k+1} = x_k - \lambda_k I g_k = x_k - \lambda_k g_k, \tag{7}$$

where $I$ denotes the corresponding identity matrix. To incorporate second-order information about the goal function, the step size $\lambda_k$ should approximately satisfy the subsequent least-squares QN properties

$$\min_{\lambda > 0} \left\| y_{k-1} - \frac{1}{\lambda} s_{k-1} \right\| \tag{8}$$

and

$$\min_{\lambda > 0} \left\| \lambda y_{k-1} - s_{k-1} \right\|, \tag{9}$$

where $s_{k-1}$ and $y_{k-1}$ are consistent with (5). The solutions to (8) and (9), respectively, are equal to

$$\lambda_k^{BB1} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}} \tag{10}$$

and

$$\lambda_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}}. \tag{11}$$

Additionally, under the condition $s_{k-1}^T y_{k-1} > 0$, the parameters defined in (10) and (11) consistently satisfy $\lambda_k^{BB1} \geq \lambda_k^{BB2}$.
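For concreteness, the following minimal sketch (ours, not code from the original BB paper) computes the two step sizes (10) and (11) from the difference vectors $s_{k-1}$ and $y_{k-1}$:

```python
import numpy as np

def bb_step_sizes(s, y):
    """Two-point BB step sizes (10) and (11).

    s = x_k - x_{k-1}, y = g_k - g_{k-1}; assumes s @ y > 0,
    which holds, e.g., for uniformly convex objectives.
    """
    sy = s @ y
    lam_bb1 = (s @ s) / sy   # (10): solves problem (8)
    lam_bb2 = sy / (y @ y)   # (11): solves problem (9)
    return lam_bb1, lam_bb2  # lam_bb1 >= lam_bb2 by the Cauchy-Schwarz inequality

# example with arbitrary difference vectors
s = np.array([0.1, -0.2]); y = np.array([0.3, -0.5])
lam1, lam2 = bb_step_sizes(s, y)   # here lam1 >= lam2 > 0
```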
Raydan [10] established that the BB algorithm is globally convergent when applied to a strictly convex objective $f(x)$. Additionally, Dai and Liao [11] showed that the BB algorithm achieves R-linear convergence. However, in scenarios where $f$ is continuously differentiable, the step sizes $\lambda_k^{BB1}$ and $\lambda_k^{BB2}$ in the BB algorithm can be harmful, and a monotonic decrease of $f$ is not guaranteed. To address this weakness, Raydan in [12] proposed a security strategy focused on step size and introduced a globally convergent, non-monotonic line search BB algorithm.
However, the classical BB method has an inherent limitation: its step sizes depend solely on two consecutive points, neglecting the broader trajectory of the optimization process. This limitation can lead to oscillatory behavior or inefficient step sizes. Neutrosophic logic offers a remedy: its tri-valued structure models uncertainty in descent progress and enhances the adaptivity of the method. While several BB variants exist, including adaptive step-size, safe BB, and non-monotonic line search methods, our approach is novel in integrating neutrosophic logic to adjust the step size based on multiple previous iterations. This strategy provides a richer, tri-valued assessment of descent trends and improves convergence robustness.
The BB iterative rule has attracted considerable research interest and resulted in several significant outcomes [13,14,15,16,17]. The advantages of the BB algorithm stem from the fact that $\lambda_k^{BB1}$ and $\lambda_k^{BB2}$ are closely aligned with the QN condition.
Recent studies underscore the value of incorporating quasi-Newton and conjugacy information into gradient-based methods. For instance, ref. [18] introduced spectral hybrid conjugate gradient methods that combine memoryless BFGS directions with the Dai–Liao conjugacy condition, resulting in improved performance and robustness for large-scale unconstrained problems. Similarly, ref. [19] developed a unified conjugate gradient framework with rigorous convergence analysis, demonstrating effectiveness in image restoration applications. The authors of [20] defined a new quasi-Newton method based on a variational procedure aimed at refining the gradient and Hessian matrix approximations.
Fuzzy set (FS), intuitionistic fuzzy set (IFS), and neutrosophic set (NS) are practical algorithmic approaches for solving mathematical models characterized by unreliability, fuzziness, ambivalence, inaccuracy, insufficient confidence, variability, and excessiveness [21]. NSs have been used in denoising, segmentation, clustering, and classification in various image-processing applications. Several applications of neutrosophic systems were discussed in [22].
Definition 1
([23]). Let $U$ be a nonempty set with a generic element $x \in U$. Then, an FS ℧ in $U$ is a set of elements

$$℧ = \{\langle x, \mu_F(x)\rangle \mid x \in U\}, \tag{12}$$

in which the membership function (MF) $\mu_F(x) \in [0, 1]$ signifies the membership degree of the element $x$ to ℧.
Definition 2
([24]). An IFS ℧ in $U$ is defined by ordered triples

$$℧ = \{\langle x, \mu_F(x), \nu_F(x)\rangle \mid x \in U\}, \tag{13}$$

such that $\mu_F(x): U \to [0, 1]$ and $\nu_F(x): U \to [0, 1]$ indicate the degree of membership and the degree of non-membership of the element $x$ to ℧, respectively, with the restriction $0 \leq \mu_F(x) + \nu_F(x) \leq 1$.
Definition 3
([25,26,27]). Let $U$ be the universe of discourse. A single-valued NS ℧ in $U$ is an arbitrary entity of the pattern

$$℧ = \{\langle x, T_N(x), I_N(x), F_N(x)\rangle \mid x \in U\}, \tag{14}$$

in which $T_N(x), I_N(x), F_N(x): U \to [0, 1]$ are the truth MF, the indeterminacy MF, and the falsity MF, respectively, satisfying $0 \leq T_N(x) + I_N(x) + F_N(x) \leq 3$.
It is known that the determination of $\lambda_k^{BB1}$ and $\lambda_k^{BB2}$ relies solely on the information of the two points $x_k$ and $x_{k-1}$. Specifically, the quantities $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$ are used.
Momentum has been verified as an efficient tool to enhance gradient descent (GD) by incorporating information from previous steps of the algorithm into the next step. Our objective is to implement gradient descent with momentum (GD+M) using fuzzy logic systems (FLSs) and neutrosophic logic systems (NLSs) to create adaptive, self-corrective gradient-based optimization methods. These methods include gradient descent, conjugate gradient, and QN methods, as well as the BB gradient methods. The GD+M upgrade in our approach utilizes adopted fuzzy logic systems and neutrosophic logic systems to determine a corrective parameter that serves as an adaptive learning quantity in defining multiple step sizes to generate the final step length for solving unconstrained optimization problems. The main idea is to use appropriately defined membership functions to perform neutrosophication and de-neutrosophication. Appropriate if–then rules in the underlying logic system rely on two or more successive values of the objective function.
Inspired by the findings in [28,29], and research in [30,31] regarding FLSs, we designed multiple step sizes to generate the final step length for solving unconstrained optimization models. We aim to develop and evaluate an improvement in the BB algorithm by applying a suitable fuzzy logic (FL) system. We will refer to these modifications as the Fuzzy Barzilai and Borwein (FBB) algorithms. The composite step size will be generated based on various parameters, which will help determine the final step length in QN algorithms.
In Section 2, we introduce multiple step sizes that are used to determine the final step length for solving unconstrained optimization problems. This approach leads to developing an improved Barzilai and Borwein gradient method based on the neutrosophic logic (NL) system. Section 3 explores the convergence properties of the proposed method. Numerical experiments are conducted in Section 4, aiming to demonstrate the effectiveness of the proposed approach. In addition, Section 4.1 presents a practical application of the proposed methods in image restoration tasks, specifically for image denoising. Some final conclusions and directions for future research are given in Section 5.

2. BB Method Enhanced by the Neutrosophic Logic System

The following idea, which originated in [30,31], defines fuzzy GD (FGD) iterations in the form

$$x_{k+1} = x_k + t_k \nu_k d_k, \tag{15}$$

where $d_k$ is a descent direction, $\nu_k$ is an appropriately defined fuzzy parameter, and the step size $t_k$ is computed using an inexact line search. In general, any if–then rule for $\nu_k$ should satisfy the specified conditions

$$\nu_k \begin{cases} > 1, & \text{if } f(x_k) > f(x_{k+1}), \\ < 1, & \text{if } f(x_k) < f(x_{k+1}), \\ = 1, & \text{if } f(x_k) = f(x_{k+1}). \end{cases} \tag{16}$$
Starting from general QN iterations

$$x_{k+1} = QN(x_k) = x_k - t_k H_k g_k,$$

we define the general fuzzy quasi-Newton (FQN) iterative scheme with line search as the following transformation $\Psi$:

$$x_{k+1} = \Psi(QN(x_k)) = \Psi(x_k - t_k H_k g_k) = x_k - \nu_k t_k H_k g_k. \tag{17}$$

As mentioned in Section 1, we aim to improve the BB iterative formula (7) by introducing additional parameters that influence the step size. Consequently, the FBB gradient method is defined using the function $\Psi$:

$$x_{k+1} = \Psi(BB(x_k)) = \Psi(x_k - t_k \lambda_k^{BB} g_k) = x_k - \nu_k t_k \lambda_k^{BB} g_k. \tag{18}$$
In this equation, $\nu_k$ signifies a suitably defined fuzzy parameter, and the parameter $\lambda_k^{BB}$ is calculated using (10) or (11). The step size $t_k$ is determined through the backtracking line search outlined in Algorithm 1, as referenced in [7,32].
Algorithm 1 Backtracking line search.
Input: Goal function $f(x)$, a descent direction $d_k$ at the point $x_k$, and numbers $0 < \delta < 0.5$ and $\pi \in (0, 1)$.
1: $\tau = 1$.
2: While $f(x_k + \tau d_k) > f(x_k) + \delta \tau g_k^T d_k$, take $\tau := \tau \pi$.
3: Return $t_k = \tau$.
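As an illustration, a minimal Python sketch of Algorithm 1 is given below; the function name and the default parameter values (the values $\delta = 0.0001$ and $\pi = 0.8$ are those used for FBB in Section 4) are ours:

```python
def backtracking(f, x, g, d, delta=1e-4, pi=0.8):
    """Backtracking line search (Algorithm 1, sketch).

    f: objective; x: current point (numpy vector); g: gradient at x;
    d: descent direction; 0 < delta < 0.5; pi in (0, 1).
    """
    tau = 1.0
    fx = f(x)
    g_t_d = g @ d  # negative for a descent direction
    while f(x + tau * d) > fx + delta * tau * g_t_d:
        tau *= pi  # shrink the trial step
    return tau
```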
To apply the iterative scheme (18), we must define the fuzzy parameter ν k following the constraints specified in (16). The basic idea is to introduce some kind of intelligence in the process of defining the total step length based on monitoring the behavior of the objective function in previous iterations. Defining the parameter ν k involves two phases: Neutrosophication and de-neutrosophication.
(1)
Neutrosophication utilizes three MFs and transforms the input

$$\vartheta_k := \begin{cases} \dfrac{l_k}{|l_{k-1}| + |l_k|}, & \text{if } f(x_{k-1}) \neq 0 \text{ and } f(x_{k-2}) \neq 0, \\[4pt] 0.5, & \text{otherwise}, \end{cases} \tag{19}$$

where $l_k = f(x_k) - f(x_{k-1})$.
The rationale for using three function values in (19) is to capture the overall trend of the optimization process, rather than just the immediate step. The parameter ϑ k measures the relative change in function decrease. For instance, if f ( x k 1 ) is much smaller than f ( x k 2 ) , and f ( x k ) is further smaller compared to f ( x k 1 ) , then ϑ k will reflect strong consistent descent. Conversely, if the function decrease slows down or reverses, ϑ k will indicate this deterioration. This three-point evaluation provides richer information than the two-point BB formulas, enabling the neutrosophic system to make more informed adaptive corrections.
In Equation (19), we can see that the value of the parameter $\vartheta_k$ depends on three values of the objective function: $f(x_{k-2})$, $f(x_{k-1})$, and $f(x_k)$. This approach represents a significant difference in the phase of neutrosophication. In contrast, the determination of $\vartheta_k$ in references [30,31] relies on only two values of the objective function. Furthermore, the method used to calculate the parameter $\vartheta_k$ is also different.
We will examine values $\nu_k$ of the gain parameter, estimated in terms of three MFs that represent the percentage of truth, indeterminacy, and falsity.
The Gaussian function is widely used to model the level of indeterminacy in neutrosophic models, especially in fields such as image processing, optimization, classification, and decision-making. The usefulness of this function stems from its ability to smoothly and controllably describe localized phenomena that exhibit a maximum at a specific point and gradually decay with increasing distance from that point. The Gaussian function is ideal for representing regions where values change gradually and where there is overlap among the T, I, and F components, thereby enabling a more realistic handling of uncertainty than functions with abrupt transitions. In neutrosophic systems and related fields, the sigmoid function is commonly used to model membership transitions, soft thresholds, and probabilities, thereby enabling gradual, rather than abrupt, changes between states.
The truth MF is modeled by the sigmoid function

$$T(\vartheta) = \frac{1}{1 + e^{-c_1 (\vartheta - c_2)}}. \tag{20}$$

The parameter $c_1$ defines the slope at the crossover point $\vartheta = c_2$. The falsity MF is the sigmoid function

$$F(\vartheta) = \frac{1}{1 + e^{c_1 (\vartheta - c_2)}}. \tag{21}$$

The indeterminacy MF is modeled by the Gaussian function

$$I(\vartheta) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(\vartheta - c_2)^2}{2\sigma^2}}. \tag{22}$$

The variable $\sigma \in \mathbb{R}$ represents the standard deviation, while $c_2 \in \mathbb{R}$ denotes the mean. The value $c_2$ represents the center of the Gaussian function, determining the point at which it attains its maximum value. Changing the value of $c_2$ shifts the curve horizontally without altering its shape. The parameter $\sigma$, on the other hand, controls how rapidly the function decays from its maximum. In other words, $\sigma$ controls the "width of the curve": a smaller $\sigma$ produces a narrower and sharper curve with a higher peak, while a larger $\sigma$ yields a wider and flatter curve with a lower peak.

Based on these definitions, the neutrosophication of $\vartheta \in \mathbb{R}$ involves transforming a single real quantity $\vartheta$ into the ordered triple $\langle T(\vartheta), I(\vartheta), F(\vartheta)\rangle$. The MFs for this transformation are defined in (20)–(22).

The parameters $c_1$, $c_2$, and $\sigma$ control the shape of the membership functions. Specifically, $c_1$ determines the slope of the sigmoid functions $T(\vartheta)$ and $F(\vartheta)$ at their crossover point $\vartheta = c_2$. The parameter $c_2$ determines the intersection point of the $T(\vartheta)$ and $F(\vartheta)$ functions and also affects the peak location of $I(\vartheta)$. On the other hand, $\sigma$ controls the width of the Gaussian indeterminacy function $I(\vartheta)$.
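The neutrosophication step (20)–(22) can be sketched in a few lines of Python. This is our illustrative reading of the formulas, with the opposite-signed sigmoids chosen so that $T(\vartheta) + F(\vartheta) = 1$:

```python
import numpy as np

C1, C2, SIGMA = 0.2, 0.5, 1.0 / np.sqrt(2.0 * np.pi)  # configuration (1)

def neutrosophication(theta):
    """Map theta to the triple (T, I, F) via (20)-(22)."""
    T = 1.0 / (1.0 + np.exp(-C1 * (theta - C2)))  # truth, increasing sigmoid
    F = 1.0 / (1.0 + np.exp(C1 * (theta - C2)))   # falsity, so T + F = 1
    I = np.exp(-((theta - C2) ** 2) / (2.0 * SIGMA ** 2)) / (SIGMA * np.sqrt(2.0 * np.pi))
    return T, I, F
```

Note that with configuration (1) the Gaussian peak equals $1/(\sigma\sqrt{2\pi}) = 1$, while with configuration (2) it equals $1/\sqrt{2\pi} \approx 0.4$, matching the peaks visible in Figure 1.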
This paper investigates two combinations for the parameters' configurations:

$$(1): \; c_1 = 0.2, \; c_2 = 0.5, \; \sigma = \frac{1}{\sqrt{2\pi}} \quad \text{and} \quad (2): \; c_1 = 1, \; c_2 = 2, \; \sigma = 1.$$
Graphs of $T(\vartheta)$, $I(\vartheta)$, $F(\vartheta)$ under the configuration (1) in (20)–(22) are plotted in Figure 1a for $\vartheta \in [-8, 8]$. Graphs of $T(\vartheta)$, $I(\vartheta)$, $F(\vartheta)$ for the configuration (2) in (20)–(22) are plotted in Figure 1b for $\vartheta \in [-8, 8]$. Figure 1a depicts gradual sigmoid transitions centered at $\vartheta = 0.5$ with a narrow indeterminacy peak reaching around 1.0. Figure 1b exhibits steeper transitions at $\vartheta = 2$ and a broader, flatter indeterminacy function with a peak around 0.4.
To minimize $f(x)$, employing (19) as a measure in the neutrosophic logic controller (NLC), we will take into account the dynamic neutrosophic set (DNS) defined as $D := \{\langle T(\vartheta_k), I(\vartheta_k), F(\vartheta_k)\rangle;\ \vartheta_k \in \mathbb{R}\}$ over the real numbers $\mathbb{R}$.
(2)
De-neutrosophication is defined as the transformation $\vartheta_k: \langle T(\vartheta_k), I(\vartheta_k), F(\vartheta_k)\rangle \mapsto \nu_k(\vartheta_k) \in \mathbb{R}$, resulting in the single value $\nu_k(\vartheta_k)$. The following de-neutrosophication is adopted to acquire the parameter $\nu_k(\vartheta_k)$:

$$\nu_k(\vartheta_k) = \begin{cases} 3 - (T(\vartheta_k) + I(\vartheta_k) + F(\vartheta_k)) =: \Delta(\vartheta_k), & f(x_k) < f(x_{k-1}), \\[4pt] \dfrac{1}{3 - (T(\vartheta_k) + I(\vartheta_k) + F(\vartheta_k))} =: \dfrac{1}{\Delta(\vartheta_k)}, & f(x_k) > f(x_{k-1}), \\[4pt] 1, & \text{otherwise}. \end{cases} \tag{23}$$
The fulfillment of the essential constraint $\nu_k(\vartheta_k) \geq 0$ follows directly from (23) and the fact that all MFs have codomains in $[0, 1]$. Also, $\nu_k$ should satisfy $\nu_k < 1$ if $f(x_k) > f(x_{k-1})$, $\nu_k > 1$ if $f(x_k) < f(x_{k-1})$, and $\nu_k = 1$ if $f(x_k) = f(x_{k-1})$.
Graphs of $\Delta(\vartheta_k)$ and $1/\Delta(\vartheta_k)$ for the configuration (1) are presented in Figure 2a, while Figure 2b shows the graphs of $\Delta(\vartheta_k)$ and $1/\Delta(\vartheta_k)$ for the configuration (2) in the interval $\vartheta \in [-8, 8]$. The purple (respectively, green) graphs in both Figure 2a,b illustrate $\Delta(\vartheta_k)$ (respectively, $1/\Delta(\vartheta_k)$). Clearly, the minimum of $\Delta(\vartheta_k)$ and the maximum of $1/\Delta(\vartheta_k)$ are achieved at the point $\vartheta_k = c_2$. Reasoning upon (16) and (23), the minimal increase from $t_k$ to $t_k \nu_k$ in the FGD iterations (15) in the case $f(x_k) > f(x_{k+1})$ is achieved for $\vartheta_k = c_2$. On the other hand, the minimal decrease from $t_k$ to $t_k \nu_k$ in (15) in the case $f(x_k) < f(x_{k+1})$ is achieved for $\vartheta_k = c_2$. Also, greater values of $\sigma$ lead to smaller fluctuations in the values $\Delta(\vartheta_k)$ and $1/\Delta(\vartheta_k)$. It is logical to adjust the parameters so that the influence of neutrosophy weakens during iterations and stops near the minimum.

Following this tendency, the parameter setting for the performed numerical experiments is defined as in (1). This choice is motivated by the convergence behavior of the input variable $\vartheta_k$ defined in (19). As the algorithm approaches the minimum, $\vartheta_k$ converges toward 0.5, since the successive decrements $l_k = f(x_k) - f(x_{k-1})$ become nearly equal. Setting $c_2 = 0.5$ aligns the crossover point with this behavior, ensuring balanced membership values near the optimum. Choosing $c_1 = 0.2$ provides gradual transitions, preventing sharp changes in $\nu_k$ that could destabilize convergence. The parameter $\sigma = \frac{1}{\sqrt{2\pi}}$ yields a sharp Gaussian peak reaching approximately unity at $\vartheta = c_2$, concentrating high indeterminacy in the crossover region. Preliminary tests confirmed stable convergence with this parameter setting.
The bounds $1 \leq \Delta(\vartheta_k) \leq 2$ and $0.5 \leq 1/\Delta(\vartheta_k) \leq 1$ are perceptible in Figure 2. Based on this observation, the following estimate applies for the parameter $\nu_k$ in (23):

$$0.5 \leq \nu_k \leq 2. \tag{24}$$
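The complete gain computation, combining (19) with (23), can then be sketched as follows. This is our reading of the scheme (it relies on the neutrosophication function given earlier); the explicit guard on the denominator of (19) is a practical safeguard we add:

```python
def gain(f_vals):
    """Neutrosophic gain nu_k from the last three objective values
    [f(x_{k-2}), f(x_{k-1}), f(x_k)], following (19) and (23)."""
    f2, f1, f0 = f_vals
    lk, lk1 = f0 - f1, f1 - f2                 # l_k and l_{k-1}
    denom = abs(lk1) + abs(lk)
    theta = lk / denom if denom > 0 else 0.5   # input (19)
    T, I, F = neutrosophication(theta)
    delta = 3.0 - (T + I + F)                  # Delta(theta) lies in [1, 2]
    if f0 < f1:
        return delta                           # descent: nu_k >= 1, bound (24)
    if f0 > f1:
        return 1.0 / delta                     # ascent: 0.5 <= nu_k <= 1
    return 1.0
```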
Algorithm 2 outlines the general BB method utilizing the neutrosophic logic system (FBB method).
Algorithm 2 FBB method.
Input: A real function $f(x)$ and a selected starting point $x_0 \in \mathrm{dom}(f)$.
1: Set $k = 0$; initialize $\lambda_0 = 1$, $\nu_0 = \nu_1 = 1$, $c_1 = 0.2$, $c_2 = 0.5$, $\sigma = \frac{1}{\sqrt{2\pi}}$; and calculate $f(x_0)$ and $g_0 = \nabla f(x_0)$.
2: (Backtracking) Determine $t_0 \in (0, 1]$ as the output of Algorithm 1.
3: Compute $x_1$ using (18).
4: Calculate $g_1 = \nabla f(x_1)$ and $f(x_1)$, and set $k := 1$.
5: If the test criteria are fulfilled, stop; otherwise, go to the next step.
6: Calculate $s_{k-1}$, $y_{k-1}$, and $\lambda_k$ using (10) or (11).
7: (Backtracking) Determine $t_k \in (0, 1]$ as the output of Algorithm 1.
8: Compute $x_{k+1}$ using (18).
9: Calculate $g_{k+1} = \nabla f(x_{k+1})$ and $f(x_{k+1})$.
10: Compute $\vartheta_{k+1}$ according to (19).
11: Compute $T(\vartheta_{k+1})$, $I(\vartheta_{k+1})$, $F(\vartheta_{k+1})$ using (20)–(22).
12: Compute $\nu_{k+1}(\vartheta_{k+1})$ utilizing (23).
13: Set $k := k + 1$ and go to Step 5.
14: Return $\{x_{k+1}, f(x_{k+1})\}$.
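Putting the pieces together, a compact Python sketch of Algorithm 2 follows, using the helpers bb_step_sizes, backtracking, and gain introduced above; the loop structure and default tolerances are our simplification, not the authors' exact implementation:

```python
import numpy as np

def fbb(f, grad, x0, tol=1e-6, max_iter=5000):
    """Sketch of the FBB1 method (Algorithm 2): BB1 step (10),
    Algorithm 1 line search, and the neutrosophic gain (23)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    f_hist = [f(x)]
    lam, nu = 1.0, 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -nu * lam * g                 # direction implied by (18)
        t = backtracking(f, x, g, d)
        x_new = x + t * d                 # iteration (18)
        g_new = grad(x_new)
        lam, _ = bb_step_sizes(x_new - x, g_new - g)  # BB1; take [1] for BB2
        f_hist.append(f(x_new))
        if len(f_hist) >= 3:
            nu = gain(f_hist[-3:])        # three-point neutrosophic gain
        x, g = x_new, g_new
    return x, f_hist[-1]
```

For example, `fbb(lambda v: float(v @ v), lambda v: 2 * v, np.ones(100))` minimizes a simple strictly convex quadratic.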
Inspired by the previous application of the neutrosophic logic system in Algorithm 2, we want to go one step further and show that this neutrosophic logic system can be implemented in other variants of the BB method. Specifically, we want to show its application in a hybrid BB-type method for solving large-scale unconstrained optimization problems presented by Gao and Ou in [33]. In that paper, the authors based the HBB algorithm on the combination of the BB method with the idea of the IMPBOT algorithm proposed by Brown and Biggs in [34]. Below, we present a modified version of the HBB algorithm based on the DNS, that is, the fuzzy hybrid BB-type (FHBB) algorithm.
The notation FBB1 refers to the FBB method in which $\lambda_k$ is defined in Step 6 of Algorithm 2 utilizing (10). In the dual case, FBB2 refers to the FBB method that determines $\lambda_k$ using (11) in Step 6 of Algorithm 2. For the FHBB method, there are also two variants: FHBB1 refers to the version that determines $\lambda_k$ in Step 4 of Algorithm 3 using (25), while FHBB2 refers to the variant that determines $\lambda_k$ in Step 4 using (26).
Algorithm 3 FHBB method.
Input: A real function $f(x)$ and a selected starting point $x_0 \in \mathrm{dom}(f)$.
1: Set $k = 0$; calculate $f(x_0)$ and $g_0 = \nabla f(x_0)$; initialize $h_0 = 10$, $\nu_0 = 1$, $c_1 = 0.2$, $c_2 = 0.5$, $\sigma = \frac{1}{\sqrt{2\pi}}$, $\rho \in (0, 1)$, $0 < \lambda_{\min} < \lambda_{\max} < +\infty$, $\theta_k \in [0, 1)$, and $R_0 = f_0$.
2: If the test criteria are fulfilled, stop; otherwise, go to the next step.
3: If $k = 0$, determine $t_0 \in (0, 1]$ as the output of Algorithm 1, where $d_0 = -g_0$, and set $x_1 = x_0 + t_0 d_0$, $s_1 := s_0$, $y_1 := y_0$; otherwise, go to Step 4.
4: Calculate $\lambda_k$ using

$$\lambda_k^{HBB1} = \frac{s_{k-1}^T \bar{y}_{k-1}}{\bar{y}_{k-1}^T \bar{y}_{k-1}} \tag{25}$$

or

$$\lambda_k^{HBB2} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T \bar{y}_{k-1}}, \tag{26}$$

where $\bar{y}_{k-1} = \tilde{y}_{k-1} + \frac{s_{k-1}}{h_k}$, $\tilde{y}_{k-1} = y_{k-1} + \tilde{\vartheta}_k s_{k-1}$, and $\tilde{\vartheta}_k = \frac{(g_{k-1} + g_k)^T s_{k-1} + 2(f_{k-1} - f_k)}{\|s_{k-1}\|^2}$.
If $\lambda_k < 0$, set $\lambda_k = \lambda_{\max}$; otherwise, set $\lambda_k = \min\{\lambda_{\max}, \max\{\lambda_{\min}, \lambda_k\}\}$.
5: Compute $d_k = -\frac{\lambda_k h_k}{\lambda_k + h_k} g_k$.
6: If the condition $f(x_k + d_k) \leq R_k + \rho g_k^T d_k$, where

$$R_k = \theta_k R_{k-1} + (1 - \theta_k) f_k, \quad \theta_{k+1} = \begin{cases} \theta_k / 2, & \text{if } k = 0, \\ (\theta_k + \theta_{k-1}) / 2, & \text{if } k \geq 1, \end{cases}$$

holds, set $x_{k+1} = x_k + \nu_k d_k$ and $h_{k+1} = 2 h_k$, and go to Step 7; otherwise, find the step size $t_k$ along the direction $d_k$ by using the modified backtracking line search rule $f(x_k + t_k d_k) \leq R_k + \delta t_k g_k^T d_k$ and set $x_{k+1} = x_k + \nu_k t_k d_k$ and $h_{k+1} = \frac{1}{2} h_k$.
7: Calculate $g_{k+1} = \nabla f(x_{k+1})$, $f(x_{k+1})$, $s_k$, and $y_k$.
8: Compute $\vartheta_{k+1}$ according to (19).
9: Compute $T(\vartheta_{k+1})$, $I(\vartheta_{k+1})$, $F(\vartheta_{k+1})$ using (20)–(22).
10: Compute $\nu_{k+1}(\vartheta_{k+1})$ utilizing (23).
11: Set $k := k + 1$ and go to Step 2.
12: Return $\{x_{k+1}, f(x_{k+1})\}$.
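For illustration, the hybrid step size of Step 4, together with its safeguard, can be sketched as follows. The variable names and the bounds lam_min and lam_max are ours (illustrative placeholders), not code from [33]:

```python
def hbb_step(s, y, f_prev, f_curr, g_prev, g_curr, h,
             lam_min=1e-10, lam_max=1e10, variant=1):
    """Hybrid BB step size (25) or (26) with the safeguard of Step 4.

    Inputs are numpy vectors (s, y, g_prev, g_curr) and scalars.
    """
    ss = s @ s
    # modified secant vector: y_tilde = y + theta_tilde * s
    theta_tilde = ((g_prev + g_curr) @ s + 2.0 * (f_prev - f_curr)) / ss
    y_bar = y + theta_tilde * s + s / h        # y_bar = y_tilde + s / h_k
    sy = s @ y_bar
    lam = sy / (y_bar @ y_bar) if variant == 1 else ss / sy   # (25) or (26)
    if lam < 0:
        return lam_max
    return min(lam_max, max(lam_min, lam))
```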

3. Convergence Analysis

This section focuses on the convergence properties of the FBB method outlined in Algorithm 2. To establish global convergence results, we require the following assumptions on the objective function $f(u)$.
Assumption 1. (A1) The level set $\mathcal{M} = \{u \in \mathbb{R}^n \mid f(u) \leq f(u_0)\}$ is bounded, where $u_0$ is the initial iterate in (2).
(A2) The objective function $f(u)$ is continuously differentiable in a neighborhood $\mathcal{N}$ of $\mathcal{M}$. Furthermore, the gradient $g$ is Lipschitz continuous, meaning the existence of a constant $L > 0$ such that the following condition holds:

$$\|g(u) - g(v)\| \leq L \|u - v\|, \quad \forall u, v \in \mathcal{N}. \tag{27}$$

Additionally, Assumption 1 implies the existence of constants $C > 0$ and $\kappa > 0$ that satisfy

$$\|u - v\| \leq C, \quad \forall u, v \in \mathcal{N}, \tag{28}$$

and

$$\|g(u)\| \leq \kappa, \quad \forall u \in \mathcal{N}. \tag{29}$$

A key aspect of demonstrating convergence is the inequality

$$s_{k-1}^T y_{k-1} \geq \chi \|s_{k-1}\|^2, \tag{30}$$

where $\chi > 0$ is the strong convexity constant. This inequality is a direct consequence of the uniform (strong) convexity of the objective function and follows from ([2], Theorem 1.3.16). It should be emphasized that inequality (30) is not part of Assumption 1 but requires the additional assumption that $f$ is uniformly convex, as stated in Proposition 2. From (27) and (30), one can conclude

$$\chi \|s_{k-1}\|^2 \leq s_{k-1}^T y_{k-1} \leq L \|s_{k-1}\|^2, \tag{31}$$

and consequently, $\chi \leq L$.
Moreover, several auxiliary results from [32,35,36,37,38] are employed in the subsequent analysis.
Proposition 1
([32,38]). Let $d_k$ be selected as a descent direction, and let the gradient $g(u) = \nabla f(u)$ satisfy the Lipschitz condition (27). Then, the step length $t_k$ derived from the backtracking line search in Algorithm 1 satisfies

$$t_k \geq \min\left\{1, -\frac{\pi (1 - \delta)}{L} \cdot \frac{g_k^T d_k}{\|d_k\|^2}\right\}. \tag{32}$$
The proofs of Proposition 2 and Lemma 1 can be found in [36,37]. For the sake of completeness, we nevertheless present the proof of the following proposition.
Proposition 2. 
If the objective function $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and uniformly convex on $\mathbb{R}^n$, then Assumption 1 is satisfied.
Proof. 
Since $f$ is uniformly convex and twice continuously differentiable on $\mathbb{R}^n$, there exists $m > 0$ such that

$$f(u) \geq f(u^*) + \frac{m}{2} \|u - u^*\|^2, \quad \forall u \in \mathbb{R}^n,$$

where $u^*$ is the unique minimizer of $f$. Hence, for any initial point $u_0$, the level set

$$\mathcal{M} = \{u \in \mathbb{R}^n \mid f(u) \leq f(u_0)\}$$

satisfies

$$\mathcal{M} \subseteq \left\{u \in \mathbb{R}^n \;\middle|\; \|u - u^*\|^2 \leq \frac{2(f(u_0) - f(u^*))}{m}\right\},$$

which is a bounded set. This proves Assumption 1 (A1).
Moreover, since $f \in C^2(\mathbb{R}^n)$ and $\mathcal{M}$ is bounded, the gradient $\nabla f$ is Lipschitz continuous on any closed neighborhood $\mathcal{N}$ containing $\mathcal{M}$, which establishes Assumption 1 (A2). □
Lemma 1.
Under the assumptions of Proposition 2, there exist real numbers $m$, $M$ satisfying

$$0 < m \leq 1 \leq M, \tag{33}$$

such that the function $f(u)$ has a unique minimizer $u^*$ and

$$m \|v\|^2 \leq v^T \nabla^2 f(u) v \leq M \|v\|^2, \quad \forall u, v \in \mathbb{R}^n; \tag{34}$$

$$\frac{1}{2} m \|u - u^*\|^2 \leq f(u) - f(u^*) \leq \frac{1}{2} M \|u - u^*\|^2, \quad \forall u \in \mathbb{R}^n; \tag{35}$$

$$m \|u - v\|^2 \leq (g(u) - g(v))^T (u - v) \leq M \|u - v\|^2, \quad \forall u, v \in \mathbb{R}^n. \tag{36}$$
Theorem 1 examines the convergence of the FBB iterative sequence.
Theorem 1.
Assuming that the conditions (A1) and (A2) are satisfied and that $f: \mathbb{R}^n \to \mathbb{R}$ is a uniformly convex function, the FBB sequence induced by (18) satisfies

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq \mu_{\nu_k} \|g_k\|^2, \tag{37}$$

such that

$$\mu_{\nu_k} = \min\left\{\frac{\delta \nu_k \chi}{L^2}, \frac{\delta (1 - \delta) \pi}{L}\right\}. \tag{38}$$
Proof.
The update from the FBB iteration is given by $x_{k+1}^{FBB} = x_k^{FBB} - \nu_k t_k \lambda_k^{BB} g_k$, as presented in Equation (18). This follows the general pattern outlined in (2), where $d_k = -\nu_k \lambda_k^{BB} g_k$. From the stopping criterion used in Algorithm 1, it follows that

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq -\delta t_k g_k^T d_k, \quad k \in \mathbb{N}. \tag{39}$$

In the situation $t_k < 1$, using (39) with $d_k = -\nu_k \lambda_k^{BB} g_k$, one obtains

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq -\delta t_k g_k^T d_k = \delta t_k \nu_k \lambda_k^{BB} \|g_k\|^2. \tag{40}$$

Equation (32) indicates that

$$t_k \geq -\frac{\pi (1 - \delta)}{L} \cdot \frac{g_k^T d_k}{\|d_k\|^2} = \frac{\pi (1 - \delta)}{L} \cdot \frac{\nu_k \lambda_k^{BB} \|g_k\|^2}{\nu_k^2 (\lambda_k^{BB})^2 \|g_k\|^2} = \frac{\pi (1 - \delta)}{L} \cdot \frac{1}{\nu_k \lambda_k^{BB}}.$$

Now, combining (40) with the previous inequality, we obtain

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq \delta t_k \nu_k \lambda_k^{BB} \|g_k\|^2 \geq \delta \frac{\pi (1 - \delta)}{L} \cdot \frac{1}{\nu_k \lambda_k^{BB}} \nu_k \lambda_k^{BB} \|g_k\|^2 = \frac{\delta (1 - \delta) \pi}{L} \|g_k\|^2. \tag{41}$$

In the case $t_k = 1$, using the fact $s_{k-1}^T y_{k-1} > 0$, the relation $\lambda_k^{BB1} \geq \lambda_k^{BB2}$ from [39], (27), and (31), we have

$$\lambda_k^{BB1} \geq \lambda_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}} = \frac{s_{k-1}^T y_{k-1}}{\|y_{k-1}\|^2} \geq \frac{\chi \|s_{k-1}\|^2}{\|y_{k-1}\|^2} \geq \frac{\chi \|s_{k-1}\|^2}{L^2 \|s_{k-1}\|^2} = \frac{\chi}{L^2}. \tag{42}$$

From Equations (39) and (42), we derive

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq -\delta g_k^T d_k = -\delta g_k^T (-\nu_k \lambda_k^{BB} g_k) = \delta \nu_k \lambda_k^{BB} \|g_k\|^2 \geq \frac{\delta \nu_k \chi}{L^2} \|g_k\|^2. \tag{43}$$

Finally, from Equations (41) and (43), we obtain

$$f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \geq \min\left\{\frac{\delta \nu_k \chi}{L^2}, \frac{\delta (1 - \delta) \pi}{L}\right\} \|g_k\|^2,$$

which confirms (37) and (38). □
In Theorem 2, we demonstrate the linear convergence of the FBB method on uniformly convex functions.
Theorem 2.
If the objective function $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and uniformly convex on $\mathbb{R}^n$, then the sequence $\{x_k^{FBB}\}$ generated by Algorithm 2 fulfills

$$\lim_{k \to \infty} \|g_k^{FBB}\| = 0.$$

This implies that the sequence $\{x_k^{FBB}\}$ converges to $x^*$ at least linearly.
Proof.
According to Lemma 1, we know that the objective function $f$ is bounded below and that $f(x_k^{FBB})$ decreases. Therefore, it is evident that

$$\lim_{k \to \infty} \left( f(x_k^{FBB}) - f(x_{k+1}^{FBB}) \right) = 0.$$

By applying estimations (37) and (38) from Theorem 1, we obtain

$$\lim_{k \to \infty} \|g_k^{FBB}\| = 0.$$

We need to prove that $\lim_{k \to \infty} \|x_k^{FBB} - x^*\| = 0$. In other words, we must demonstrate that the sequence $\{x_k^{FBB}\}$ defined by (18) converges to $x^*$.
We substitute $v = x^*$ into the inequality (36) and apply the mean value theorem along with the Cauchy–Schwarz inequality to conclude

$$m \|u - x^*\| \leq \|g(u)\| \leq M \|u - x^*\|, \quad \forall u \in \mathbb{R}^n. \tag{45}$$

Estimation (45) and the inequality (35) lead to

$$\mu_{\nu_k} \|g(x_k^{FBB})\|^2 \geq \mu_{\nu_k} m^2 \|x_k^{FBB} - x^*\|^2 \geq \frac{2 \mu_{\nu_k} m^2}{M} \left( f(x_k^{FBB}) - f(x^*) \right) \xrightarrow{k \to \infty} 0.$$

As a result, it follows that $\lim_{k \to \infty} \|x_k^{FBB} - x^*\| = 0$, which indicates the convergence of the sequence $\{x_k^{FBB}\}$ to $x^*$.
To finalize the proof, we must establish that the sequence $\{x_k^{FBB}\}$ generated by Algorithm 2 is linearly convergent under the condition $LM \geq 2$, where $L$ is defined by (27) in Assumption 1 and $M$ is specified by (33) in Lemma 1.
To demonstrate the linear convergence, it is essential to show

$$\rho^2 = \frac{2 \mu_{\nu_k} m^2}{M} < 1.$$

Since the parameter $\mu_{\nu_k}$ in (38) depends on two values, there are two cases.
In the case where $\mu_{\nu_k} = \frac{\delta \nu_k \chi}{L^2}$, using $0.5 \leq \nu_k \leq 2$ in (24) and $\chi \leq L$ in (31), we derive

$$\rho^2 = \frac{2 \mu_{\nu_k} m^2}{M} = \frac{2 \delta \nu_k \chi}{L^2} \cdot \frac{m^2}{M} \leq \frac{2 \delta \nu_k L}{L^2} \cdot \frac{m^2}{M} = \frac{2 \delta \nu_k m^2}{L M} \leq 2 \delta < 1.$$

Similarly, in the case $\mu_{\nu_k} = \frac{\delta (1 - \delta) \pi}{L}$, it follows that

$$\rho^2 = \frac{2 \mu_{\nu_k} m^2}{M} = \frac{2 \pi \delta (1 - \delta) m^2}{L M} < \frac{m^2}{L M} < 1,$$

because from (33) and the assumption $LM \geq 2$, the inequality $m^2 < LM$ holds.
Applying Theorem 4.1 from [38], we get

$$\|x_k^{FBB} - x^*\| \leq \sqrt{\frac{2 \left( f(x_0^{FBB}) - f(x^*) \right)}{m}} \left( \sqrt{1 - \rho^2} \right)^k,$$

thus completing the proof. □

4. Numerical Experience

We will evaluate the numerical efficiency of the proposed FBB and FHBB methods.
Seven methods are compared on standard test functions with given initial points sourced from [40,41]: FBB1, FBB2, FHBB1, and FHBB2, the well-known HBB1 and HBB2 methods from [33], and the FMSM method from [30]. The comparison focuses on the following three criteria: the number of iterative steps (NIs), the number of function evaluations (NFEs), and the CPU time in seconds. Each test function is evaluated for ten problem dimensions: 100, 500, 1000, 2000, 3000, 5000, 7000, 8000, 10,000, and 15,000. The tests were conducted in MATLAB R2017a on a laptop (Intel(R) Core(TM) i3-7020U, up to 2.30 GHz, 8 GB of memory) with the Windows 10 Education operating system.
All algorithms use the stopping criterion

$$\frac{|f(x_{k+1}) - f(x_k)|}{1 + |f(x_k)|} \leq \zeta \quad \text{and} \quad \|g_k\| \leq \epsilon, \tag{46}$$

where $\zeta = 10^{-16}$ and $\epsilon = 10^{-6}$. The stopping criteria (46) are chosen to ensure that the algorithms terminate only when the iterates are sufficiently close to a stationary point. These strict tolerances help prevent premature termination and enable a reliable comparison of convergence behavior among different variants of the method. Algorithms FBB1, FBB2, and FMSM are tested using the backtracking line search with parameters $\delta = 0.0001$ and $\pi = 0.8$, while algorithms HBB1, HBB2, FHBB1, and FHBB2 are tested using the backtracking line search and the modified backtracking line search with parameters $\delta = 0.01$ and $\pi = 0.5$. The neutrosophic logic system parameters for the FBB1, FBB2, FHBB1, and FHBB2 methods are defined as $c_1 = 0.2$, $c_2 = 0.5$, $\sigma = \frac{1}{\sqrt{2\pi}} \approx 0.398942280$, while the parameters used in the HBB1, HBB2, and FMSM algorithms are set to be the same as those in the related papers [30,33].
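In code, the test (46) amounts to a simple two-part check; a minimal sketch (ours) is:

```python
def converged(f_prev, f_curr, g_norm, zeta=1e-16, eps=1e-6):
    """Stopping criterion (46): relative decrease of f and gradient norm."""
    return abs(f_curr - f_prev) / (1.0 + abs(f_prev)) <= zeta and g_norm <= eps
```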
A comprehensive sensitivity analysis of the choice of $c_1$, $c_2$, and $\sigma$ is planned as future research. All numerical experiments in the current research employ the primary setting (1).
A summary of numerical results from the competition between FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM is presented after testing 27 test functions (270 tests) by monitoring criteria such as NI, NFE, and CPU time, as arranged in Table 1, Table 2 and Table 3.
An analysis of the results presented in Table 1, Table 2 and Table 3 reveals that the FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods successfully solved all 27 test problems. Furthermore, a closer examination of these results highlights the performance of each method in terms of NI, NFE, and CPU time. It was found that the FHBB1 method solved 18.52% (5 out of 27) of all test functions in the conducted experiments with the fewest iterations, compared to FBB1, FBB2, HBB1, HBB2, FHBB2, and FMSM, which resolved only 0% (0 out of 27), 22.22% (6 out of 27), 14.81% (4 out of 27), 11.11% (3 out of 27), 3.70% (1 out of 27), and 3.70% (1 out of 27), respectively. Additionally, it is noteworthy that 14.81% of the problems (4 out of 27) were solved with the same minimum number of iterations by both the FHBB1 and FHBB2 methods, and 3.70% of the problems (1 out of 27) were solved with the same minimum number of iterations by four methods (HBB1, HBB2, FHBB1, FHBB2). In total, the FHBB1 method solved 37.04% of the problems (10 out of 27) with the minimum number of iterations.
The result analysis shows that the FHBB1 method solved 22.22% (6 out of 27) of all problems with the minimum number of function evaluations, compared to the FBB1, FBB2, HBB1, HBB2, FHBB2, and FMSM methods, which recorded 0% (0 out of 27), 7.41% (2 out of 27), 25.93% (7 out of 27), 11.11% (3 out of 27), 7.41% (2 out of 27), and 0% (0 out of 27), respectively. Furthermore, Table 2 shows that 14.81% (4 out of 27) of the problems were solved with an equal least number of function evaluations by two methods (FHBB1 and FHBB2), and 3.70% of the problems (1 out of 27) were solved with an equal least number of function evaluations by four methods (HBB1, HBB2, FHBB1, FHBB2). In total, the FHBB1 method solved 40.74% of the problems (11 out of 27) with the minimum number of function evaluations.
The summarized results in Table 3 indicate that the FHBB1 method successfully solved 55.56% of the problems (15 out of 27) in the shortest CPU time. In contrast, the FBB1, FBB2, HBB1, HBB2, FHBB2, and FMSM methods solved 0% (0 out of 27), 14.81% (4 out of 27), 22.22% (6 out of 27), 3.70% (1 out of 27), 0% (0 out of 27), and 0% (0 out of 27), respectively. One problem (3.70% of the problems) was solved in the shortest CPU time by two methods (FBB2 and FHBB1). In total, the FHBB1 method solved 59.26% of the problems (16 out of 27) with the minimum processing time.
The performance profiles presented in [42] are utilized to compare considered methods for the criteria NFE, CPU time, and NI in relation to FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods. Interpreting these profiles is simple: a higher curve signifies better algorithmic performance; the method corresponding to the upper performance profile graph is considered the winner [42].
In Figure 3 and Figure 4, we compare the performance profiles of NI, NFE, and CPU time for the FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods based on quantities provided in Table 1, Table 2 and Table 3. From Figure 3a,b and Figure 4, it is obvious that the FHBB1 graph reaches the value of one first, indicating that FHBB1 outperforms FBB1, FBB2, HBB1, HBB2, FHBB2, and FMSM methods with respect to NI, NFE, and CPU time.
The numerical results indicate that the BB gradient methods based on the dynamic neutrosophic decision achieve better results. While the neutrosophic approach introduces additional calculations for determining the input parameters based on three consecutive function values, the overall increase in computational cost is minimal compared to the BB method. Our numerical experiments show that the extra operations do not significantly affect the total runtime, as the dominant cost remains the function evaluations. This leads to the conclusion that incorporating dynamic neutrosophic sets into gradient methods substantially enhances numerical outcomes.

4.1. Image Restoration via FBB and FHBB Methods

Image restoration is a fundamental problem in image processing, where the goal is to reconstruct an unknown clean image $x \in \mathbb{R}^{n \times n}$ from a noisy observation $b$. In the additive Gaussian noise model, we have

$$b = x + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2), \tag{47}$$

where $\eta$ represents white Gaussian noise with variance $\sigma^2$.
Over the years, a wide range of denoising approaches has been proposed. Classical filtering methods, such as Gaussian smoothing or median filtering, are computationally efficient but tend to oversmooth edges and remove fine details. More advanced statistical and transform-based methods include wavelet thresholding, sparse coding, and non-local means (NLMs), which exploit self-similarity in images to improve restoration quality. More recently, learning-based approaches, especially deep neural networks, have achieved state-of-the-art performance, but they require large amounts of training data and often lack interpretability and stability guarantees. Deep denoisers such as DnCNN achieve excellent quantitative performance by learning complex nonlinear mappings from large training datasets; however, they require substantial training effort and GPU resources, and their performance may degrade when the noise statistics differ from the training data. In contrast, variational TV-based methods provide transparent control of regularization and stable behavior across noise levels without training. For these reasons, and given the optimization-focused scope of this work, we restrict our comparisons to classical variational methods while acknowledging the strong empirical results of DnCNN-type approaches.
In contrast to these data-driven approaches, variational methods provide a principled framework for image denoising by explicitly formulating an optimization problem that balances data fidelity and regularization. A widely adopted model is based on total variation (TV) regularization, introduced by Rudin, Osher, and Fatemi in [43]. The TV regularizer promotes piecewise smooth reconstructions while preserving important edge structures, which are crucial in natural images. Unlike quadratic smoothness priors, TV regularization avoids excessive blurring across edges, making it particularly effective for images with sharp transitions.
A common variational formulation for image denoising is given by

$$\min_x \frac{1}{2} \|x - b\|_2^2 + \lambda \, TV(x), \tag{48}$$

where $TV(x)$ denotes the total variation of the image, and $\lambda > 0$ is a regularization parameter that balances data fidelity and smoothness. The TV term is defined as

$$TV(x) = \sum_{i,j} \sqrt{(x_{i+1,j} - x_{i,j})^2 + (x_{i,j+1} - x_{i,j})^2}. \tag{49}$$
In summary, the TV-based denoising framework provides a robust and interpretable mechanism for image restoration. It effectively preserves important edge structures by penalizing oscillatory components while allowing sharp discontinuities, which is crucial for maintaining natural image details. At the same time, the regularization efficiently suppresses Gaussian noise without introducing pronounced artifacts, ensuring visually smooth yet accurate reconstructions. Furthermore, TV-based models are grounded in solid mathematical principles, offering interpretability and analytical transparency that data-driven approaches often lack. Finally, the same variational formulation exhibits strong generality and can be naturally extended to a wide range of inverse imaging problems, including deblurring, inpainting, and compressed sensing.

4.1.1. Application to Image Denoising

The denoising problem (48) can be solved by applying the proposed FBB/FHBB methods to the objective function

$$f(x) = \frac{1}{2} \|x - b\|_2^2 + \lambda \, TV(x). \tag{50}$$

At each iteration, the gradient is computed as

$$\nabla f(x) = (x - b) + \lambda \, \nabla TV(x), \tag{51}$$

where $\nabla TV(x)$ is obtained from the divergence of normalized gradients.
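As a sketch of how (50) and (51) translate into code, the smoothed TV gradient can be computed with forward differences and a backward-difference divergence. This is our discretization; the smoothing constant eps is an assumption added to avoid division by zero:

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of the smoothed TV term (49): the negative divergence
    of the normalized image gradient."""
    # forward differences with Neumann boundary (last row/column difference = 0)
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # backward-difference divergence (adjoint of the forward difference)
    div = px - np.vstack([np.zeros((1, x.shape[1])), px[:-1, :]])
    div += py - np.hstack([np.zeros((x.shape[0], 1)), py[:, :-1]])
    return -div

def denoise_gradient(x, b, lam):
    """Gradient (51) of the TV-denoising objective (50)."""
    return (x - b) + lam * tv_gradient(x)
```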
In practice, we will demonstrate the effectiveness of these algorithms on standard test images, such as Barbara, baboon, and other benchmark datasets. Different levels of Gaussian noise $\sigma \in \{0.03, 0.05, 0.08, 0.1\}$ and a range of regularization parameters $\lambda$ will be considered.

4.1.2. Comparative Analysis

We first introduce two variants of the FBB method, which differ in how the step-size parameter $\nu_k$ is applied during the iterations. Global FBB computes a single scalar value of $\nu_k$ for the entire image. This approach is simple and computationally efficient, providing stable convergence and consistent restoration quality, but it may oversmooth fine details, especially in regions with high noise or complex textures. Pixel-wise FBB, in contrast, computes a spatially varying step size $\nu_k(i, j)$, allowing the algorithm to adapt to the local properties of each pixel. This enhances edge and texture preservation while maintaining fine structures, at the cost of higher computational complexity and potentially more complex stability behavior.
In the experiments, we compare these two variants along with the previously described FHBB method. In the FHBB algorithm, a slight modification was made by omitting $h$ from the descent step, using $d = -\lambda g$ for improved stability and consistency with the implementation. We also evaluate the influence of the two standard BB step-size rules (10) and (11), assessing their impact on convergence speed, numerical stability, and restoration quality within both global and pixel-wise settings.
Performance is assessed using PSNR and SSIM as quality metrics, along with the number of iterations and CPU time measured across multiple runs and noise realizations. This evaluation provides a comprehensive view of the trade-offs between global and pixel-wise strategies, the effect of BB step-size variants, and the overall performance of all three algorithms in denoising tasks.
Before proceeding to detailed performance comparisons, we first analyze the relationship between the parameters $\sigma$ (noise level) and $\lambda$ (regularization weight). The goal is to identify suitable $\lambda$ values for each level of noise, such that the reconstructed images achieve optimal visual and quantitative quality across all algorithms. We evaluate three methods: FBB Pixel, FBB Global, and FHBB. Initial experiments indicate that the optimal values of $\lambda$ are typically close to the corresponding noise level $\sigma$, especially for smaller $\sigma$. Based on this observation, for each $\sigma$, we tested three to four nearby values of $\lambda$ (slightly smaller, equal, and slightly larger). All methods were tested on standard benchmark images—Barbara, baboon, pepper, boats, and others. In addition to the basic stopping criterion based on a maximum number of iterations, an additional condition was included to ensure convergence stability, defined as

$$\frac{\|x_k - x_{k-1}\|}{\|x_{k-1}\|} < 10^{-3} \quad \text{and} \quad \frac{|f_k - f_{k-1}|}{|f_{k-1}|} < 10^{-3},$$

where $x_k$ and $f_k$ denote the current iterate and objective-function value, respectively. The detailed numerical results are summarized in Table 4.
Based on Table 4, we further analyze the optimal selection of the regularization parameter λ for each fixed noise level σ . For every block of results corresponding to a specific σ , several values of λ were tested, and the quantitative metrics (PSNR, SSIM, number of iterations, and computational time) were recorded for all three algorithms. Within each block, the best-performing entries—highlighted in bold—represent the most favorable trade-off between denoising quality and computational efficiency for that particular σ .
By examining the highlighted results for each fixed noise level $\sigma$, a nearly linear relationship between $\sigma$ and the optimal regularization parameter $\lambda$ can be observed. For lower noise levels, the best results are typically obtained when $\lambda \approx \sigma$, while for higher $\sigma$ values, the optimal $\lambda$ tends to stabilize around 0.08–0.10. This behavior is consistent with the expected characteristics of TV-based regularization, where stronger noise requires slightly higher regularization to balance smoothness and detail preservation.
Optimal Parameter Pairs. From the highlighted results in Table 4, the following $(\sigma, \lambda)$ pairs were identified as empirically optimal across the tested images and algorithms:

$$(\sigma, \lambda) \in \{(0.03, 0.03),\ (0.05, 0.05),\ (0.08, 0.06),\ (0.1, 0.08),\ (0.12, 0.08),\ (0.15, 0.08)\}.$$
These values will be used in the subsequent analysis to further investigate the convergence behavior and robustness of the proposed methods under different noise conditions.
After identifying the optimal ( σ , λ ) pairs, we proceed to compare the three FBB methods under a fixed number of iterations. The motivation for this analysis stems from the observation that all three methods eventually reach a similar target accuracy and saturate in terms of restoration quality; the primary difference lies in the number of iterations required to achieve that accuracy. For instance, one method may reach near-optimal performance in 30 iterations, while another may require 100 iterations. Clearly, the method reaching the desired quality in fewer iterations is more efficient.
To investigate this, we fix the number of iterations at several predefined values, namely 10, 20, 30, 50, and 80, and compare the performance of all three methods. For this initial comparison, we use the same step-size rule (BB1) for all methods; the effect of BB2 will be examined in subsequent experiments. For each selected $(\sigma, \lambda)$ pair, two random images are chosen from our benchmark set, and the algorithms are run for the specified iteration counts. Performance is evaluated using standard metrics such as PSNR, SSIM, and CPU time, allowing a direct comparison of efficiency and restoration quality across the different methods (see Table 5).
Based on the results presented in Table 5, it can be clearly observed that for smaller $(\sigma, \lambda)$ pairs, the first algorithm, FBB Pixel, consistently outperforms the others. For each row in the table, the best-performing values among the three algorithms are highlighted in bold, considering PSNR, SSIM, and time as evaluation criteria. It is evident that for $\sigma \leq 0.05$, the FBB Pixel method achieves superior reconstruction quality, reflected in higher PSNR and SSIM values. Conversely, for higher noise levels and, consequently, larger regularization weights $\lambda$, the third algorithm, FHBB, demonstrates the best overall performance across all evaluated metrics (PSNR, SSIM, and time). Indeed, most of the bolded results in the table correspond to this method, indicating its robustness and efficiency under more challenging noise conditions. Hence, additional statistical testing is unnecessary, as the superiority of the FHBB algorithm is clearly evident from the presented data.
Another noteworthy observation is that the FBB Global variant generally achieves shorter computational times compared to FBB Pixel, whereas the latter attains higher reconstruction accuracy in terms of PSNR and SSIM. This outcome is consistent with expectations, as the pixel-wise strategy enables finer local adaptation to spatial variations in noise and image structure, albeit at a higher computational cost.
Furthermore, when the same experiments are repeated using the alternative BB2 step-size rule, similar trends are observed across all tested noise levels. The relative performance ranking among the three algorithms remains stable, with FHBB maintaining its leading position. To maintain clarity and avoid unnecessary redundancy, the detailed BB2 tables are omitted here. Instead, the subsequent section provides a focused comparison of the FHBB and FBB Pixel algorithms under both BB1 and BB2 formulations, aiming to identify which configuration offers the most favorable balance between convergence rate, reconstruction accuracy, and computational efficiency.

4.1.3. Comparative Analysis of BB1 and BB2 Step-Size Rules for FHBB and FBB Pixel Variants

Since the FHBB and FBB Pixel variants exhibited the most favorable overall performance—the former for larger values of $\sigma$ and the latter for smaller ones—we further compare each of these two algorithms under different selections of the BB parameter, specifically BB1 and BB2, given by formulas (10) and (11).
Table 6 summarizes the comparative performance of the FHBB algorithm for the two BB configurations, whereas Table 7 reports the corresponding analysis for the FBB Pixel variant. In accordance with the previous observations, slightly higher σ values were employed for the FHBB experiments, while lower σ values were chosen for the FBB Pixel case to ensure consistency across evaluations.
For each $(\sigma, \lambda)$ pair, two random images from the test set were selected. From the results presented in Table 6, no definitive conclusion can be drawn regarding the superiority of FHBB1 or FHBB2. Nevertheless, a consistent tendency can be identified: FHBB1 achieves marginally better performance for smaller $\sigma$ values, whereas FHBB2 becomes more effective as $\sigma$ increases (see Table 6).
A comparable pattern is observed in Table 7, where neither FBB1 Pixel nor FBB2 Pixel decisively outperforms the other. However, the trend is reversed compared to the previous analysis—the FBB2 Pixel variant shows a slight advantage for smaller values of $\sigma$, while the FBB1 Pixel variant exhibits clear superiority for larger $\sigma$. Overall, a modest yet consistent advantage can be attributed to the FBB1 Pixel variant when $\sigma$ is high.

4.1.4. Visual Comparison of Reconstruction Quality

To complement the quantitative evaluation presented in the previous subsections, we provide a visual comparison of the reconstructed images obtained using the three algorithms—FBB Pixel, FBB Global, and FHBB. The aim of this analysis is to highlight the qualitative behavior of each approach in terms of texture recovery, edge sharpness, and visual consistency under representative parameter settings.
Figure 5 illustrates the results for a moderate noise configuration with parameters $\sigma = 0.12$, $\lambda = 0.08$, and num_iters = 30. This setup corresponds to the first Barzilai–Borwein step-size rule determined by (10). As confirmed by the quantitative metrics, the FHBB method provides the visually most convincing reconstructions, preserving image structure while effectively removing Gaussian noise. We observe that the FBB Pixel variant slightly outperforms the FBB Global approach in terms of both PSNR and SSIM, which is consistent with the quantitative analysis presented earlier. This behavior is expected, as the pixel-wise formulation allows local step-size adaptation, enabling better preservation of fine details and sharp transitions within the image. Nevertheless, both variants demonstrate stable performance and comparable overall denoising quality, confirming the consistency of the FBB framework across different parameter regimes.
A similar trend is observed for the second configuration shown in Figure 6, where the noise level is increased to $\sigma = 0.15$, while $\lambda = 0.08$ and num_iters = 30 are kept fixed.
The second Barzilai–Borwein rule (11) is used in this case. The higher noise intensity further accentuates the differences between the methods. Once again, FHBB demonstrates superior visual quality, maintaining structural details and preventing oversmoothing, consistent with its performance observed for quantitative measures. The FBB Pixel and FBB Global methods exhibit alternating advantages depending on the local image characteristics, confirming that their performance is image-dependent and complementary rather than hierarchical.
Overall, the visual analysis aligns with the conclusions drawn from the quantitative evaluation: the FHBB algorithm consistently outperforms both FBB Pixel and FBB Global for both Barzilai–Borwein step-size schemes, demonstrating robustness and superior visual fidelity. The observed variations between the two FBB variants are expected and reflect the trade-offs between localized adaptation and global smoothness inherent to their formulations.

5. Conclusions and Discussion of Further Research

We propose the inclusion of an additional parameter in the step size of the BB gradient methods for solving unconstrained optimization problems. This corrective parameter is derived from a suitably defined neutrosophic logic system. As a result, we introduce a composite step size for the BB iterations that incorporates the corrective parameter, enabling the method to adapt based on previous iteration behavior. Theoretical analysis demonstrates that the proposed iterations converge under the same conditions as the original methods. Furthermore, numerical comparisons indicate that the proposed methods outperform the standard BB variants according to Dolan–Moré performance profiles, particularly in terms of CPU time, the number of function evaluations, and the number of iterations required. In addition, experimental results confirm that the proposed methods perform well on practical applications such as image restoration, particularly in image denoising tasks.
The use of a backtracking line search guarantees the global convergence of the FBB method; however, it substantially increases CPU time and the number of function evaluations. To reduce both CPU time and the number of function evaluations in the iterative scheme (18), future research will explore a variant that omits the parameter $t_k$. This modification leads to a new iterative scheme of the form

$$x_{k+1} = x_k - \nu_k \lambda_k^{BB} g_k. \tag{52}$$

The iterative scheme (52) relies on two parameters, $\nu_k$ and $\lambda_k^{BB}$, whereas the original scheme (18) depends on three. This difference provides greater flexibility in defining the fuzzy parameter $\nu_k$. The method (52), referred to as the modified BB gradient method based on the neutrosophic logic system (MFBB), will be the subject of future investigation.
Additionally, a promising direction for future research is to conduct a systematic sensitivity analysis of the neutrosophic parameters $(c_1, c_2, \sigma)$. Such an investigation could provide deeper insight into their influence on the performance of neutrosophic-based gradient methods and help further optimize the proposed framework.

Author Contributions

Conceptualization, P.S.S. and D.S.; Methodology, P.S.S. and D.S.; Software, B.D.I. and M.M.; Validation, P.S.S., B.D.I., M.M. and D.S.; Formal analysis, P.S.S. and B.D.I.; Investigation, M.M.; Writing—original draft, B.D.I., M.M. and D.S.; Writing—review & editing, P.S.S., M.M. and D.S.; Visualization, B.D.I. and M.M.; Supervision, P.S.S., B.D.I. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

Predrag S. Stanimirović, Branislav D. Ivanov, and Marko Miladinović are supported by the Ministry of Science, Technological Development and Innovation, Republic of Serbia, Grant 451-03-137/2025-03/200124. Predrag S. Stanimirović is supported by the Ministry of Science and Technology of China under grant H20240841.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and code will be provided by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999.
  2. Sun, W.; Yuan, Y.-X. Optimization Theory and Methods: Nonlinear Programming; Springer: Berlin/Heidelberg, Germany, 2006.
  3. Brezinski, C. A classification of quasi-Newton methods. Numer. Algorithms 2003, 33, 123–135.
  4. Djordjević, S.S. Two modifications of the method of the multiplicative parameters in descent gradient methods. Appl. Math. Comput. 2012, 218, 8672–8683.
  5. Ivanov, B.; Stanimirović, P.S.; Milovanović, G.V.; Djordjević, S.; Brajević, I. Accelerated multiple step-size methods for solving unconstrained optimization problems. Optim. Methods Softw. 2021, 36, 998–1029.
  6. Petrović, M.J. An accelerated Double Step Size method in unconstrained optimization. Appl. Math. Comput. 2015, 250, 309–319.
  7. Stanimirović, P.S.; Miladinović, M.B. Accelerated gradient descent methods with line search. Numer. Algorithms 2010, 54, 503–520.
  8. Stanimirović, P.S.; Milovanović, G.V.; Petrović, M.J. A transformation of accelerated double step size method for unconstrained optimization. Math. Probl. Eng. 2015, 2015, 283679.
  9. Barzilai, J.; Borwein, J.M. Two-point step size gradient method. IMA J. Numer. Anal. 1988, 8, 141–148.
  10. Raydan, M. On the Barzilai and Borwein choice of steplength for the gradient method. IMA J. Numer. Anal. 1993, 13, 321–326.
  11. Dai, Y.H.; Liao, L.Z. R-linear convergence of the Barzilai and Borwein gradient method. IMA J. Numer. Anal. 2002, 22, 1–10.
  12. Raydan, M. The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 1997, 7, 26–33.
  13. Dai, Y.H.; Yuan, J.Y.; Yuan, Y. Modified two-point step-size gradient methods for unconstrained optimization. Comput. Optim. Appl. 2002, 22, 103–109.
  14. Dai, Y.H.; Yuan, Y.X. Alternate step gradient method. Optimization 2003, 52, 395–415.
  15. Dai, Y.H.; Yuan, Y. Alternate minimization gradient method. IMA J. Numer. Anal. 2003, 23, 377–393.
  16. Dai, Y.H.; Hager, W.W.; Schittkowski, K.; Zhang, H. The cyclic Barzilai–Borwein method for unconstrained optimization. IMA J. Numer. Anal. 2006, 26, 604–627.
  17. Raydan, M.; Svaiter, B.F. Relaxed steepest descent and Cauchy-Barzilai-Borwein method. Comput. Optim. Appl. 2002, 21, 155–167.
  18. Liu, P.; Yuan, Z.; Zhuo, Y.; Shao, H. Two efficient spectral hybrid CG methods based on memoryless BFGS direction and Dai–Liao conjugacy condition. Optim. Methods Softw. 2024, 39, 1445–1463.
  19. Shao, F.; Shao, H.; Wu, B.; Wang, H.; Liu, P.; Liu, M. A conjugate gradient algorithmic framework for unconstrained optimization with applications: Convergence and rate analyses. Appl. Numer. Math. 2026, 220, 13–28.
  20. Li, L.; Xie, P.; Zhang, L. A novel numerical method tailored for unconstrained optimization problems. arXiv 2025, arXiv:2504.02832.
  21. Lima-Junior, F.R. Advances in Fuzzy Logic and Artificial Neural Networks. Mathematics 2024, 12, 3949.
  22. Christianto, V.; Smarandache, F. A Review of Seven Applications of Neutrosophic Logic: In Cultural Psychology, Economics Theorizing, Conflict Resolution, Philosophy of Science, etc. Multidiscip. Sci. J. 2019, 2, 128–137.
  23. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  24. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
  25. Smarandache, F. A Unifying Field in Logics, Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Rehoboth, NM, USA, 1999.
  26. Smarandache, F. Neutrosophic Logic—A Generalization of the Intuitionistic Fuzzy Logic. 25 January 2016. Available online: https://ssrn.com/abstract=2721587 (accessed on 23 October 2025).
  27. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multispace Multistruct. 2010, 4, 410–413.
  28. Shi, Z.-J. Convergence of quasi-Newton method with new inexact line search. J. Math. Anal. Appl. 2006, 315, 120–131.
  29. Sun, Q.Y.; Liu, X.H. Global convergence results of a new three-term memory gradient method. J. Oper. Res. Soc. Jpn. 2004, 47, 63–72.
  30. Stanimirović, P.S.; Ivanov, B.; Stanujkić, D.; Katsikis, V.N.; Mourtas, S.D.; Kazakovtsev, L.A.; Edalatpanah, S.A. Improvement of unconstrained optimization methods based on symmetry involved in neutrosophy. Symmetry 2023, 15, 250.
  31. Stanimirović, P.S.; Ivanov, B.; Stanujkić, D.; Kazakovtsev, L.A.; Krutikov, V.N.; Karabašević, D. Fuzzy adaptive parameter in the Dai–Liao optimization method based on neutrosophy. Symmetry 2023, 15, 1217.
  32. Andrei, N. An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms 2006, 42, 63–73.
  33. Gao, J.; Ou, Y. A hybrid BB-type method for solving large scale unconstrained optimization. J. Appl. Math. Comput. 2023, 69, 2105–2133.
  34. Brown, A.A.; Bartholomew-Biggs, M.C. Some effective methods for unconstrained optimization based on the solution of systems of ordinary differential equations. J. Optim. Theory Appl. 1989, 62, 211–224.
  35. Andrei, N. Relaxed Gradient Descent and a New Gradient Descent Methods for Unconstrained Optimization. Available online: https://camo.ici.ro/neculai/newgrad.pdf (accessed on 1 October 2025).
  36. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA; London, UK, 1970.
  37. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  38. Shi, Z.-J. Convergence of line search methods for unconstrained optimization. Appl. Math. Comput. 2004, 157, 393–405.
  39. Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L. Spectral properties of Barzilai–Borwein rules in solving singly linearly constrained optimization problems subject to lower and upper bounds. SIAM J. Optim. 2020, 30, 1300–1326.
  40. Andrei, N. An unconstrained optimization test functions collection. Adv. Model. Optim. 2008, 10, 147–161.
  41. Bongartz, I.; Conn, A.R.; Gould, N.; Toint, P.L. CUTE: Constrained and unconstrained testing environments. ACM Trans. Math. Softw. 1995, 21, 123–160.
  42. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  43. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 1992, 60, 259–268.
Figure 1. Graphs of T(ϑ), F(ϑ), and I(ϑ) under different parameter settings in (20)–(22).
Figure 2. De-neutrosophication under different parameter settings in (20)–(22).
Figure 3. NI and NFE performance profiles for FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods.
Figure 4. CPU time performance profiles for FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods.
Figure 5. Visual comparison of denoising results for σ = 0.12, λ = 0.08, num_iters = 30, and BB1. From left to right: FBB Pixel, FBB Global, and FHBB. The FHBB method achieves the most visually accurate restoration, while FBB Pixel slightly dominates over FBB Global.
Figure 6. Visual comparison for σ = 0.15, λ = 0.08, num_iters = 30, and BB2. The FHBB method consistently produces the best visual quality, while FBB Pixel and FBB Global alternately perform better depending on image structure and texture distribution.
Table 1. Summary test results of FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods for the NI.

Test Function | FBB1 | FBB2 | HBB1 | HBB2 | FHBB1 | FHBB2 | FMSM
Extended Penalty | 460 | 463 | 459 | 448 | 431 | 438 | 373
Perturbed Quadratic | 57,888 | 10,846 | 7776 | 13,943 | 7523 | 14,474 | 58,008
Raydan 2 | 65 | 65 | 69 | 69 | 60 | 60 | 87
Diagonal 2 | 32,020 | 5607 | 4684 | 8575 | 4312 | 9283 | 20,764
Hager | 789 | 663 | 710 | 671 | 677 | 704 | 743
Extended Tridiagonal 1 | 9015 | 426 | 232 | 457 | 152 | 158 | 4233
Extended TET | 111 | 80 | 100 | 161 | 110 | 107 | 237
Diagonal 5 | 40 | 40 | 60 | 60 | 51 | 51 | 105
Extended Himmelblau | 257 | 238 | 138 | 136 | 241 | 257 | 376
Perturbed quadratic diagonal | 40,954 | 6127 | 5520 | 15,521 | 6659 | 17,943 | 40,791
Extended quadratic penalty QP2 | 141 | 78 | 704 | 753 | 998 | 1759 | 2077
Extended quadratic exponential EP1 | 40 | 40 | 34 | 34 | 33 | 33 | 92
Extended Tridiagonal 2 | 427 | 394 | 384 | 400 | 373 | 363 | 441
ENGVAL1 (CUTE) | 303 | 300 | 319 | 325 | 289 | 299 | 302
QUARTC (CUTE) | 192 | 192 | 166 | 166 | 10 | 10 | 216
Diagonal 6 | 120 | 120 | 69 | 69 | 91 | 91 | 87
Generalized Quartic | 203 | 191 | 116 | 115 | 171 | 169 | 148
Diagonal 7 | 60 | 60 | 60 | 60 | 50 | 50 | 114
Diagonal 8 | 60 | 60 | 50 | 50 | 50 | 50 | 81
Extended BD1 (Block Diagonal) | 320 | 267 | 150 | 167 | 277 | 282 | 206
Extended Cliff | 736 | 299 | 505 | 1426 | 445 | 2215 | 17,963
NONDIA (CUTE) | 222,837 | 2663 | 584 | 284 | 823 | 1445 | 378,685
DQDRTIC (CUTE) | 1860 | 838 | 461 | 567 | 871 | 579 | 687
Extended Freudenstein and Roth | 5486 | 319 | 322 | 925 | 365 | 653 | 19,612
Extended White and Holst | 12,512 | 660 | 760 | 876 | 913 | 2874 | 8552
Extended Beale | 1567 | 457 | 310 | 435 | 522 | 1037 | 905
EDENSCH (CUTE) | 252 | 256 | 291 | 300 | 240 | 244 | 313
Table 2. Summary test results of FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods for the NFE.

Test Function | FBB1 | FBB2 | HBB1 | HBB2 | FHBB1 | FHBB2 | FMSM
Extended Penalty | 2064 | 2259 | 1716 | 1522 | 1479 | 1914 | 2320
Perturbed Quadratic | 324,714 | 24,657 | 16,465 | 49,688 | 15,940 | 51,578 | 334,615
Raydan 2 | 160 | 160 | 168 | 168 | 150 | 150 | 231
Diagonal 2 | 175,307 | 13,455 | 11,655 | 31,228 | 10,840 | 33,441 | 119,819
Hager | 3131 | 1775 | 2262 | 2181 | 1853 | 2425 | 3129
Extended Tridiagonal 1 | 81,588 | 1862 | 541 | 1634 | 354 | 366 | 38,232
Extended TET | 422 | 310 | 260 | 422 | 284 | 274 | 662
Diagonal 5 | 110 | 110 | 150 | 150 | 132 | 132 | 299
Extended Himmelblau | 704 | 695 | 356 | 352 | 562 | 594 | 1119
Perturbed quadratic diagonal | 417,437 | 18,056 | 12,586 | 64,416 | 15,286 | 74,392 | 407,013
Extended quadratic penalty QP2 | 3436 | 2274 | 2646 | 3307 | 3437 | 7059 | 14,051
Extended quadratic exponential EP1 | 404 | 404 | 178 | 178 | 176 | 176 | 579
Extended Tridiagonal 2 | 2155 | 1689 | 1756 | 1841 | 1566 | 1515 | 2070
ENGVAL1 (CUTE) | 2153 | 2316 | 1500 | 1469 | 1211 | 1412 | 2942
QUARTC (CUTE) | 454 | 454 | 382 | 382 | 70 | 70 | 492
Diagonal 6 | 270 | 270 | 168 | 168 | 212 | 212 | 334
Generalized Quartic | 558 | 519 | 312 | 310 | 402 | 398 | 466
Diagonal 7 | 160 | 160 | 160 | 160 | 140 | 140 | 570
Diagonal 8 | 190 | 190 | 150 | 150 | 150 | 150 | 272
Extended BD1 (Block Diagonal) | 760 | 664 | 390 | 424 | 584 | 614 | 700
Extended Cliff | 9796 | 4090 | 2598 | 9319 | 2091 | 14,190 | 146,314
NONDIA (CUTE) | 2,736,952 | 11,099 | 3124 | 1666 | 3724 | 8259 | 4,555,897
DQDRTIC (CUTE) | 10,192 | 2689 | 1126 | 1772 | 2022 | 5322 | 3130
Extended Freudenstein and Roth | 39,918 | 2307 | 1337 | 3619 | 1506 | 3810 | 77,132
Extended White and Holst | 109,712 | 3148 | 2160 | 3736 | 2434 | 12,667 | 64,083
Extended Beale | 8441 | 2024 | 790 | 1370 | 1374 | 3374 | 4433
EDENSCH (CUTE) | 888 | 880 | 920 | 966 | 771 | 716 | 1207
Table 3. Summary test results of FBB1, FBB2, HBB1, HBB2, FHBB1, FHBB2, and FMSM methods for the CPU time. CPU time is given in seconds.

Test Function | FBB1 | FBB2 | HBB1 | HBB2 | FHBB1 | FHBB2 | FMSM
Extended Penalty | 1.156 | 1.391 | 1.563 | 0.984 | 0.938 | 1.641 | 1.297
Perturbed Quadratic | 181.328 | 18.359 | 11.000 | 27.938 | 11.359 | 29.703 | 184.438
Raydan 2 | 0.266 | 0.141 | 0.313 | 0.250 | 0.109 | 0.141 | 0.250
Diagonal 2 | 238.391 | 17.906 | 14.359 | 35.594 | 12.672 | 35.031 | 140.297
Hager | 8.453 | 4.906 | 4.938 | 4.719 | 4.094 | 5.688 | 6.813
Extended Tridiagonal 1 | 240.219 | 5.250 | 2.766 | 4.469 | 1.172 | 2.000 | 81.359
Extended TET | 0.766 | 0.406 | 0.438 | 0.734 | 0.406 | 0.516 | 1.000
Diagonal 5 | 0.391 | 0.328 | 0.500 | 0.500 | 0.313 | 0.438 | 1.047
Extended Himmelblau | 0.438 | 0.328 | 0.203 | 0.281 | 0.156 | 0.359 | 0.563
Perturbed quadratic diagonal | 182.313 | 10.750 | 8.563 | 31.922 | 9.859 | 37.109 | 181.406
Extended quadratic penalty QP2 | 2.281 | 1.234 | 2.438 | 2.313 | 2.734 | 5.250 | 9.000
Extended quadratic exponential EP1 | 0.422 | 0.219 | 0.188 | 0.281 | 0.172 | 0.344 | 0.578
Extended Tridiagonal 2 | 1.234 | 0.766 | 0.969 | 1.031 | 1.016 | 0.922 | 1.219
ENGVAL1 (CUTE) | 1.125 | 1.031 | 1.031 | 0.969 | 0.734 | 0.953 | 1.516
QUARTC (CUTE) | 3.078 | 2.688 | 1.875 | 2.063 | 0.125 | 0.266 | 2.391
Diagonal 6 | 0.438 | 0.219 | 0.281 | 0.250 | 0.266 | 0.344 | 0.313
Generalized Quartic | 0.469 | 0.406 | 0.219 | 0.266 | 0.203 | 0.344 | 0.359
Diagonal 7 | 0.297 | 0.203 | 0.359 | 0.281 | 0.188 | 0.266 | 0.734
Diagonal 8 | 0.438 | 0.234 | 0.281 | 0.250 | 0.203 | 0.281 | 0.500
Extended BD1 (Block Diagonal) | 0.813 | 0.750 | 0.438 | 0.531 | 0.594 | 0.656 | 0.609
Extended Cliff | 4.469 | 1.594 | 2.063 | 4.859 | 1.328 | 4.906 | 77.969
NONDIA (CUTE) | 664.313 | 5.297 | 1.734 | 0.828 | 1.813 | 3.156 | 1723.219
DQDRTIC (CUTE) | 3.859 | 1.094 | 0.875 | 1.125 | 1.047 | 2.172 | 1.578
Extended Freudenstein and Roth | 9.031 | 0.672 | 0.813 | 1.453 | 0.688 | 1.234 | 23.578
Extended White and Holst | 209.938 | 8.359 | 5.906 | 8.719 | 6.391 | 26.531 | 188.328
Extended Beale | 25.438 | 6.875 | 3.250 | 4.859 | 4.594 | 10.094 | 14.141
EDENSCH (CUTE) | 4.359 | 3.781 | 3.891 | 3.938 | 3.094 | 3.594 | 5.094
Table 4. Quantitative results for the FBB Pixel, Global, and Hybrid methods across different images and parameter settings. For each fixed noise level σ, multiple values of λ were tested, and the best-performing entries within each σ block (in terms of PSNR, SSIM, average iterations (Iter), and time) are highlighted in bold. Horizontal lines separate result groups corresponding to different σ values.

Image | Max Iter | σ | λ | FBB Pixel (PSNR / SSIM / Iter / Time) | FBB Global (PSNR / SSIM / Iter / Time) | FHBB (PSNR / SSIM / Iter / Time)
pepper | 100 | 0.03 | 0.01 | 33.57 / 0.843 / 6.7 / 1.22 | 33.64 / 0.845 / 6.3 / 0.88 | 33.66 / 0.845 / 10.7 / 1.31
pepper | 100 | 0.03 | 0.03 | 33.91 / 0.852 / 9.3 / 2.08 | 33.85 / 0.850 / 8.3 / 1.46 | 33.67 / 0.847 / 16.7 / 2.33
pepper | 100 | 0.03 | 0.05 | 33.31 / 0.839 / 9.7 / 2.13 | 33.12 / 0.834 / 9.3 / 1.60 | 32.70 / 0.816 / 20.3 / 2.90
pepper | 100 | 0.05 | 0.03 | 31.80 / 0.789 / 32.0 / 7.02 | 31.80 / 0.789 / 30.0 / 5.23 | 31.93 / 0.795 / 22.0 / 3.03
pepper | 100 | 0.05 | 0.05 | 32.01 / 0.801 / 45.3 / 9.72 | 31.96 / 0.800 / 41.7 / 6.59 | 31.95 / 0.798 / 36.7 / 4.33
pepper | 100 | 0.05 | 0.07 | 31.76 / 0.797 / 42.7 / 8.72 | 31.65 / 0.795 / 38.3 / 5.62 | 31.38 / 0.786 / 56.3 / 6.54
barbara | 100 | 0.08 | 0.06 | 28.23 / 0.779 / 47.0 / 9.92 | 28.22 / 0.777 / 44.3 / 7.08 | 28.22 / 0.779 / 28.0 / 3.26
barbara | 100 | 0.08 | 0.08 | 27.99 / 0.784 / 55.3 / 9.42 | 27.97 / 0.784 / 61.3 / 9.58 | 27.89 / 0.781 / 59.3 / 8.69
barbara | 100 | 0.08 | 0.10 | 27.71 / 0.778 / 64.0 / 13.62 | 27.66 / 0.777 / 65.0 / 9.50 | 27.55 / 0.774 / 87.7 / 10.87
barbara | 100 | 0.10 | 0.08 | 27.33 / 0.741 / 52.0 / 10.91 | 27.33 / 0.741 / 54.3 / 8.27 | 27.33 / 0.743 / 42.0 / 4.71
barbara | 100 | 0.10 | 0.10 | 27.27 / 0.751 / 58.7 / 10.16 | 27.27 / 0.750 / 47.0 / 6.21 | 27.18 / 0.751 / 74.7 / 7.82
barbara | 100 | 0.10 | 0.12 | 27.13 / 0.752 / 68.3 / 12.14 | 27.12 / 0.752 / 59.3 / 7.98 | 26.95 / 0.748 / 89.3 / 9.39
baboon | 100 | 0.12 | 0.08 | 23.26 / 0.677 / 38.0 / 7.31 | 23.24 / 0.677 / 40.7 / 5.60 | 23.22 / 0.675 / 20.7 / 2.42
baboon | 100 | 0.12 | 0.10 | 22.96 / 0.652 / 55.3 / 9.46 | 22.96 / 0.654 / 54.0 / 8.21 | 22.89 / 0.648 / 27.0 / 4.48
baboon | 100 | 0.12 | 0.12 | 22.66 / 0.626 / 59.0 / 12.57 | 22.63 / 0.625 / 62.0 / 9.61 | 22.51 / 0.614 / 52.7 / 6.87
baboon | 100 | 0.12 | 0.14 | 22.39 / 0.600 / 64.3 / 13.74 | 22.35 / 0.599 / 63.0 / 8.95 | 22.17 / 0.581 / 86.3 / 9.32
baboon | 100 | 0.15 | 0.08 | 22.25 / 0.624 / 26.7 / 5.39 | 22.24 / 0.624 / 26.0 / 4.10 | 22.29 / 0.627 / 19.3 / 2.65
baboon | 100 | 0.15 | 0.10 | 22.85 / 0.622 / 46.0 / 8.07 | 22.36 / 0.623 / 47.7 / 6.52 | 22.36 / 0.621 / 22.3 / 2.68
baboon | 100 | 0.15 | 0.12 | 22.26 / 0.607 / 59.0 / 10.24 | 22.25 / 0.608 / 62.3 / 8.33 | 22.21 / 0.603 / 29.3 / 3.24
Table 5. Quantitative comparison of FBB1 Pixel, FBB1 Global, and FHBB1 methods for fixed iteration counts under the BB1 step-size rule. Blocks correspond to different (σ, λ) settings, with best-performing values highlighted in bold.

Image | Iter | σ | λ | FBB1 Pixel (PSNR / SSIM / Time) | FBB1 Global (PSNR / SSIM / Time) | FHBB1 Hybrid (PSNR / SSIM / Time)
boats | 10 | 0.03 | 0.03 | 34.90 / 0.909 / 3.30 | 34.75 / 0.905 / 2.32 | 34.71 / 0.905 / 1.94
boats | 20 | 0.03 | 0.03 | 34.87 / 0.910 / 6.90 | 34.74 / 0.905 / 4.65 | 34.69 / 0.905 / 4.28
boats | 30 | 0.03 | 0.03 | 34.83 / 0.909 / 10.98 | 34.70 / 0.905 / 8.58 | 34.67 / 0.904 / 6.63
boats | 50 | 0.03 | 0.03 | 34.72 / 0.906 / 19.05 | 34.63 / 0.904 / 11.42 | 34.63 / 0.904 / 10.16
baboon | 10 | 0.03 | 0.03 | 28.63 / 0.883 / 1.76 | 28.53 / 0.881 / 1.38 | 28.42 / 0.875 / 1.13
baboon | 20 | 0.03 | 0.03 | 28.55 / 0.879 / 4.33 | 28.48 / 0.878 / 3.42 | 28.39 / 0.873 / 2.68
baboon | 30 | 0.03 | 0.03 | 28.51 / 0.877 / 6.82 | 28.45 / 0.876 / 5.57 | 28.38 / 0.872 / 4.67
baboon | 50 | 0.03 | 0.03 | 28.44 / 0.874 / 10.80 | 28.40 / 0.872 / 7.02 | 28.36 / 0.871 / 5.86
baboon | 10 | 0.05 | 0.05 | 26.07 / 0.810 / 2.71 | 25.99 / 0.807 / 2.48 | 25.82 / 0.796 / 1.54
baboon | 20 | 0.05 | 0.05 | 25.98 / 0.803 / 5.46 | 25.93 / 0.802 / 3.62 | 25.77 / 0.791 / 2.95
baboon | 30 | 0.05 | 0.05 | 25.93 / 0.800 / 7.56 | 25.89 / 0.799 / 4.38 | 25.76 / 0.790 / 3.49
baboon | 50 | 0.05 | 0.05 | 25.86 / 0.795 / 12.18 | 25.83 / 0.794 / 8.61 | 25.73 / 0.788 / 7.14
barbara | 10 | 0.05 | 0.05 | 29.81 / 0.841 / 1.81 | 29.78 / 0.843 / 1.46 | 29.61 / 0.840 / 1.20
barbara | 20 | 0.05 | 0.05 | 29.75 / 0.843 / 3.74 | 29.72 / 0.844 / 2.88 | 29.57 / 0.840 / 2.48
barbara | 30 | 0.05 | 0.05 | 29.72 / 0.844 / 6.56 | 29.70 / 0.844 / 5.06 | 29.56 / 0.840 / 3.94
barbara | 50 | 0.05 | 0.05 | 29.65 / 0.843 / 10.20 | 29.64 / 0.843 / 8.14 | 29.54 / 0.839 / 6.41
pepper | 10 | 0.08 | 0.06 | 29.70 / 0.715 / 1.77 | 29.64 / 0.714 / 1.58 | 29.88 / 0.729 / 1.51
pepper | 20 | 0.08 | 0.06 | 29.87 / 0.727 / 4.08 | 29.78 / 0.723 / 2.95 | 29.98 / 0.736 / 2.55
pepper | 30 | 0.08 | 0.06 | 29.94 / 0.731 / 5.84 | 29.85 / 0.728 / 4.59 | 30.00 / 0.737 / 3.85
pepper | 50 | 0.08 | 0.06 | 30.03 / 0.738 / 8.87 | 29.95 / 0.734 / 6.93 | 30.04 / 0.740 / 6.49
barbara | 10 | 0.08 | 0.06 | 28.13 / 0.759 / 1.84 | 28.14 / 0.761 / 1.40 | 28.20 / 0.772 / 1.12
barbara | 20 | 0.08 | 0.06 | 28.19 / 0.770 / 3.40 | 28.18 / 0.769 / 2.87 | 28.22 / 0.778 / 2.42
barbara | 30 | 0.08 | 0.06 | 28.21 / 0.774 / 5.51 | 28.20 / 0.773 / 5.26 | 28.22 / 0.780 / 5.80
barbara | 50 | 0.08 | 0.06 | 28.23 / 0.780 / 10.59 | 28.22 / 0.779 / 7.62 | 28.23 / 0.782 / 6.19
boats | 10 | 0.10 | 0.08 | 28.39 / 0.699 / 3.57 | 28.27 / 0.698 / 2.55 | 28.48 / 0.719 / 2.06
boats | 20 | 0.10 | 0.08 | 28.57 / 0.718 / 6.54 | 28.41 / 0.712 / 4.84 | 28.58 / 0.731 / 4.10
boats | 30 | 0.10 | 0.08 | 28.63 / 0.725 / 10.82 | 28.46 / 0.719 / 6.99 | 28.60 / 0.734 / 5.57
boats | 50 | 0.10 | 0.08 | 28.69 / 0.734 / 14.69 | 28.54 / 0.729 / 11.49 | 28.63 / 0.739 / 9.12
yacht | 10 | 0.10 | 0.08 | 27.81 / 0.748 / 2.12 | 27.77 / 0.750 / 1.66 | 27.95 / 0.767 / 1.38
yacht | 20 | 0.10 | 0.08 | 27.93 / 0.765 / 3.27 | 27.86 / 0.762 / 2.50 | 27.99 / 0.776 / 2.27
yacht | 30 | 0.10 | 0.08 | 27.97 / 0.770 / 4.89 | 27.89 / 0.767 / 3.76 | 28.00 / 0.778 / 3.25
yacht | 50 | 0.10 | 0.08 | 28.01 / 0.777 / 10.13 | 27.94 / 0.775 / 7.77 | 28.02 / 0.782 / 6.29
pepper | 10 | 0.12 | 0.08 | 27.16 / 0.599 / 2.22 | 27.09 / 0.598 / 1.72 | 27.47 / 0.624 / 1.43
pepper | 20 | 0.12 | 0.08 | 27.42 / 0.619 / 4.28 | 27.29 / 0.613 / 3.30 | 27.62 / 0.636 / 2.71
pepper | 30 | 0.12 | 0.08 | 27.51 / 0.625 / 5.28 | 27.38 / 0.619 / 4.10 | 27.65 / 0.639 / 3.26
pepper | 50 | 0.12 | 0.08 | 27.63 / 0.635 / 8.52 | 27.51 / 0.629 / 6.53 | 27.71 / 0.644 / 5.38
boats | 10 | 0.12 | 0.08 | 26.85 / 0.616 / 3.61 | 26.73 / 0.613 / 2.41 | 27.07 / 0.640 / 2.30
boats | 20 | 0.12 | 0.08 | 27.06 / 0.635 / 6.39 | 26.88 / 0.626 / 5.77 | 27.18 / 0.651 / 3.99
boats | 30 | 0.12 | 0.08 | 27.12 / 0.641 / 8.77 | 26.95 / 0.632 / 6.71 | 27.20 / 0.654 / 5.31
boats | 50 | 0.12 | 0.08 | 27.21 / 0.650 / 14.39 | 27.06 / 0.642 / 12.29 | 27.25 / 0.659 / 8.84
pepper | 10 | 0.15 | 0.08 | 24.93 / 0.485 / 2.10 | 24.90 / 0.486 / 1.49 | 25.26 / 0.510 / 1.32
pepper | 20 | 0.15 | 0.08 | 25.14 / 0.501 / 3.43 | 25.06 / 0.497 / 2.66 | 25.38 / 0.519 / 2.20
pepper | 30 | 0.15 | 0.08 | 25.22 / 0.506 / 5.11 | 25.13 / 0.502 / 3.92 | 25.41 / 0.521 / 3.19
pepper | 50 | 0.15 | 0.08 | 25.33 / 0.514 / 8.71 | 25.25 / 0.510 / 6.70 | 25.46 / 0.525 / 5.73
pepper | 80 | 0.15 | 0.08 | 25.43 / 0.521 / 12.27 | 25.39 / 0.520 / 9.85 | 25.52 / 0.529 / 8.33
barbara | 10 | 0.15 | 0.08 | 24.24 / 0.541 / 2.21 | 24.28 / 0.543 / 1.71 | 24.55 / 0.562 / 1.49
barbara | 20 | 0.15 | 0.08 | 24.41 / 0.554 / 3.40 | 24.40 / 0.552 / 2.65 | 24.63 / 0.570 / 2.44
barbara | 30 | 0.15 | 0.08 | 24.47 / 0.557 / 5.10 | 24.45 / 0.557 / 4.06 | 24.65 / 0.572 / 3.42
barbara | 50 | 0.15 | 0.08 | 24.55 / 0.565 / 8.56 | 24.54 / 0.563 / 6.55 | 24.69 / 0.575 / 5.25
barbara | 80 | 0.15 | 0.08 | 24.66 / 0.573 / 15.78 | 24.64 / 0.572 / 10.98 | 24.73 / 0.578 / 8.97
Table 6. Quantitative results for FHBB1 and FHBB2. Bold values indicate the better result for each metric (PSNR and SSIM) between the two methods.

Image | Iter | σ | λ | FHBB1 (PSNR / SSIM) | FHBB2 (PSNR / SSIM)
motorbikes | 10 | 0.08 | 0.06 | 26.45 / 0.818 | 26.42 / 0.818
motorbikes | 20 | 0.08 | 0.06 | 26.44 / 0.818 | 26.40 / 0.816
motorbikes | 30 | 0.08 | 0.06 | 26.43 / 0.818 | 26.39 / 0.816
barbara | 10 | 0.08 | 0.06 | 28.20 / 0.772 | 28.18 / 0.771
barbara | 20 | 0.08 | 0.06 | 28.22 / 0.778 | 28.22 / 0.784
barbara | 30 | 0.08 | 0.06 | 28.22 / 0.780 | 28.22 / 0.785
boats | 10 | 0.10 | 0.08 | 28.48 / 0.719 | 28.42 / 0.714
boats | 20 | 0.10 | 0.08 | 28.58 / 0.731 | 28.64 / 0.746
boats | 30 | 0.10 | 0.08 | 28.60 / 0.734 | 28.67 / 0.750
yacht | 10 | 0.10 | 0.08 | 27.90 / 0.766 | 27.82 / 0.760
yacht | 20 | 0.10 | 0.08 | 27.96 / 0.776 | 27.96 / 0.785
yacht | 30 | 0.10 | 0.08 | 27.97 / 0.779 | 27.96 / 0.788
pepper | 20 | 0.12 | 0.08 | 27.62 / 0.636 | 27.84 / 0.655
pepper | 30 | 0.12 | 0.08 | 27.65 / 0.639 | 27.89 / 0.659
pepper | 50 | 0.12 | 0.08 | 27.71 / 0.644 | 27.93 / 0.662
boats | 20 | 0.12 | 0.08 | 27.15 / 0.648 | 27.33 / 0.669
boats | 30 | 0.12 | 0.08 | 27.18 / 0.651 | 27.36 / 0.672
boats | 50 | 0.12 | 0.08 | 27.22 / 0.656 | 27.39 / 0.676
yacht | 20 | 0.15 | 0.08 | 24.81 / 0.594 | 24.95 / 0.606
yacht | 30 | 0.15 | 0.08 | 24.83 / 0.596 | 24.98 / 0.609
yacht | 50 | 0.15 | 0.08 | 24.86 / 0.598 | 25.00 / 0.611
yacht | 80 | 0.15 | 0.08 | 24.90 / 0.602 | 25.03 / 0.613
motorbikes | 20 | 0.15 | 0.08 | 23.30 / 0.672 | 23.34 / 0.675
motorbikes | 30 | 0.15 | 0.08 | 23.30 / 0.673 | 23.35 / 0.676
motorbikes | 50 | 0.15 | 0.08 | 23.31 / 0.673 | 23.35 / 0.676
motorbikes | 80 | 0.15 | 0.08 | 23.33 / 0.674 | 23.36 / 0.676
Table 7. Quantitative results for FBB1 Pixel and FBB2 Pixel. Bold values indicate the better result for each metric (PSNR and SSIM) between the two methods.

Image | Iter | σ | λ | FBB1 Pixel (PSNR / SSIM) | FBB2 Pixel (PSNR / SSIM)
barbara | 5 | 0.03 | 0.03 | 32.11 / 0.888 | 32.25 / 0.895
barbara | 10 | 0.03 | 0.03 | 32.12 / 0.901 | 32.24 / 0.902
barbara | 20 | 0.03 | 0.03 | 32.05 / 0.901 | 32.17 / 0.902
pepper | 5 | 0.03 | 0.03 | 33.69 / 0.843 | 33.87 / 0.851
pepper | 10 | 0.03 | 0.03 | 33.85 / 0.849 | 33.91 / 0.852
pepper | 20 | 0.03 | 0.03 | 33.79 / 0.845 | 33.87 / 0.849
yacht | 10 | 0.05 | 0.05 | 31.44 / 0.881 | 31.42 / 0.872
yacht | 20 | 0.05 | 0.05 | 31.41 / 0.884 | 31.45 / 0.879
yacht | 30 | 0.05 | 0.05 | 31.37 / 0.885 | 31.45 / 0.883
boats | 10 | 0.05 | 0.05 | 32.32 / 0.854 | 32.19 / 0.839
boats | 20 | 0.05 | 0.05 | 32.34 / 0.859 | 32.28 / 0.848
boats | 30 | 0.05 | 0.05 | 32.32 / 0.861 | 32.32 / 0.854
motorbikes | 10 | 0.08 | 0.06 | 26.55 / 0.819 | 26.53 / 0.814
motorbikes | 20 | 0.08 | 0.06 | 26.54 / 0.821 | 26.55 / 0.817
motorbikes | 30 | 0.08 | 0.06 | 26.53 / 0.820 | 26.56 / 0.819
barbara | 10 | 0.08 | 0.06 | 28.13 / 0.759 | 27.95 / 0.736
barbara | 20 | 0.08 | 0.06 | 28.19 / 0.770 | 28.04 / 0.747
barbara | 30 | 0.08 | 0.06 | 28.21 / 0.774 | 28.10 / 0.755