Article

On q-Quasi-Newton’s Method for Unconstrained Multiobjective Optimization Problems

by Kin Keung Lai 1,*,†, Shashi Kant Mishra 2,† and Bhagwat Ram 3,†
1 College of Economics, Shenzhen University, Shenzhen 518060, China
2 Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi 221005, India
3 DST-Centre for Interdisciplinary Mathematical Sciences, Institute of Science, Banaras Hindu University, Varanasi 221005, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(4), 616; https://doi.org/10.3390/math8040616
Submission received: 1 April 2020 / Revised: 11 April 2020 / Accepted: 13 April 2020 / Published: 17 April 2020
(This article belongs to the Special Issue Numerical Methods)

Abstract

A parameter-free optimization technique is applied within Quasi-Newton's method for solving unconstrained multiobjective optimization problems. The components of the Hessian matrices are approximated using the q-derivative, and the approximations are positive definite at every iteration. The step length is computed by an Armijo-like rule which, due to the q-derivative, helps the iterates escape from a local minimum toward a global minimum at every iteration. Further, the rate of convergence is proved to be superlinear in a local neighborhood of a minimum point based on the q-derivative. Finally, the numerical experiments show better performance of the proposed method.

1. Introduction

Multiobjective optimization is the process of optimizing two or more real-valued objective functions at the same time. Usually there is no ideal minimizer that minimizes all objective functions at once, so the optimality concept is replaced by the idea of Pareto optimality/efficiency. A point is called Pareto optimal or efficient if there does not exist an alternative point with equal or smaller objective function values such that at least one objective function value is strictly smaller. In many applications such as engineering [1,2], economic theory [3], management science [4], machine learning [5,6], and space exploration [7], several multiobjective optimization techniques are used to make the desired decision. One of the basic approaches is the weighting method [8], where a single-objective optimization problem is created by weighting the several objective functions. Another approach is the ϵ-constraint method [9], where we minimize only a chosen objective function and keep the other objectives as constraints. Some multiobjective algorithms use a lexicographic method, where all objective functions are optimized in their order of priority [10,11]. First, the most preferred function is optimized; then that objective function is transformed into a constraint, and the second-priority objective function is optimized. This approach is repeated until the last objective function is optimized. The user needs to choose the sequence of objectives, and two lexicographic optimizations with distinct sequences of objective functions generally do not produce the same solution. The disadvantages of such approaches are the choice of weights, constraints, and importance of the functions, respectively, which are not known in advance and have to be specified from the beginning. Some other techniques [12,13,14] that do not need any prior information have been developed for solving unconstrained multiobjective optimization problems (UMOP), with at most a linear convergence rate. Other methods, such as heuristic or evolutionary approaches [15], provide an approximate Pareto front but do not guarantee the convergence property.
Newton's method [16], which solves single-objective optimization problems, has been extended to (UMOP); the extension is an a priori parameter-free optimization method [17]. In this case, the objective functions are twice continuously differentiable, no other parameter or ordering of the functions is needed, and each objective function is replaced with a quadratic model. The rate of convergence is superlinear, and it is quadratic if the second-order derivative is Lipschitz continuous. Newton's method has also been studied in Banach and Hilbert spaces for finding efficient solutions of (UMOP) [18]. A new type of Quasi-Newton algorithm has been developed to solve nonsmooth multiobjective optimization problems in which the directional derivative of every objective function exists [19].
A necessary condition for finding a vector critical point of (UMOP) is used in the steepest descent algorithm [12], where neither weighting factors nor ordering information for the different objective functions are assumed to be known. The relationship between critical points and efficient points is discussed in [17]: if the domain of (UMOP) is a convex set and the objective functions are component-wise convex, then every critical point is a weakly efficient point, and if the objective functions are component-wise strictly convex, then every critical point is an efficient point. New classes of vector invex and pseudoinvex functions for (UMOP) have also been characterized in terms of critical points and (weak) efficient points [20] by using the Fritz John (FJ) and Karush–Kuhn–Tucker (KKT) optimality conditions. Our focus is on Newton's direction for a standard scalar optimization problem which is implicitly induced by weighting the several objective functions. The weighting values are a priori unknown, non-negative KKT multipliers; that is, they need not be fixed in advance. Every new point generated by the Newton algorithm [17] induces such weights in the form of KKT multipliers.
Quantum calculus, or q-calculus, is also called calculus without limits. The q-analogues of mathematical objects are recaptured as q → 1. The history of quantum calculus can be traced back to Euler (1707–1783), who first introduced the parameter q in Newton's infinite series. In recent years, many researchers have shown considerable interest in examining and exploring quantum calculus, and it has emerged as an interdisciplinary subject. Quantum analysis is very useful in numerous fields, such as signal processing [21], operator theory [22], fractional integrals and derivatives [23], integral inequalities [24], variational calculus [25], transform calculus [26], and sampling theory [27]. Quantum calculus is seen as a bridge between mathematics and physics. For some recent developments in quantum calculus, interested researchers may refer to [28,29,30,31].
The q-calculus was first applied in the area of optimization in [32], where the q-gradient is used in the steepest descent method to optimize objective functions. Further, a global optimum was sought using the q-steepest descent and q-conjugate gradient methods, where a descent scheme based on q-calculus is combined with a stochastic approach that does not address the order of convergence of the scheme [33]. The q-calculus has been applied in Newton's method to solve unconstrained single-objective optimization problems [34], and this idea has been extended to solve (UMOP) within the context of q-calculus [35].
In this paper, we present the q-calculus in Quasi-Newton's method for solving (UMOP). We approximate the second q-derivative matrices instead of evaluating them. Using q-calculus, we show that the convergence rate is superlinear.
The rest of this paper is organized as follows. Section 2 recalls the problem, notation, and preliminaries. Section 3 derives the q-Quasi-Newton direction search method via the (KKT) conditions. Section 4 presents the algorithm and its convergence analysis. The numerical results are given in Section 5, and the conclusions are in the last section.

2. Preliminaries

Denote by R the set of real numbers, by N the set of positive integers, and by R_+ (R_−) the set of strictly positive (negative) real numbers. If a function is continuous on any interval excluding zero, then the function is called continuously q-differentiable. For a function f : R → R, the q-derivative of f [36], denoted by D_{q,x} f, is given as
$$D_{q,x} f(x) = \begin{cases} \dfrac{f(x) - f(qx)}{(1 - q)\,x}, & x \neq 0,\; q \neq 1, \\[2mm] f'(x), & x = 0. \end{cases}$$
Suppose f : R^n → R is a function whose partial derivatives exist. For x ∈ R^n, consider the operator ε_{q,i} acting on f as
$$(\varepsilon_{q,i} f)(x) = f(x_1, x_2, \ldots, q x_i, x_{i+1}, \ldots, x_n).$$
The q-partial derivative of f at x with respect to x_i, denoted by D_{q,x_i} f, is [23]:
$$D_{q,x_i} f(x) = \begin{cases} \dfrac{f(x) - (\varepsilon_{q,i} f)(x)}{(1 - q)\,x_i}, & x_i \neq 0,\; q \neq 1, \\[2mm] \dfrac{\partial f}{\partial x_i}(x), & x_i = 0. \end{cases}$$
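As a concrete illustration of these definitions, the following minimal Python sketch (ours, not part of the paper) evaluates the q-partial derivatives componentwise to form the q-gradient; the central finite difference used at x_i = 0 is an implementation choice for approximating the partial-derivative branch of the definition.

```python
import numpy as np

def q_partial(f, x, i, q=0.9, h=1e-8):
    """q-partial derivative D_{q,x_i} f(x).

    For x_i != 0 it uses the quotient (f(x) - f(eps_{q,i} x)) / ((1 - q) x_i);
    at x_i = 0 the definition reduces to the classical partial derivative,
    approximated here by a central finite difference.
    """
    x = np.asarray(x, dtype=float)
    if x[i] != 0.0:
        xq = x.copy()
        xq[i] = q * x[i]                 # (eps_{q,i} f)(x): scale the i-th coordinate by q
        return (f(x) - f(xq)) / ((1.0 - q) * x[i])
    xp, xm = x.copy(), x.copy()
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def q_gradient(f, x, q=0.9):
    """q-gradient of f at x: the vector of q-partial derivatives."""
    x = np.asarray(x, dtype=float)
    return np.array([q_partial(f, x, i, q) for i in range(x.size)])
```

For instance, q_gradient(lambda x: x[1]**2 + 3*x[0]**3, [1.0, 2.0], q=0.5) returns (5.25, 3.0), which agrees with the closed form given in Example 1 below.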
We are interested in solving the following (UMOP):
$$\text{minimize } F(x) \quad \text{subject to } x \in X,$$
where X ⊆ R^n is the feasible region and F : X → R^m. Note that F = (f_1, f_2, …, f_m) is a vector-valued function whose components are real-valued functions f_j : X → R, j = 1, …, m. In general, n and m are independent. For x, y ∈ R^n, we use the vector inequalities:
$$x = y \iff x_i = y_i, \; i = 1, \ldots, n; \qquad x \leq y \iff x_i \leq y_i, \; i = 1, \ldots, n;$$
$$x \lneq y \iff x_i \leq y_i, \; i = 1, \ldots, n, \text{ and } x \neq y; \qquad x > y \iff x_i > y_i, \; i = 1, \ldots, n.$$
A point x* ∈ X is called a Pareto optimal (or efficient) point if there is no point x ∈ X for which F(x) ≤ F(x*) and F(x) ≠ F(x*). A point x* ∈ X is called a weakly Pareto optimal point if there is no x ∈ X for which F(x) < F(x*). Similarly, a point x* ∈ X is a local Pareto optimal point if there exists a neighborhood Y ⊆ X of x* such that x* is Pareto optimal for F restricted to Y, and a point x* is a local weakly Pareto optimal point if there exists a neighborhood Y ⊆ X of x* such that x* is weakly Pareto optimal for F restricted to Y. The matrix J F(x) ∈ R^{m×n} is the Jacobian matrix of F at x, i.e., the j-th row of J F(x) is ∇_q f_j(x) (the q-gradient of f_j) for j = 1, …, m. Let ∇² f_j(x) be the Hessian matrix of f_j at x for all j = 1, …, m. Note that every Pareto optimal point is a weakly Pareto optimal point [37]. The directional derivative of f_j at x in the descent direction d_q is given as:
$$f_j'(x, d_q) = \lim_{\alpha \to 0} \frac{f_j(x + \alpha d_q) - f_j(x)}{\alpha}.$$
The necessary condition for a critical point of a multiobjective optimization problem is given in [17]. For any x ∈ R^n, ‖x‖ denotes the Euclidean norm in R^n. Let K(x^0, r) = { x : ‖x − x^0‖ ≤ r } be the closed ball with center x^0 ∈ R^n and radius r ∈ R_+. The norm of a matrix A ∈ R^{n×n} is ‖A‖ = max_{x ∈ R^n, x ≠ 0} ‖Ax‖ / ‖x‖. The following proposition shows that when f(x) is a linear (affine) function, the q-gradient coincides with the classical gradient.
Proposition 1
([33]). If f(x) = a + p^T x, where a ∈ R and p ∈ R^n, then for any x ∈ R^n and q ∈ (0, 1), we have ∇_q f(x) = ∇f(x) = p.
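For illustration (a short verification added here, not part of the cited result), the claim follows directly from the definition of the q-partial derivative: for x_i ≠ 0,
$$D_{q,x_i} f(x) = \frac{(a + p^T x) - \big(a + p^T x - p_i x_i + p_i q x_i\big)}{(1 - q)\,x_i} = \frac{p_i (1 - q)\,x_i}{(1 - q)\,x_i} = p_i,$$
and at x_i = 0 the definition gives ∂f/∂x_i = p_i as well, independently of q; hence ∇_q f(x) = p = ∇f(x).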
All quasi-Newton methods approximate the Hessian of the function f by a matrix W^k ∈ R^{n×n} and update this approximation at every iteration from the previous one [38]. Line search methods are important methods for (UMOP): a search direction is first computed, and then a step length is chosen along this direction. The entire process is iterative.

3. The q-Quasi-Newton Direction for Multiobjective Optimization

The most well-known quasi-Newton method for a single objective function is the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method. It is a line search method along a descent direction d_q^k, which within the context of the q-derivative is given as:
$$d_q^k = -(W^k)^{-1} \nabla_q f(x^k),$$
where f is a continuously q-differentiable function, and W^k ∈ R^{n×n} is a positive definite matrix that is updated at every iteration. The new point is:
$$x^{k+1} = x^k + \alpha_k d_q^k.$$
In the Steepest Descent method and in Newton's method, W^k is taken to be the identity matrix and the exact Hessian of f, respectively. The quasi-Newton BFGS scheme generates the next approximation W^{k+1} as
$$W^{k+1} = W^k - \frac{W^k s^k (s^k)^T W^k}{(s^k)^T W^k s^k} + \frac{y^k (y^k)^T}{(s^k)^T y^k},$$
where s^k = x^{k+1} − x^k = α_k d_q^k and y^k = ∇_q f(x^{k+1}) − ∇_q f(x^k). Newton's method requires second-order differentiability of the function; while calculating W^k, we instead use the q-derivative, which behaves like the Hessian matrix of f(x). The update W^{k+1} may fail to be positive definite, in which case it can be modified to a positive definite matrix through a symmetric indefinite factorization [39]. The q-Quasi-Newton direction d_q(x) is an optimal solution of the following modified problem [40]:
$$\min_{d_q \in \mathbb{R}^n} \; \max_{j = 1, \ldots, m} \; \nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q,$$
where W_j(x) is computed as in (8). The optimal value and the solution of (9) are:
$$\psi(x) = \min_{d_q \in \mathbb{R}^n} \; \max_{j = 1, \ldots, m} \; \nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q,$$
and
$$d_q(x) = \arg \min_{d_q \in \mathbb{R}^n} \; \max_{j = 1, \ldots, m} \; \nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q.$$
The problem (9) becomes a convex quadratic optimization problem (CQOP) as follows:
$$\begin{aligned} \text{minimize} \quad & h(t, d_q) = t, \\ \text{subject to} \quad & \nabla_q f_j(x)^T d_q + \tfrac{1}{2} d_q^T W_j(x)\, d_q - t \le 0, \quad j = 1, \ldots, m, \end{aligned}$$
where (t, d_q) ∈ R × R^n.
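Before deriving the KKT conditions, the following minimal Python sketch (our own illustration, not part of the paper) shows how (CQOP) can be solved numerically for the pair (d_q(x), ψ(x)) with an off-the-shelf SQP routine; here grads is the list of q-gradients ∇_q f_j(x) and Ws the list of positive definite matrices W_j(x).

```python
import numpy as np
from scipy.optimize import minimize

def solve_cqop(grads, Ws):
    """Solve: minimize t  subject to  g_j^T d + 0.5 d^T W_j d - t <= 0, j = 1,...,m.

    Returns the direction d_q(x) and the optimal value psi(x) = t.
    """
    n = grads[0].size
    z0 = np.zeros(n + 1)                                  # variables z = (t, d); (0, 0) is feasible
    cons = [{"type": "ineq",                              # SciPy convention: fun(z) >= 0
             "fun": lambda z, g=g, W=W: z[0] - (g @ z[1:] + 0.5 * z[1:] @ W @ z[1:])}
            for g, W in zip(grads, Ws)]
    res = minimize(lambda z: z[0], z0, method="SLSQP", constraints=cons)
    return res.x[1:], res.x[0]                            # d_q(x), psi(x)
```

The returned optimal value ψ(x) is non-positive, and it is strictly negative exactly when x is nonstationary (see the proposition below).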
The Lagrangian function of (CQOP) is:
$$L((t, d_q), \lambda) = t + \sum_{j=1}^m \lambda_j \left( \nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q - t \right).$$
For λ = (λ_1, λ_2, …, λ_m)^T, we obtain the following (KKT) conditions [40]:
$$\sum_{j=1}^m \lambda_j \left( \nabla_q f_j(x) + W_j(x)\, d_q \right) = 0,$$
$$\lambda_j \ge 0, \quad j = 1, \ldots, m,$$
$$\sum_{j=1}^m \lambda_j = 1,$$
$$\nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q \le t, \quad j = 1, \ldots, m,$$
$$\lambda_j \left( \nabla_q f_j(x)^T d_q + \frac{1}{2} d_q^T W_j(x)\, d_q - t \right) = 0, \quad j = 1, \ldots, m.$$
The solution (d_q(x), ψ(x)) of (CQOP) is unique, and we write λ_j = λ_j(x), j = 1, …, m, for multipliers that, together with d_q = d_q(x) and t = ψ(x), satisfy (14)–(18). From (14), we obtain
$$d_q(x) = -\left( \sum_{j=1}^m \lambda_j(x)\, W_j(x) \right)^{-1} \sum_{j=1}^m \lambda_j(x)\, \nabla_q f_j(x).$$
This is the so-called q-Quasi-Newton direction for solving (UMOP). We now present the basic result relating the stationarity condition at a given point x to its q-Quasi-Newton direction d_q(x) and the function ψ.
Proposition 2.
Let ψ : X → R and d_q : X → R^n be given by (10) and (11), respectively, and let W_j(x) be positive definite for all x ∈ X. Then,
1. ψ(x) ≤ 0 for all x ∈ X.
2. The conditions below are equivalent:
(a) The point x is nonstationary.
(b) d_q(x) ≠ 0.
(c) ψ(x) < 0.
(d) d_q(x) is a descent direction.
3. The function ψ is continuous.
Proof. 
Since d_q = 0 is feasible for (10), we have
$$\psi(x) \le \max_{j = 1, \ldots, m} \left( \nabla_q f_j(x)^T 0 + \frac{1}{2}\, 0^T W_j(x)\, 0 \right) = 0,$$
thus ψ(x) ≤ 0, which proves item 1. Now suppose ψ(x) < 0. Since W_j(x) is positive definite, from (10) and (11) we have
$$\nabla_q f_j(x)^T d_q(x) < \nabla_q f_j(x)^T d_q(x) + \frac{1}{2} d_q(x)^T W_j(x)\, d_q(x) \le \psi(x) < 0,$$
so J F(x) d_q(x) has strictly negative components; thus the given point x ∈ R^n is nonstationary and d_q(x) is a descent direction. Moreover, since ψ(x) is the optimal value of (CQOP) and it is negative, the solution of (CQOP) can never be d_q(x) = 0. It remains to show the continuity [41] of ψ on a compact set Y ⊆ X. Since ψ(x) ≤ 0, then
$$\nabla_q f_j(x)^T d_q(x) \le -\frac{1}{2} d_q(x)^T W_j(x)\, d_q(x),$$
for all j = 1, …, m, and W_j(x), j = 1, …, m, are positive definite for all x ∈ Y. Thus, the eigenvalues of the matrices W_j(x), j = 1, …, m, are uniformly bounded away from zero on Y, so there exist R, S ∈ R_+ such that
$$R = \max_{x \in Y, \; j = 1, \ldots, m} \| \nabla_q f_j(x) \|,$$
and
$$S = \min_{x \in Y, \; \|e\| = 1, \; j = 1, \ldots, m} e^T W_j(x)\, e.$$
From (20) and the Cauchy–Schwarz inequality, we get
$$\frac{1}{2} S \| d_q(x) \|^2 \le \| \nabla_q f_j(x) \| \, \| d_q(x) \| \le R \, \| d_q(x) \|,$$
that is,
$$\| d_q(x) \| \le \frac{2R}{S},$$
for all x ∈ Y; that is, the q-Quasi-Newton direction is uniformly bounded on Y. We now consider the family of functions { φ_{x,j} }_{x ∈ Y, j = 1, …, m}, where
$$\varphi_{x,j} : Y \to \mathbb{R},$$
and
$$z \mapsto \nabla_q f_j(z)^T d_q(x) + \frac{1}{2} d_q(x)^T W_j(z)\, d_q(x).$$
We shall prove that this family of functions is uniformly equicontinuous. For every z ∈ Y and small ε_z ∈ R_+ there exists δ_z ∈ R_+ such that, for y ∈ K(z, δ_z),
$$\| W_j(y) - \nabla_q^2 f_j(z) \| < \frac{\varepsilon_z}{2},$$
and
$$\| \nabla_q^2 f_j(y) - \nabla_q^2 f_j(z) \| < \frac{\varepsilon_z}{2},$$
for all j = 1, …, m; the second inequality holds because of the q-continuity of the Hessian matrices. Since Y is a compact space, there exists a finite sub-cover { K(z_i, δ_{z_i}) }. We write
$$\varphi_{x,j}(z) = \nabla_q f_j(z)^T d_q(x) + \frac{1}{2} d_q(x)^T W_j(z)\, d_q(x),$$
that is,
$$\varphi_{x,j}(z) = \nabla_q f_j(z)^T d_q(x) + \frac{1}{2} d_q(x)^T \nabla_q^2 f_j(z)\, d_q(x) + \frac{1}{2} d_q(x)^T \left( W_j(z) - \nabla_q^2 f_j(z) \right) d_q(x).$$
To show the continuity of the last term, take y_1, y_2 ∈ Y with ‖y_1 − y_2‖ < δ for small δ ∈ R_+, where y_1 ∈ K(z_1, δ_{z_1}) and y_2 ∈ K(z_2, δ_{z_2}) belong to the finite sub-cover; then
$$\left| \frac{1}{2} d_q(x)^T \left( W_j(y_1) - \nabla_q^2 f_j(y_1) \right) d_q(x) - \frac{1}{2} d_q(x)^T \left( W_j(y_2) - \nabla_q^2 f_j(y_2) \right) d_q(x) \right| \le \frac{1}{2} \| d_q(x) \|^2 \left( \| W_j(y_1) - \nabla_q^2 f_j(z_1) \| + \| \nabla_q^2 f_j(y_1) - \nabla_q^2 f_j(z_1) \| + \| W_j(y_2) - \nabla_q^2 f_j(z_2) \| + \| \nabla_q^2 f_j(y_2) - \nabla_q^2 f_j(z_2) \| \right) \le \frac{1}{2} \| d_q(x) \|^2 \left( \varepsilon_{z_1} + \varepsilon_{z_2} \right).$$
Hence φ_{x,j} is uniformly continuous [40] for all x ∈ Y and all j = 1, …, m, and the family is uniformly equicontinuous: there exists δ ∈ R_+ such that, for all y, z ∈ Y, ‖y − z‖ < δ implies |φ_{x,j}(y) − φ_{x,j}(z)| < ε for all x ∈ Y and j = 1, …, m. Thus, for ‖y − z‖ < δ,
$$\psi(z) \le \max_{j = 1, \ldots, m} \left( \nabla_q f_j(z)^T d_q(y) + \frac{1}{2} d_q(y)^T W_j(z)\, d_q(y) \right) = \phi_y(z) \le \phi_y(y) + | \phi_y(z) - \phi_y(y) | < \psi(y) + \varepsilon,$$
where ϕ_y(z) = max_{j = 1, …, m} φ_{y,j}(z), so that ϕ_y(y) = ψ(y). Thus, ψ(z) − ψ(y) < ε. If we interchange y and z, then |ψ(z) − ψ(y)| < ε. It proves the continuity of ψ.  □
The following modified lemma is due to [17,42].
Lemma 1.
Let F : R^n → R^m be continuously q-differentiable, let x ∈ X not be a critical point, so that ∇_q F(x) d_q < 0 for the direction d_q ∈ R^n, and let σ ∈ (0, 1] and ε > 0. Then,
$$x + \alpha d_q(x) \in X \quad \text{and} \quad F(x + \alpha d_q(x)) < F(x) + \alpha \gamma \psi(x),$$
for any α ∈ (0, σ] and γ ∈ (0, ε].
Proof. 
Since x is not a critical point, ψ(x) < 0. Let r > 0 be such that B(x, r) ⊆ X and let α ∈ (0, σ]. Therefore,
$$F(x + \alpha d_q(x)) - F(x) = \alpha \nabla_q F(x)\, d_q(x) + o(\alpha \| d_q(x) \|).$$
Since ∇_q F(x) d_q(x) ≤ ψ(x) for α ∈ (0, σ], we obtain
$$F(x + \alpha d_q(x)) - F(x) \le \alpha \gamma \psi(x) + \alpha (1 - \gamma) \psi(x) + o(\alpha \| d_q(x) \|).$$
The sum of the last two terms on the right-hand side of the above inequality is non-positive for sufficiently small α, because ψ(x) ≤ ψ(x*)/2 < 0 for α ∈ [0, σ]. □

4. Algorithm and Convergence Analysis

We first present the following Algorithm 1 [43] to find the gradient of the function using q-calculus. The higher-order q-derivative of f can be found in [44].
Algorithm 1 q-Gradient Algorithm
1: Input q ∈ (0, 1), f(x), x ∈ R, z.
2: if x = 0 then
3:     Set g ← lim_{z→0} [f(z) − f(q·z)] / (z − q·z).
4: else
5:     Set g ← [f(x) − f(q·x)] / (x − q·x).
6: Print ∇_q f(x) ← g.
Example 1.
Given f : R^2 → R defined by f(x_1, x_2) = x_2^2 + 3x_1^3. Then ∇_q f(x) = ( 3x_1^2(1 + q + q^2), x_2(1 + q) )^T.
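Both components can be checked directly from the definition of the q-partial derivative (a short verification added for illustration):
$$D_{q,x_1} f(x) = \frac{3x_1^3 - 3(qx_1)^3}{(1 - q)\,x_1} = 3x_1^2\,\frac{1 - q^3}{1 - q} = 3x_1^2(1 + q + q^2), \qquad D_{q,x_2} f(x) = \frac{x_2^2 - (qx_2)^2}{(1 - q)\,x_2} = x_2\,\frac{1 - q^2}{1 - q} = x_2(1 + q),$$
and, as q → 1, the q-gradient recovers the classical gradient ∇f(x) = (9x_1^2, 2x_2)^T.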
We are now prepared to state the unconstrained q-Quasi-Newton Algorithm 2 for solving (UMOP). At each step, we solve (CQOP) to find the q-Quasi-Newton direction; then we obtain the step length using an Armijo-like line search. In every iteration, the new point and the Hessian approximations are generated from the previous values.
Algorithm 2 q-Quasi-Newton's Algorithm for Unconstrained Multiobjective Optimization (q-QNUM)
1: Choose q ∈ (0, 1), x^0 ∈ X, a symmetric positive definite matrix W^0 ∈ R^{n×n}, c ∈ (0, 1), and a small tolerance value ϵ > 0.
2: for k = 0, 1, 2, … do
3:     Solve (CQOP).
4:     Compute d_q^k and ψ^k.
5:     if ψ^k > −ϵ then
6:         Stop.
7:     else
8:         Choose α_k as the α ∈ (0, 1] such that x^k + α d_q^k ∈ X and F(x^k + α d_q^k) ≤ F(x^k) + c α ψ^k.
9:         Update x^{k+1} ← x^k + α_k d_q^k.
10:        Update W_j^{k+1}, j = 1, …, m, using (8).
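To make the steps of Algorithm 2 concrete, here is a compact Python sketch of one possible implementation (our own illustration, not the authors' MATLAB code). It reuses q_gradient and solve_cqop from the sketches in Sections 2 and 3, performs a backtracking Armijo-like line search, and applies the BFGS-type update (8) to every W_j; the update is skipped whenever (s^k)^T y_j^k is too small, a common safeguard used here in place of the symmetric indefinite factorization mentioned in Section 3. Box constraints (see Section 5) are omitted for brevity.

```python
import numpy as np

def bfgs_update(W, s, y, tol=1e-10):
    """BFGS-type update (8) of one approximation W_j; skipped if curvature is poor."""
    sWs, sy = s @ W @ s, s @ y
    if sy <= tol or sWs <= tol:                 # safeguard so that W_j stays positive definite
        return W
    Ws = W @ s
    return W - np.outer(Ws, Ws) / sWs + np.outer(y, y) / sy

def q_qnum(funcs, x0, q=0.9, c=1e-4, eps=1e-6, max_iter=500):
    """Sketch of Algorithm 2 (q-QNUM) for the objectives funcs = [f_1, ..., f_m]."""
    x = np.asarray(x0, dtype=float)
    m, n = len(funcs), x.size
    Ws = [np.eye(n) for _ in range(m)]          # W_j^0 = I
    grads = [q_gradient(f, x, q) for f in funcs]
    for _ in range(max_iter):
        d, psi = solve_cqop(grads, Ws)          # q-Quasi-Newton direction and psi(x^k)
        if psi > -eps:                          # stopping criterion: psi(x^k) > -eps
            break
        F = np.array([f(x) for f in funcs])
        alpha = 1.0                             # Armijo-like backtracking on all objectives
        while np.any(np.array([f(x + alpha * d) for f in funcs]) > F + c * alpha * psi):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        new_grads = [q_gradient(f, x_new, q) for f in funcs]
        s = x_new - x                           # s^k = x^{k+1} - x^k
        Ws = [bfgs_update(W, s, gn - g)         # y_j^k = grad_q f_j(x^{k+1}) - grad_q f_j(x^k)
              for W, gn, g in zip(Ws, new_grads, grads)]
        x, grads = x_new, new_grads
    return x
```

The values q = 0.9, c = 1e-4, and the tolerances are illustrative defaults, not the settings used in the experiments of Section 5.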
We finally show that every sequence produced by the proposed method converges to a weakly efficient point, no matter how poorly the initial point is guessed. We assume that the method does not stop and produces an infinite sequence of iterates. We now present modified sufficient conditions for superlinear convergence [17,40] within the context of q-calculus.
Theorem 1.
Let {x^k} be a sequence generated by (q-QNUM), let Y ⊆ X be a convex set, and let γ ∈ (0, 1) and r, a, b, δ, ε > 0 be such that
(a) aI ⪯ W_j(x) ⪯ bI for all x ∈ Y, j = 1, …, m,
(b) ‖∇²_q f_j(y) − ∇²_q f_j(x)‖ < ε/2 for all x, y ∈ Y with ‖y − x‖ ≤ δ,
(c) ‖(W_j^k − ∇²_q f_j(x^k))(y − x^k)‖ < (ε/2) ‖y − x^k‖ for all k ≥ k_0, y ∈ Y, j = 1, …, m,
(d) ε/a ≤ 1 − γ,
(e) K(x^{k_0}, r) ⊆ Y,
(f) ‖d_q(x^{k_0})‖ < min { δ, r (1 − ε/a) }.
Then, for all k ≥ k_0, we have that
1. ‖x^k − x^{k_0}‖ ≤ ‖d_q(x^{k_0})‖ (1 − (ε/a)^{k − k_0}) / (1 − ε/a),
2. α_k = 1,
3. ‖d_q(x^k)‖ ≤ ‖d_q(x^{k_0})‖ (ε/a)^{k − k_0},
4. ‖d_q(x^{k+1})‖ ≤ ‖d_q(x^k)‖ (ε/a).
Moreover, the sequence {x^k} converges to a local Pareto point x* ∈ R^n, and the convergence rate is superlinear.
Proof. 
From part 1, part 3 of this theorem and the triangle inequality,
$$\| x^k + d_q(x^k) - x^{k_0} \| \le \frac{1 - (\varepsilon/a)^{k + 1 - k_0}}{1 - \varepsilon/a} \, \| d_q(x^{k_0}) \|.$$
From (d) and (f), it follows that x^k, x^k + d_q(x^k) ∈ K(x^{k_0}, r) and ‖x^k + d_q(x^k) − x^k‖ < δ. We also have
$$f_j(x^k + d_q(x^k)) \le f_j(x^k) + \nabla_q f_j(x^k)^T d_q(x^k) + \frac{1}{2} d_q(x^k)^T \nabla_q^2 f_j(x^k)\, d_q(x^k) + \frac{\varepsilon}{2} \| d_q(x^k) \|^2,$$
that is,
$$f_j(x^k + d_q(x^k)) \le f_j(x^k) + \psi(x^k) + \frac{\varepsilon}{2} \| d_q(x^k) \|^2 = f_j(x^k) + \gamma \psi(x^k) + (1 - \gamma) \psi(x^k) + \frac{\varepsilon}{2} \| d_q(x^k) \|^2.$$
Since ψ(x^k) ≤ 0 and
$$(1 - \gamma) \psi(x^k) + \frac{\varepsilon}{2} \| d_q(x^k) \|^2 \le \left( \varepsilon - a (1 - \gamma) \right) \frac{\| d_q(x^k) \|^2}{2} \le 0,$$
we get
$$f_j(x^k + d_q(x^k)) \le f_j(x^k) + \gamma \psi(x^k),$$
for all j = 1, …, m. The Armijo-like condition therefore holds for α_k = 1, and part 2 of this theorem holds. We now have x^k, x^{k+1} ∈ K(x^{k_0}, r) and ‖x^{k+1} − x^k‖ < δ; thus x^{k+1} = x^k + d_q(x^k). We now define v(x^{k+1}) = Σ_{j=1}^m λ_j^k ∇_q f_j(x^{k+1}). Therefore,
$$| \psi(x^{k+1}) | \le \frac{1}{2a} \| v(x^{k+1}) \|^2.$$
We now estimate ‖v(x^{k+1})‖. For x ∈ X, we define
$$G^k(x) := \sum_{j=1}^m \lambda_j^k f_j(x),$$
and
$$H^k = \sum_{j=1}^m \lambda_j^k W_j(x^k),$$
where λ_j^k ≥ 0, for all j = 1, …, m, are the KKT multipliers. We obtain the following:
$$\nabla_q G^k(x) = \sum_{j=1}^m \lambda_j^k \nabla_q f_j(x),$$
and
$$\nabla_q^2 G^k(x) = \sum_{j=1}^m \lambda_j^k \nabla_q^2 f_j(x).$$
Then, v(x^{k+1}) = ∇_q G^k(x^{k+1}). We get
$$d_q(x^k) = -(H^k)^{-1} \nabla_q G^k(x^k).$$
From assumptions (b) and (c) of this theorem,
$$\| \nabla_q^2 G^k(y) - \nabla_q^2 G^k(x^k) \| < \frac{\varepsilon}{2},$$
$$\| (H^k - \nabla_q^2 G^k(x^k)) (y - x^k) \| < \frac{\varepsilon}{2} \| y - x^k \|$$
hold for all y ∈ Y with ‖y − x^k‖ < δ and all k ≥ k_0. We have
$$\| \nabla_q G^k(x^k + d_q(x^k)) - \left( \nabla_q G^k(x^k) + H^k d_q(x^k) \right) \| < \varepsilon \| d_q(x^k) \|.$$
Since ∇_q G^k(x^k) + H^k d_q(x^k) = 0, then
$$\| v(x^{k+1}) \| = \| \nabla_q G^k(x^{k+1}) \| < \varepsilon \| d_q(x^k) \|,$$
and
$$| \psi(x^{k+1}) | \le \frac{1}{2a} \| v(x^{k+1}) \|^2 < \frac{\varepsilon^2}{2a} \| d_q(x^k) \|^2.$$
We have
$$\frac{a}{2} \| d_q(x^{k+1}) \|^2 < \frac{\varepsilon^2}{2a} \| d_q(x^k) \|^2.$$
Thus,
$$\| d_q(x^{k+1}) \| < \frac{\varepsilon}{a} \| d_q(x^k) \|.$$
Thus, part 4 is proved. We finally prove the superlinear convergence of {x^k}. First we define
$$r_k = \frac{\| d_q(x^{k_0}) \| \, (\varepsilon/a)^{k - k_0}}{1 - \varepsilon/a},$$
and
$$\delta_k = \| d_q(x^{k_0}) \| \, (\varepsilon/a)^{k - k_0}.$$
From the triangle inequality, assumptions (e), (f) and part 1, we have K(x^k, r_k) ⊆ K(x^{k_0}, r) ⊆ Y. Choose any τ ∈ R_+, and define
$$\bar{\varepsilon} = \min \left\{ \frac{a \tau}{1 + 2\tau}, \; \varepsilon \right\}.$$
For k ≥ k_0 the inequalities
$$\| \nabla_q^2 f_j(y) - \nabla_q^2 f_j(x) \| < \frac{\bar{\varepsilon}}{2}$$
for all x, y ∈ K(x^k, r_k) with ‖y − x‖ < δ_k, and
$$\| \left( W_j(x^l) - \nabla_q^2 f_j(x^l) \right) (y - x^l) \| < \frac{\bar{\varepsilon}}{2} \| y - x^l \|$$
for all y ∈ K(x^k, r_k) and l ≥ k hold for j = 1, …, m. Hence assumptions (a)–(f) are satisfied with ε̄, r_k, δ_k, and x^k in place of ε, r, δ, and x^{k_0}, respectively. We have
$$\| x^l - x^k \| \le \| d_q(x^k) \| \, \frac{1 - (\bar{\varepsilon}/a)^{l - k}}{1 - \bar{\varepsilon}/a}.$$
Letting l → ∞, we get ‖x* − x^k‖ ≤ ‖d_q(x^k)‖ / (1 − ε̄/a). Using the last inequality and part 4, we have
$$\| x^* - x^{k+1} \| \le \frac{\| d_q(x^{k+1}) \|}{1 - \bar{\varepsilon}/a} \le \| d_q(x^k) \| \, \frac{\bar{\varepsilon}/a}{1 - \bar{\varepsilon}/a}.$$
From the above and the triangle inequality, we have
$$\| x^* - x^k \| \ge \| x^{k+1} - x^k \| - \| x^* - x^{k+1} \|,$$
that is,
$$\| x^* - x^k \| \ge \| d_q(x^k) \| - \| d_q(x^k) \| \, \frac{\bar{\varepsilon}/a}{1 - \bar{\varepsilon}/a} = \| d_q(x^k) \| \, \frac{1 - 2\bar{\varepsilon}/a}{1 - \bar{\varepsilon}/a}.$$
Since 1 − 2ε̄/a > 0 and 1 − ε̄/a > 0, and since ε̄ ≤ aτ/(1 + 2τ) implies (ε̄/a)/(1 − 2ε̄/a) ≤ τ, we get
$$\| x^* - x^{k+1} \| \le \tau \| x^* - x^k \|,$$
where τ ∈ R_+ was chosen arbitrarily. Thus, the sequence {x^k} converges superlinearly to x*.  □

5. Numerical Results

The proposed algorithm (q-QNUM), i.e., Algorithm 2, presented in Section 4 is implemented in MATLAB (2017a) and tested on some test problems known from the literature. All tests were run under the same conditions. Box constraints of the form lb ≤ x ≤ ub are used for each test problem. These constraints are incorporated into the direction search problem (CQOP) so that the newly generated point always lies in the same box, that is, lb ≤ x + d_q ≤ ub holds. We use the stopping criterion at x^k: ψ(x^k) > −ϵ, where ϵ ∈ R_+. All test problems given in Table 1 are solved 100 times. The starting points are chosen randomly from a uniform distribution between lb and ub. The first column of the table gives the name of the test problem; we use the abbreviation of the authors' names and the number of the problem in the corresponding paper. The second column indicates the source paper, and the third column gives the lower and upper bounds. We compare the results of (q-QNUM) with (QNMO) of [40] in terms of the number of iterations (iter), the number of objective function evaluations (obj), and the number of gradient evaluations (grad), respectively. From Table 1, we can conclude that our algorithm performs better on most of the test problems.
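As a sketch of how such box constraints can be attached to the direction subproblem (our own illustration, reusing the (CQOP) sketch from Section 3 and assuming vector bounds lb and ub), it suffices to bound the components of d_q so that lb ≤ x + d_q ≤ ub:

```python
import numpy as np
from scipy.optimize import minimize

def solve_cqop_box(grads, Ws, x, lb, ub):
    """(CQOP) with the additional box requirement lb <= x + d <= ub."""
    n = grads[0].size
    z0 = np.zeros(n + 1)                                   # z = (t, d)
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, W=W: z[0] - (g @ z[1:] + 0.5 * z[1:] @ W @ z[1:])}
            for g, W in zip(grads, Ws)]
    bounds = [(None, None)] + [(lb[i] - x[i], ub[i] - x[i]) for i in range(n)]
    res = minimize(lambda z: z[0], z0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x[1:], res.x[0]                             # d_q(x), psi(x)
```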
Example 2.
Find the approximate Pareto front using (q-QNUM) and (QNMO) for the given (UMOP) [45]:
$$\text{Minimize } f_1(x_1, x_2) = (x_1 - 1)^2 + (x_1 - x_2)^2, \qquad \text{Minimize } f_2(x_1, x_2) = (x_2 - 3)^2 + (x_1 - x_2)^2,$$
where −3 ≤ x_1, x_2 ≤ 10.
The approximate Pareto fronts generated by (q-QNUM) with Algorithm 1 and by (QNMO) are shown in Figure 1. One can observe that iter = 200 iterations for (q-QNUM) and iter = 525 for (QNMO) are required to generate the approximate Pareto front of the above (UMOP).
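As a usage illustration (our own sketch, following the experimental setup described above), Example 2 can be solved from random starting points in the box to collect approximate Pareto points:

```python
import numpy as np

f1 = lambda x: (x[0] - 1.0) ** 2 + (x[0] - x[1]) ** 2
f2 = lambda x: (x[1] - 3.0) ** 2 + (x[0] - x[1]) ** 2

rng = np.random.default_rng(0)
lb, ub = -3.0, 10.0
pareto_points = []
for _ in range(100):                              # 100 random starts, as in the setup of Section 5
    x0 = rng.uniform(lb, ub, size=2)
    x_star = q_qnum([f1, f2], x0, q=0.9)          # q-QNUM sketch from Section 4
    pareto_points.append((f1(x_star), f2(x_star)))
```

Plotting the collected (f_1, f_2) pairs gives an approximation of the Pareto front shown in Figure 1.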

6. Conclusions

The q-Quasi-Newton method converges superlinearly to a solution of (UMOP) if all objective functions are strongly convex within the context of the q-derivative. In a neighborhood of this solution, the algorithm accepts the full Armijo step length. The numerical results indicate that approximating the second q-derivative matrices in this way is faster than evaluating them exactly.

Author Contributions

K.K.L. gave reasonable suggestions for this manuscript; S.K.M. gave the research direction of this paper; B.R. revised and completed this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Engineering Research Board (Grant No. DST-SERB- MTR-2018/000121) and the University Grants Commission (IN) (Grant No. UGC-2015-UTT–59235).

Acknowledgments

The authors are grateful to the anonymous reviewers and the editor for the valuable comments and suggestions to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eschenauer, H.; Koski, J.; Osyczka, A. Multicriteria Design Optimization: Procedures and Applications; Springer: Berlin, Germany, 1990.
2. Haimes, Y.Y.; Hall, W.A.; Friedmann, H.T. Multiobjective Optimization in Water Resource Systems; Elsevier Scientific: Amsterdam, The Netherlands, 1975.
3. Nwulu, N.I.; Xia, X. Multi-objective dynamic economic emission dispatch of electric power generation integrated with game theory based demand response programs. Energy Convers. Manag. 2015, 89, 963–974.
4. Badri, M.A.; Davis, D.L.; Davis, D.F.; Hollingsworth, J. A multi-objective course scheduling model: Combining faculty preferences for courses and times. Comput. Oper. Res. 1998, 25, 303–316.
5. Ishibuchi, H.; Nakashima, Y.; Nojima, Y. Performance evaluation of evolutionary multiobjective optimization algorithms for multiobjective fuzzy genetics-based machine learning. Soft Comput. 2011, 15, 2415–2434.
6. Liu, S.; Vicente, L.N. The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning. arXiv 2019, arXiv:1907.04472.
7. Tavana, M. A subjective assessment of alternative mission architectures for the human exploration of mars at NASA using multicriteria decision making. Comput. Oper. Res. 2004, 31, 1147–1164.
8. Gass, S.; Saaty, T. The computational algorithm for the parametric objective function. Nav. Res. Logist. Q. 1955, 2, 39–45.
9. Miettinen, K. Nonlinear Multiobjective Optimization; Kluwer Academic: Boston, MA, USA, 1999.
10. Fishburn, P.C. Lexicographic orders, utilities and decision rules: A survey. Manag. Sci. 1974, 20, 1442–1471.
11. Coello, C.A. An updated survey of GA-based multiobjective optimization techniques. ACM Comput. Surv. (CSUR) 2000, 32, 109–143.
12. Fliege, J.; Svaiter, B.F. Steepest descent method for multicriteria optimization. Math. Method. Oper. Res. 2000, 51, 479–494.
13. Drummond, L.M.G.; Iusem, A.N. A projected gradient method for vector optimization problems. Comput. Optim. Appl. 2004, 28, 5–29.
14. Drummond, L.M.G.; Svaiter, B.F. A steepest descent method for vector optimization. J. Comput. Appl. Math. 2005, 175, 395–414.
15. Branke, J.; Deb, K.; Miettinen, K.; Slowiński, R. (Eds.) Multiobjective Optimization: Interactive and Evolutionary Approaches; Springer: Berlin, Germany, 2008.
16. Mishra, S.K.; Ram, B. Introduction to Unconstrained Optimization with R; Springer Nature: Singapore, 2019; pp. 175–209.
17. Fliege, J.; Drummond, L.M.G.; Svaiter, B.F. Newton’s method for multiobjective optimization. SIAM J. Optim. 2009, 20, 602–626.
18. Chuong, T.D. Newton-like methods for efficient solutions in vector optimization. Comput. Optim. Appl. 2013, 54, 495–516.
19. Qu, S.; Liu, C.; Goh, M.; Li, Y.; Ji, Y. Nonsmooth Multiobjective Programming with Quasi-Newton Methods. Eur. J. Oper. Res. 2014, 235, 503–510.
20. Jiménez, M.A.; Garzón, G.R.; Lizana, A.R. (Eds.) Optimality Conditions in Vector Optimization; Bentham Science Publishers: Sharjah, UAE, 2010.
21. Al-Saggaf, U.M.; Moinuddin, M.; Arif, M.; Zerguine, A. The q-least mean squares algorithm. Signal Process. 2015, 111, 50–60.
22. Aral, A.; Gupta, V.; Agarwal, R.P. Applications of q-Calculus in Operator Theory; Springer: New York, NY, USA, 2013.
23. Rajković, P.M.; Marinković, S.D.; Stanković, M.S. Fractional integrals and derivatives in q-calculus. Appl. Anal. Discret. Math. 2007, 1, 311–323.
24. Gauchman, H. Integral inequalities in q-calculus. Comput. Math. Appl. 2004, 47, 281–300.
25. Bangerezako, G. Variational q-calculus. J. Math. Anal. Appl. 2004, 289, 650–665.
26. Abreu, L. A q-sampling theorem related to the q-Hankel transform. Proc. Am. Math. Soc. 2005, 133, 1197–1203.
27. Koornwinder, T.H.; Swarttouw, R.F. On q-analogues of the Fourier and Hankel transforms. Trans. Am. Math. Soc. 1992, 333, 445–461.
28. Ernst, T. A Comprehensive Treatment of q-Calculus; Springer: Basel, Switzerland; Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2012.
29. Noor, M.A.; Awan, M.U.; Noor, K.I. Some quantum estimates for Hermite-Hadamard inequalities. Appl. Math. Comput. 2015, 251, 675–679.
30. Pearce, C.E.M.; Pec̆arić, J. Inequalities for differentiable mappings with application to special means and quadrature formulae. Appl. Math. Lett. 2000, 13, 51–55.
31. Ernst, T. A Method for q-Calculus. J. Nonl. Math. Phys. 2003, 10, 487–525.
32. Soterroni, A.C.; Galski, R.L.; Ramos, F.M. The q-gradient vector for unconstrained continuous optimization problems. In Operations Research Proceedings; Hu, B., Morasch, K., Pickl, S., Siegle, M., Eds.; Springer: Heidelberg, Germany, 2010; pp. 365–370.
33. Gouvêa, E.J.C.; Regis, R.G.; Soterroni, A.C.; Scarabello, M.C.; Ramos, F.M. Global optimization using q-gradients. Eur. J. Oper. Res. 2016, 251, 727–738.
34. Chakraborty, S.K.; Panda, G. Newton like line search method using q-calculus. In International Conference on Mathematics and Computing. Communications in Computer and Information Science; Giri, D., Mohapatra, R.N., Begehr, H., Obaidat, M., Eds.; Springer: Singapore, 2017; Volume 655, pp. 196–208.
35. Mishra, S.K.; Panda, G.; Ansary, M.A.T.; Ram, B. On q-Newton’s method for unconstrained multiobjective optimization problems. J. Appl. Math. Comput. 2020.
36. Jackson, F.H. On q-functions and a certain difference operator. Earth Environ. Sci. Trans. R. Soc. Edinb. 1908, 46, 253–281.
37. Bento, G.C.; Neto, J.C. A subgradient method for multiobjective optimization on Riemannian manifolds. J. Optimiz. Theory App. 2013, 159, 125–137.
38. Andrei, N. A diagonal quasi-Newton updating method for unconstrained optimization. Numer. Algorithms 2019, 81, 575–590.
39. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006.
40. Povalej, Z. Quasi-Newton’s method for multiobjective optimization. J. Comput. Appl. Math. 2014, 255, 765–777.
41. Ye, Y.L. D-invexity and optimality conditions. J. Math. Anal. Appl. 1991, 162, 242–249.
42. Morovati, V.; Basirzadeh, H.; Pourkarimi, L. Quasi-Newton methods for multiobjective optimization problems. 4OR-Q. J. Oper. Res. 2018, 16, 261–294.
43. Samei, M.E.; Ranjbar, G.K.; Hedayati, V. Existence of solutions for equations and inclusions of multiterm fractional q-integro-differential with nonseparated and initial boundary conditions. J. Inequal. Appl. 2019, 273.
44. Adams, C.R. The general theory of a class of linear partial difference equations. Trans. Am. Math. Soc. 1924, 26, 183–312.
45. Sefrioui, M.; Periaux, J. Nash genetic algorithms: Examples and applications. In Proceedings of the 2000 Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; Volume 1, pp. 509–516.
46. Huband, S.; Hingston, P.; Barone, L.; While, L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE T. Evolut. Comput. 2006, 10, 477–506.
47. Ikeda, K.; Kita, H.; Kobayashi, S. Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal? In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; Volume 2, pp. 957–962.
48. Shim, M.B.; Suh, M.W.; Furukawa, T.; Yagawa, G.; Yoshimura, S. Pareto-based continuous evolutionary algorithms for multiobjective optimization. Eng. Comput. 2002, 19, 22–48.
49. Valenzuela-Rendón, M.; Uresti-Charre, E.; Monterrey, I. A non-generational genetic algorithm for multiobjective optimization. In Proceedings of the Seventh International Conference on Genetic Algorithms, East Lansing, MI, USA, 19–23 July 1997; pp. 658–665.
50. Viennet, R.; Fonteix, C.; Marc, I. Multicriteria optimization using a genetic algorithm for determining a Pareto set. Int. J. Syst. Sci. 1996, 27, 255–260.
Figure 1. Approximate Pareto front of Example 2.
Table 1. Numerical Results of Test Problems.

Problem | Source | [lb, ub] | q-QNUM iter/obj/grad | QNMO iter/obj/grad
BK1 | [46] | [−5, 10] | 200/200/200 | 200/200/200
MOP5 | [46] | [−30, 30] | 141/965/612 | 333/518/479
MOP6 | [46] | [0, 1] | 250/2177/1712 | 181/2008/2001
MOP7 | [46] | [−400, 400] | 200/200/200 | 751/1061/1060
DG01 | [47] | [−10, 13] | 175/724/724 | 164/890/890
IKK1 | [47] | [−50, 50] | 170/170/170 | 253/254/253
SP1 | [45] | [−3, 2] | 200/200/200 | 525/706/706
SSFYY1 | [45] | [−2, 2] | 200/200/200 | 200/300/300
SSFYY2 | [45] | [−100, 100] | 263/277/277 | 263/413/413
SK1 | [48] | [−10, 10] | 139/1152/1152 | 87/732/791
SK2 | [48] | [−3, 11] | 154/1741/1320 | 804/1989/1829
VU1 | [49] | [−3, 3] | 316/1108/1108 | 11,361/19,521/11,777
VU2 | [49] | [−3, 7] | 99/1882/1882 | 100/1900/1900
VFM1 | [50] | [−2, 2] | 195/195/195 | 195/290/290
VFM2 | [50] | [−4, 4] | 200/200/200 | 524/693/678
VFM3 | [50] | [−3, 3] | 161/1130/601 | 690/1002/981
