
A Linearization to the Sum of Linear Ratios Programming Problem

Department of Mathematical Sciences, Faculty of Science & Technology, UKM Bangi, Selangor 43600, Malaysia
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(9), 1004; https://doi.org/10.3390/math9091004
Submission received: 26 February 2021 / Revised: 9 April 2021 / Accepted: 23 April 2021 / Published: 29 April 2021

Abstract

Optimizing the sum of linear fractional functions over a set of linear inequalities (S-LFP) has been considered by many researchers because a number of real-world problems are modelled mathematically as S-LFP problems. Solving the S-LFP is not easy in practice, since the problem may have several local optimal solutions, which makes its structure complex. To our knowledge, the existing methods for the S-LFP are iterative algorithms, mostly based on branch and bound schemes, and they require considerable computational cost and time. In this paper, we present a straightforward, non-iterative method with lower computational expense for the S-LFP. In the method, a new S-LFP is constructed from the membership functions of the objectives multiplied by suitable weights. This new problem is then converted into a linear programming problem (LPP) using variable transformations. It is proven that the optimal solution of the LPP is the global optimal solution of the S-LFP. Numerical examples are given to illustrate the method.

1. Introduction

Optimizing the sum of linear fractional functions over a set of linear inequalities (S-LFP) is a branch of fractional programming with a wide variety of applications in different disciplines, such as transportation, economics, investment, control, and bond portfolio selection, and more specifically in cluster analysis, multi-stage shipping problems, queueing location problems, and hospital fee optimization [1,2,3,4,5,6,7,8,9,10].
In optimization, if the objective function of a problem is strictly convex, then its local minimizer is also the unique global minimizer. In the literature, it has been of interest to find conditions under which a local minimizer is also global; on this subject, we mention the studies of Mititelu [11] and Treanţă et al. [12]. Schaible demonstrated that the S-LFP is a global optimization problem [9]; this means that the problem may have several local optimal solutions, which makes finding the global optimal solution difficult. In addition, he proved that the sum of linear ratios is neither quasiconcave nor quasiconvex. In [13], Freund and Jarre showed that the problem is NP-hard. Thus, working on this kind of problem is important and beneficial.
Linear fractional programming (LFP) is a specific class of the S-LFP. The best-known method for the LFP was proposed by Charnes and Cooper [14], who showed that an LFP can be converted into an equivalent linear program (LP). In [15], Cambini et al. introduced an iterative algorithm for the sum of a linear ratio and a linear objective over a polyhedron, and proved that an optimal solution exists on the boundary of the feasible region. In [16], Almogy and Levin reduced the sum-of-ratios problem to a sum of non-ratios using the methodology introduced by Dinkelbach [17]. However, Falk and Palocsay [7] showed that the method of Almogy and Levin does not always produce a global optimal solution. In [7], an iterative method was also introduced for the S-LFP, in which linear programs are solved over the image of the feasible region at each iteration; according to [18], the lack of a rigorous convergence proof is a drawback of that approach. In [19], an outer approximation algorithm for generalized convex multiplicative programming problems was proposed; this iterative approach can also be applied to the sum of linear ratios. A practical iterative method based on a branch and bound algorithm for low-rank linear fractional programming problems was introduced by Konno and Fukaishi [20], whose performance is much better than that of the algorithms reported up to that time. Dür et al. [21] proposed a branch and bound algorithm for the S-LFP, constructed using rectangular partitions of the Euclidean space. In [22], Benson presented, and proved the convergence of, an algorithm for finding a global optimal solution of the S-LFP; the algorithm is based on a branch and bound search that primarily solves an equivalent outcome-space version of the problem.
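The Charnes–Cooper transformation mentioned above is the workhorse used later for computing the maxima and minima of the individual ratios. The sketch below shows the idea on a made-up two-variable instance (the instance, variable names, and the use of scipy are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (made up, not from the paper):
#   maximize (2*x1 + x2 + 1) / (x1 + x2 + 1)  s.t.  x1 + x2 <= 2, x >= 0.
# Charnes-Cooper substitutes y = t*x with t = 1/(P^T x + q), turning the
# single-ratio problem into an equivalent LP in (y, t).
N, m = np.array([2.0, 1.0]), 1.0          # numerator   N^T x + m
P, q = np.array([1.0, 1.0]), 1.0          # denominator P^T x + q  (> 0 on S)
A, b = np.array([[1.0, 1.0]]), np.array([2.0])

# LP: maximize N^T y + m*t  s.t.  A y - t*b <= 0,  P^T y + q*t = 1,  y, t >= 0
c = -np.r_[N, m]                          # linprog minimizes, so negate
A_ub = np.c_[A, -b]                       # A y - t*b <= 0
A_eq = np.r_[P, q].reshape(1, -1)         # P^T y + q*t = 1
res = linprog(c, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0])

y, t = res.x[:-1], res.x[-1]
x_opt = y / t                             # recover the original variables
print(x_opt, -res.fun)                    # x = (2, 0), ratio value 5/3
```

Here the ratio equals $1 + x_1/(x_1+x_2+1)$, so the maximizer pushes $x_1$ to its bound, which the LP recovers exactly.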
In [23], Kuno developed a branch and bound algorithm to maximize the sum of k linear ratios over a polytope, where the denominators and numerators are positive and non-negative, respectively. In the method, the problem is embedded into a 2k-dimensional space in order to construct bounds on the optimal solution, and the usual rectangular branch and bound scheme is applied in a k-dimensional space; the convergence properties of the approach were demonstrated. Motivated by Kuno, Benson [24,25] presented branch and bound based algorithms to reach global optimal solutions of the S-LFP. Based on the theory of monotonic optimization introduced by Tuy [26], Phuong and Tuy [27] presented an efficient unified iterative method for a wide class of generalized linear fractional programming problems. In [28], Benson presented and validated a simplicial branch and bound duality-bounds algorithm for finding the global optimal solution of the S-LFP; in the method, the lower bounds of the branch and bound procedure are computed from linear programming problems derived via Lagrangian duality theory. In [29], Wang and Shen presented an iterative branch and bound algorithm for the S-LFP in which LPPs are solved at each iteration; later, by solving an example, we show that their method cannot be considered a global optimization method. Several further iterative methods have recently been introduced for the S-LFP [30,31,32].
As mentioned above, the existing methods for the S-LFP are iterative algorithms, and most of them are based on branch and bound schemes. In this paper, for the first time in the literature, a non-iterative method is proposed for the S-LFP: we transform the S-LFP into an LPP. To do this, the membership functions of the linear ratios are first specified after identifying the maxima and the minima of the ratios over the feasible region; using membership functions allows the proposed method to cover almost all problems modelled as an S-LFP. Afterwards, it is proven that there exists a combination of the membership functions such that optimizing this combination yields the global optimal solution of the main problem. Finally, the problem of optimizing this combination is converted into an LPP using suitable variable transformations, which proves that the optimal solution of the LPP is optimal for the S-LFP.
This article is organized into four sections. The main results are given in Section 2, where we demonstrate how an S-LFP is converted into an LPP. In Section 3, numerical examples taken from different references are solved to illustrate the approach and to make comparisons. Section 4 concludes the paper.

2. Main Results

In this section, we show that the S-LFP can be converted into a weighted LPP such that, for suitable values of the weights, the optimal solution of the LPP is a global optimal solution of the S-LFP.
Consider the general form of the S-LFP:
$$\text{Maximize } F(X)=\sum_{i=1}^{k}F_i(X)=\sum_{i=1}^{k}\frac{f_i(X)}{g_i(X)}=\sum_{i=1}^{k}\frac{N_i^TX+m_i}{P_i^TX+q_i}\quad\text{s.t. } X\in S=\{AX\le b,\ X\ge 0\}, \tag{1}$$
where $S$ is a regular set, i.e., bounded and non-empty, and $g_i(X)>0$ for all $X=(X_1,\dots,X_n)\in S$, $i=1,\dots,k$.
Remark 1.
Since $g_i(X)=P_i^TX+q_i$ is a continuous function, $g_i(X)\neq 0$ on $S$ implies either $g_i(X)>0$ for all $X\in S$ or $g_i(X)<0$ for all $X\in S$. If $g_i(X)<0$, we obtain a fraction with a positive denominator by replacing $\frac{f_i(X)}{g_i(X)}$ with $\frac{-f_i(X)}{-g_i(X)}$; this means the restriction $g_i(X)>0$ can be equivalently substituted by $g_i(X)\neq 0$, $i=1,\dots,k$. In fact, the only limitation considered in this paper is $g_i(X)\neq 0$ for all $X\in S$, $i=1,\dots,k$.
Remark 2.
If $\min_{X\in S} g_i(X)>0$, then $g_i(X)>0$ for all $X\in S$; otherwise, $g_i(X)<0$ for all $X\in S$. Likewise, if $\max_{X\in S} g_i(X)<0$, then $g_i(X)<0$ for all $X\in S$; otherwise, $g_i(X)>0$ for all $X\in S$.
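Remark 2 amounts to a cheap computational test: minimize each denominator over $S$ with one LP and inspect the sign. A minimal sketch (the instance and helper name are made up; scipy is assumed to be available):

```python
import numpy as np
from scipy.optimize import linprog

def denominator_sign(P, q, A, b):
    """Sign of g(X) = P^T X + q over S = {A X <= b, X >= 0}, assuming g is
    never zero on S.  As in Remark 2, minimizing g over S suffices: a positive
    minimum means g > 0 on all of S; otherwise g < 0 on all of S."""
    res = linprog(P, A_ub=A, b_ub=b)      # minimize P^T X over S (X >= 0)
    return 1 if res.fun + q > 0 else -1

# Made-up instance: S = {x1 + x2 <= 2, x >= 0}
A, b = np.array([[1.0, 1.0]]), np.array([2.0])
g_pos = denominator_sign(np.array([1.0, 1.0]), 1.0, A, b)     # g =  x1 + x2 + 1
g_neg = denominator_sign(np.array([-1.0, -1.0]), -1.0, A, b)  # g = -x1 - x2 - 1
print(g_pos, g_neg)
```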
Therefore, to design the method so that it reaches global optimal solutions directly, we would need $f_i(X)=N_i^TX+m_i\ge 0$ for all $X\in S$, $i=1,\dots,k$. However, this is a restrictive condition to impose, meaning only a limited class of problems could be solved. To overcome this difficulty, we use the concept of membership functions.
In (1), let $F_i^{max}=\max_{X\in S}F_i(X)$ and $F_i^{min}=\min_{X\in S}F_i(X)$, both obtained with the method of Charnes and Cooper [14]. The membership function associated with $F_i(X)$ is then specified as:
$$\mu_i(X)=\frac{1}{F_i^{max}-F_i^{min}}\left(\frac{N_i^TX+m_i}{P_i^TX+q_i}-F_i^{min}\right)=\frac{C_i^TX+d_i}{P_i^TX+q_i},\qquad X\in S,$$
where $C_i=\frac{1}{F_i^{max}-F_i^{min}}\left(N_i-F_i^{min}P_i\right)$ and $d_i=\frac{1}{F_i^{max}-F_i^{min}}\left(m_i-F_i^{min}q_i\right)$, $i=1,\dots,k$.
Since $\mu_i(X)\in[0,1]$ and $P_i^TX+q_i>0$, it follows that $C_i^TX+d_i\ge 0$ for all $X\in S$, $i=1,\dots,k$.
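The construction of $(C_i, d_i)$ above is a one-line computation once the range of each ratio is known. A sketch on a made-up single ratio (all names and the instance are illustrative assumptions; numpy is assumed):

```python
import numpy as np

def membership(N, m, P, q, F_min, F_max):
    """Coefficients (C, d) of mu(X) = (C^T X + d)/(P^T X + q) for the ratio
    F(X) = (N^T X + m)/(P^T X + q), normalized so that mu = 0 where F = F_min
    and mu = 1 where F = F_max."""
    w = F_max - F_min
    return (N - F_min * P) / w, (m - F_min * q) / w

# Made-up ratio: F(X) = (2*x1 + x2 + 1)/(x1 + x2 + 1) on {x1 + x2 <= 2, x >= 0},
# where F_min = 1 (at the origin) and F_max = 5/3 (at (2, 0)).
N, m, P, q = np.array([2.0, 1.0]), 1.0, np.array([1.0, 1.0]), 1.0
C, d = membership(N, m, P, q, F_min=1.0, F_max=5/3)

mu = lambda x: (C @ x + d) / (P @ x + q)
mu_at_max = mu(np.array([2.0, 0.0]))    # 1 at the maximizer
mu_at_min = mu(np.array([0.0, 0.0]))    # 0 at a minimizer
print(C, d, mu_at_max, mu_at_min)
```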
Consider the following problem constructed on the basis of the membership functions:
$$\underset{X\in S}{\text{Maximize}}\ \sum_{i=1}^{k}w_i\mu_i(X)=\sum_{i=1}^{k}w_i\frac{C_i^TX+d_i}{P_i^TX+q_i}, \tag{2}$$
where $w_i\ge 0$, $i=1,\dots,k$, is the weight assigned to the $i$th membership function, chosen so that the optimal solution of (2) is also optimal for (1). For example, let $0\le F_1(X)\le 1000$ and $0\le F_2(X)\le 100$ for $X\in S$. Moreover, let $w_1=w_2=1$, and let $\bar X$ be the optimal solution of $\max_{X\in S}\left(\mu_1(X)+\mu_2(X)\right)$ with $\mu_1(\bar X)=0.75$ and $\mu_2(\bar X)=0.75$. Then $F_1(\bar X)+F_2(\bar X)=0.75\times 1000+0.75\times 100=825$. Now let $w_1=1$, $w_2=0$, and let $\hat X$ be the optimal solution of $\max_{X\in S}\left(\mu_1(X)+0\times\mu_2(X)\right)$ with $\mu_1(\hat X)=1$ and $\mu_2(\hat X)=0.2$. Then $F_1(\hat X)+F_2(\hat X)=1\times 1000+0.2\times 100=1020$. Three points can be deduced from this example:
Point 1.
The inequality $\sum_{i=1}^{k}\bar w_i\mu_i(\bar X)>\sum_{i=1}^{k}\hat w_i\mu_i(\hat X)$ does not imply $\sum_{i=1}^{k}F_i(\bar X)>\sum_{i=1}^{k}F_i(\hat X)$.
Point 2.
The weights in (2) play a decisive role in determining whether its optimal solution is also optimal for (1).
Point 3.
The optimal solution of $\max_{X\in S}F_j(X)$ may already be the optimal solution of $\max_{X\in S}\sum_{i=1}^{k}F_i(X)$ when $\left(F_j^{max}-F_j^{min}\right)\gg\left(F_i^{max}-F_i^{min}\right)$ for all $i\neq j$, $j\in\{1,\dots,k\}$. Applying Point 3 can help us reach the global optimal solution without any extra computational cost.
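The arithmetic of the weight illustration above can be checked directly (a trivial script; the membership values 0.75, 1, and 0.2 are the ones quoted in the example):

```python
# Ranges from the illustration: 0 <= F1 <= 1000 and 0 <= F2 <= 100, so here
# F_i scales back from the membership value as F_i = mu_i * range_i.
F1_range, F2_range = 1000.0, 100.0

# Equal weights: the maximizer of mu1 + mu2 attains mu1 = mu2 = 0.75.
F_equal = 0.75 * F1_range + 0.75 * F2_range     # 825
# Weights (1, 0): the maximizer of mu1 alone attains mu1 = 1, mu2 = 0.2.
F_skewed = 1.0 * F1_range + 0.2 * F2_range      # 1020
print(F_equal, F_skewed)
```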
Then, Lemma 1 explains how to determine the appropriate weights.
Lemma 1.
Let $X^*$ be the optimal solution of (2) for $w_i=F_i^{max}-F_i^{min}$, $i=1,\dots,k$; then $X^*$ is also an optimal solution of (1).
Proof of Lemma 1.
Since $X^*$ is optimal for (2) with $w_i=F_i^{max}-F_i^{min}$, $i=1,\dots,k$, we have
$$\sum_{i=1}^{k}\frac{F_i^{max}-F_i^{min}}{F_i^{max}-F_i^{min}}\left(\frac{N_i^TX^*+m_i}{P_i^TX^*+q_i}-F_i^{min}\right)\ge\sum_{i=1}^{k}\frac{F_i^{max}-F_i^{min}}{F_i^{max}-F_i^{min}}\left(\frac{N_i^TX+m_i}{P_i^TX+q_i}-F_i^{min}\right),\quad\forall X\in S. \tag{3}$$
Cancelling the constant terms $F_i^{min}$ on both sides, (3) gives $\sum_{i=1}^{k}\frac{N_i^TX^*+m_i}{P_i^TX^*+q_i}\ge\sum_{i=1}^{k}\frac{N_i^TX+m_i}{P_i^TX+q_i}$ for all $X\in S$. This completes the proof. □
In what follows, the NP-hard problem (2) is transformed into a linear programming problem. Let us define the new variable $\lambda$ as a function of $X$:
$$\lambda=\min\left\{\frac{1}{P_i^TX+q_i},\ i=1,\dots,k\right\},\qquad Y=\lambda X, \tag{4}$$
and then proceed to the following problem:
$$\text{Maximize}\ \sum_{i=1}^{k}w_i\left(C_i^TY+\lambda d_i\right)\quad\text{s.t. }(Y,\lambda)\in F=\left\{AY-\lambda b\le 0,\ P_i^TY+\lambda q_i\le 1,\ i=1,\dots,k,\ Y\ge 0,\ \lambda\ge 0\right\}. \tag{5}$$
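Problem (5) is an ordinary LP in $(Y,\lambda)$, so any LP solver applies. A minimal sketch (scipy is assumed; the single-ratio instance, its membership coefficients, and all names are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def solve_linearized(w, C, d, P, q, A, b):
    """Solve LP (5): maximize sum_i w_i*(C_i^T Y + d_i*lam) over
    F = {A Y - lam*b <= 0, P_i^T Y + lam*q_i <= 1, Y >= 0, lam >= 0},
    then recover X = Y/lam (lam > 0 by Lemma 2)."""
    c = -np.r_[w @ C, w @ d]                        # linprog minimizes
    A_ub = np.vstack([np.c_[A, -b], np.c_[P, q]])
    b_ub = np.r_[np.zeros(A.shape[0]), np.ones(P.shape[0])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)
    Y, lam = res.x[:-1], res.x[-1]
    return Y / lam, res

# Made-up single-ratio instance (k = 1): maximize (2*x1 + x2 + 1)/(x1 + x2 + 1)
# on {x1 + x2 <= 2, x >= 0}.  Its membership numerator works out to 1.5*x1,
# and w_1 = F_max - F_min = 2/3.
w = np.array([2.0 / 3.0])
C, d = np.array([[1.5, 0.0]]), np.array([0.0])
P, q = np.array([[1.0, 1.0]]), np.array([1.0])
A, b = np.array([[1.0, 1.0]]), np.array([2.0])

X, res = solve_linearized(w, C, d, P, q, A, b)
print(X)    # (2, 0): the maximizer of the original ratio
```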
Lemma 2.
In (5), variable λ cannot be zero.
Proof of Lemma 2.
Suppose there exists $(\hat Y,\,0)\in F$. Then $A\hat Y\le 0$. Now take $\hat X\in S$. Then $\hat X+\beta\hat Y\in S$ for every $\beta\ge 0$, which means $S$ is an unbounded set. This contradicts the regularity of $S$. □
Lemma 3.
If $(\bar Y,\,\bar\lambda)\in F$, then $\frac{\bar Y}{\bar\lambda}\in S$.
Proof of Lemma 3.
Since $(\bar Y,\bar\lambda)\in F$, we have $\bar Y\ge 0$, $\bar\lambda>0$ (by Lemma 2), and $A\bar Y-\bar\lambda b\le 0$. Therefore, $A\frac{\bar Y}{\bar\lambda}-b=\frac{1}{\bar\lambda}\left(A\bar Y-\bar\lambda b\right)\le 0$, i.e., $A\frac{\bar Y}{\bar\lambda}\le b$. □
To show that (5) can be equivalently considered instead of (2), the following theorem is proved.
Theorem 1.
Let $(Y^*,\lambda^*)$ be the optimal solution of (5); then $X^*=\frac{Y^*}{\lambda^*}$ is optimal for (2).
Proof of Theorem 1.
Suppose $X^*$ is not optimal for (2). Then there exists $\bar X\in S$ such that:
$$\sum_{i=1}^{k}w_i\frac{C_i^T\bar X+d_i}{P_i^T\bar X+q_i}>\sum_{i=1}^{k}w_i\frac{C_i^TX^*+d_i}{P_i^TX^*+q_i}. \tag{6}$$
Let us define:
$$\bar\lambda_i=\frac{1}{P_i^T\bar X+q_i},\qquad\lambda_i^*=\frac{1}{P_i^TX^*+q_i},\qquad i=1,\dots,k. \tag{7}$$
Since $(Y^*,\lambda^*)\in F$, it follows from (4) that:
$$\lambda^*=\min\left\{\lambda_i^*,\ i=1,\dots,k\right\}\le\lambda_i^*,\qquad i=1,\dots,k. \tag{8}$$
Now, (6)–(8), together with the non-negativity of the terms $C_i^TX+d_i$, imply
$$\sum_{i=1}^{k}\bar\lambda_i w_i\left(C_i^T\bar X+d_i\right)>\lambda^*\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right). \tag{9}$$
Let us define $\bar\theta=\max\{\bar\lambda_i,\ i=1,\dots,k\}$ and $\bar\lambda=\bar\theta-\epsilon$, where:
$$\bar\theta-\bar\lambda_i\le\epsilon<\bar\theta-\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)},\qquad i=1,\dots,k. \tag{10}$$
First, we must show that (10) is well defined; in other words, there must exist an $\epsilon$ satisfying (10). To this end, the two conditions below must hold:
(I) $\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)>0$.
(II) $\bar\theta-\bar\lambda_i\le\bar\theta-\lambda^*\dfrac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)},\qquad i=1,\dots,k$.
In the following, (I) and (II) are verified.
Since $\mu_i(X)=\frac{C_i^TX+d_i}{P_i^TX+q_i}\in[0,1]$ and $P_i^TX+q_i>0$, we have $C_i^TX+d_i\ge 0$ for all $X\in S$. For (6) to hold, there must exist $j\in\{1,\dots,k\}$ such that:
$$w_j\mu_j(\bar X)=w_j\frac{C_j^T\bar X+d_j}{P_j^T\bar X+q_j}>w_j\frac{C_j^TX^*+d_j}{P_j^TX^*+q_j}=w_j\mu_j(X^*). \tag{11}$$
It follows directly from (11) that if $\mu_j(\bar X)=0$, then $\mu_j(X^*)<0$, contradicting the non-negativity of the membership functions. Thus $w_j\mu_j(\bar X)$ is positive, and hence $\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)>0$. Therefore, (I) is verified.
For (II) to be true, we need:
$$\bar\lambda_i\ge\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)},\qquad i=1,\dots,k.$$
By contradiction, suppose there exists $p\in\{1,\dots,k\}$ such that:
$$\bar\lambda_p<\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)}. \tag{12}$$
Moreover, let it be possible that:
$$\bar\lambda_p=\max\left\{\bar\lambda_i,\ i=1,\dots,k\right\}. \tag{13}$$
Then (12) and (13) give $\sum_{i=1}^{k}\bar\lambda_i w_i\left(C_i^T\bar X+d_i\right)\le\bar\lambda_p\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)<\lambda^*\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)$, which contradicts (9). Therefore, (II) is verified.
It remains to show:
$$\bar\lambda\left(P_i^T\bar X+q_i\right)\le 1,\qquad i=1,\dots,k. \tag{III}$$
From (10), it follows that $\bar\theta-\epsilon\le\bar\lambda_i$. Furthermore, according to the definitions $\bar\theta=\max\{\bar\lambda_i,\ i=1,\dots,k\}$, $\bar\lambda=\bar\theta-\epsilon$, and $\bar\lambda_i=\frac{1}{P_i^T\bar X+q_i}$, $i=1,\dots,k$, we conclude:
$$\bar\lambda\left(P_i^T\bar X+q_i\right)=(\bar\theta-\epsilon)\left(P_i^T\bar X+q_i\right)\le\bar\lambda_i\left(P_i^T\bar X+q_i\right)=1,\qquad i=1,\dots,k.$$
Thus, (III) is demonstrated.
Now define $\bar Y=\bar\lambda\bar X$. To show $(\bar Y,\bar\lambda)\in F$, the following must hold:
(a) $\bar\lambda\ge 0$: due to (10), $\sup\epsilon=\bar\theta-\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)}$. As a result, $\bar\lambda=\bar\theta-\epsilon>\bar\theta-\sup\epsilon=\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)}\ge 0$.
(b) $\bar Y\ge 0$: since $\bar X\in S$, we have $\bar X\ge 0$; consequently, $\bar Y=\bar\lambda\bar X\ge 0$.
(c) $P_i^T\bar Y+\bar\lambda q_i\le 1$, $i=1,\dots,k$: this follows from $\bar Y=\bar\lambda\bar X$ and (III).
(d) $A\bar Y-\bar\lambda b\le 0$: $\bar X\in S$ implies $A\bar X-b\le 0$; therefore, $A\bar Y-\bar\lambda b=\bar\lambda\left(A\bar X-b\right)\le 0$.
In what follows, it is shown that the point $(\bar Y,\bar\lambda)$ constructed above contradicts the optimality of $(Y^*,\lambda^*)$ for (5). According to (10), we have:
$$\epsilon<\bar\theta-\lambda^*\frac{\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)}{\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)}. \tag{14}$$
It follows directly from (14) that:
$$\lambda^*\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)<(\bar\theta-\epsilon)\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right). \tag{15}$$
Since $\bar Y=\bar\lambda\bar X$ and $Y^*=\lambda^*X^*$, the following two equations hold:
$$\sum_{i=1}^{k}w_i\left(C_i^TY^*+\lambda^*d_i\right)=\lambda^*\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right), \tag{16}$$
$$\bar\lambda\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)=\sum_{i=1}^{k}w_i\left(C_i^T\bar Y+\bar\lambda d_i\right). \tag{17}$$
Since $\bar\lambda=\bar\theta-\epsilon$, (15)–(17) imply:
$$\sum_{i=1}^{k}w_i\left(C_i^TY^*+\lambda^*d_i\right)=\lambda^*\sum_{i=1}^{k}w_i\left(C_i^TX^*+d_i\right)<(\bar\theta-\epsilon)\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)=\bar\lambda\sum_{i=1}^{k}w_i\left(C_i^T\bar X+d_i\right)=\sum_{i=1}^{k}w_i\left(C_i^T\bar Y+\bar\lambda d_i\right). \tag{18}$$
Directly from (18), we have $\sum_{i=1}^{k}w_i\left(C_i^TY^*+\lambda^*d_i\right)<\sum_{i=1}^{k}w_i\left(C_i^T\bar Y+\bar\lambda d_i\right)$, which contradicts the optimality of $(Y^*,\lambda^*)$ for (5). Therefore, $X^*=\frac{Y^*}{\lambda^*}$ is optimal for (2). □

3. Numerical Example

To illustrate the method and to make comparisons, numerical examples taken from different references are considered. In addition to the solutions reported in those references, the results of the proposed method are compared with the GA (genetic algorithm) of the Global Optimization Toolbox of MATLAB R2016b.
Example 1
([29]).
$$\text{Maximize } F(X)=F_1(X)+F_2(X)+F_3(X)+F_4(X)=\frac{4X_1-3X_2-3X_3-50}{3X_2+3X_3+50}+\frac{-3X_1-4X_3-50}{4X_1+4X_2+5X_3+50}+\frac{-X_1-2X_2-4X_3-50}{X_1+5X_2+5X_3+50}+\frac{X_1-2X_2-4X_3-50}{5X_2+4X_3+50}$$
$$\text{s.t. } S=\left\{2X_1+X_2+5X_3\le 10,\ X_1+6X_2+2X_3\le 10,\ 9X_1-7X_2-3X_3\le 10,\ X_1,X_2,X_3\ge 0\right\}. \tag{19}$$
Information related to (19), including the maxima, minima, ranges, and membership functions of the objectives, is listed in Table 1.
Problem (5) is formulated for (19) with $w_i=F_i^{max}-F_i^{min}$, $i=1,\dots,4$, as follows:
$$\text{Maximize } (10\times 0.4-9.4486\times 0.1+5.385\times 0.2)Y_1+(3\times 0.4+40.3128\times 0.1+35.0058\times 0.1+18.8476\times 0.2)Y_2+(3\times 0.4+9.238\times 0.1+11.6686\times 0.1+2.154\times 0.2)Y_3+(50\times 0.4-10.4938\times 0.1+26.9251\times 0.2)\lambda=4.1321Y_1+12.5014Y_2+3.7215Y_3+24.3356\lambda$$
$$\text{s.t. } F=\left\{2Y_1+Y_2+5Y_3-10\lambda\le 0,\ Y_1+6Y_2+2Y_3-10\lambda\le 0,\ 9Y_1-7Y_2-3Y_3-10\lambda\le 0,\ 3Y_2+3Y_3+50\lambda\le 1,\ 4Y_1+4Y_2+5Y_3+50\lambda\le 1,\ Y_1+5Y_2+5Y_3+50\lambda\le 1,\ 5Y_2+4Y_3+50\lambda\le 1,\ Y_1,Y_2,Y_3,\lambda\ge 0\right\}. \tag{20}$$
Solving (20) gives $(Y^*,\lambda^*)=(0,\ 0.0286,\ 0,\ 0.0171)$. The results are summarized in Table 2.
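As a sanity check, the objective of (19) can be evaluated at the reported optimizer $X^*=Y^*/\lambda^*=(0,\ 1.6667,\ 0)$. The snippet below uses the numerator signs as reconstructed here (the extraction of the article dropped minus signs, so treat this purely as a consistency check):

```python
# Objective of (19) at the reported optimum X* = (0, 5/3, 0).
def F(x):
    x1, x2, x3 = x
    return ((4*x1 - 3*x2 - 3*x3 - 50) / (3*x2 + 3*x3 + 50)
            + (-3*x1 - 4*x3 - 50) / (4*x1 + 4*x2 + 5*x3 + 50)
            + (-x1 - 2*x2 - 4*x3 - 50) / (x1 + 5*x2 + 5*x3 + 50)
            + (x1 - 2*x2 - 4*x3 - 50) / (5*x2 + 4*x3 + 50))

value = F([0.0, 5/3, 0.0])
print(value)    # approximately -3.711, the reported optimal value up to sign garbling
```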
Example 2
([33]).
$$\text{Maximize } F(X)=F_1(X)+F_2(X)+F_3(X)+F_4(X)=\frac{37X_1+73X_2+13}{13X_1+13X_2+13}+\frac{-63X_1+18X_2-39}{13X_1+26X_2+13}+\frac{13X_1+13X_2+13}{63X_1-18X_2+39}+\frac{-13X_1-26X_2-13}{37X_1+73X_2+13}$$
$$\text{s.t. } S=\left\{5X_1-3X_2=3,\ 1.5\le X_1\le 3,\ X_2\ge 0\right\}. \tag{21}$$
The maxima and minima of the objectives were obtained, and the resulting membership functions are shown in Table 3.
In the following, (5) is formulated for (21) with $w_i=F_i^{max}-F_i^{min}$, $i=1,\dots,4$:
$$\text{Maximize } -71.4271Y_1+98.6811Y_2-70.0807\lambda$$
$$\text{s.t. } F=\left\{5Y_1-3Y_2-3\lambda=0,\ Y_1-3\lambda\le 0,\ -Y_1+1.5\lambda\le 0,\ 13Y_1+13Y_2+13\lambda\le 1,\ 13Y_1+26Y_2+13\lambda\le 1,\ 63Y_1-18Y_2+39\lambda\le 1,\ 37Y_1+73Y_2+13\lambda\le 1,\ Y_1,Y_2,\lambda\ge 0\right\}. \tag{22}$$
Solving (22) gives $(Y^*,\lambda^*)=(0.0072,\ 0.0096,\ 0.0026)$. The results are summarized in Table 4.
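Again, the reported optimizer $X^*=(3,\ 4)$ can be checked directly against the objective of (21). The term signs below follow the reconstruction used here (minus signs were lost in extraction), so this is a consistency check only:

```python
# Objective of (21) at the reported optimum X* = (3, 4).
def F(x1, x2):
    return ((37*x1 + 73*x2 + 13) / (13*x1 + 13*x2 + 13)
            - (63*x1 - 18*x2 + 39) / (13*x1 + 26*x2 + 13)
            + (13*x1 + 13*x2 + 13) / (63*x1 - 18*x2 + 39)
            - (13*x1 + 26*x2 + 13) / (37*x1 + 73*x2 + 13))

value = F(3, 4)
print(value)    # approximately 3.2917, matching the reported 3.2916 up to rounding
```

Note that $(3,4)$ also satisfies the equality constraint, since $5\times 3-3\times 4=3$.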
Example 3
([28]).
$$\text{Maximize } F(X)=F_1(X)+F_2(X)=\frac{3.333X_1+3X_2+1}{1.6666X_1+X_2+1}+\frac{4X_1+3X_2+1}{X_1+X_2+1}$$
$$\text{s.t. } S=\left\{5X_1+4X_2\le 10,\ 2X_1+X_2\ge 2,\ X_1\ge 0.1,\ X_2\ge 0.1,\ X_1,X_2\ge 0\right\}. \tag{23}$$
The data related to (23) are summarized in Table 5.
Problem (5) is formulated for (23) with $w_i=F_i^{max}-F_i^{min}$, $i=1,2$, as below:
$$\text{Maximize } 3.4058Y_1+2.8878Y_2-3.0582\lambda$$
$$\text{s.t. } F=\left\{5Y_1+4Y_2-10\lambda\le 0,\ -2Y_1-Y_2+2\lambda\le 0,\ -Y_1+0.1\lambda\le 0,\ -Y_2+0.1\lambda\le 0,\ 1.666Y_1+Y_2+\lambda\le 1,\ Y_1+Y_2+\lambda\le 1,\ Y_1,Y_2,\lambda\ge 0\right\}. \tag{24}$$
Solving (24) gives $(Y^*,\lambda^*)=(0.0282,\ 0.6706,\ 0.2824)$. The results are summarized in Table 6.
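As with the previous examples, the reported optimizer $X^*=Y^*/\lambda^*=(0.1,\ 2.375)$ can be checked by evaluating the objective of (23) directly:

```python
# Objective of (23) at the reported optimum X* = (0.1, 2.375).
def F(x1, x2):
    return ((3.333*x1 + 3*x2 + 1) / (1.6666*x1 + x2 + 1)
            + (4*x1 + 3*x2 + 1) / (x1 + x2 + 1))

value = F(0.1, 2.375)
print(value)    # approximately 4.8415, matching the reported optimal value
```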

4. Conclusions and Discussion

In this paper, we transformed the S-LFP into a weighted LPP. We then proved that, for weights specified according to the ranges of the linear ratios, the optimal solution of the LPP is a global optimal solution of the S-LFP. To design the method, we used the membership functions of the objectives instead of the objectives themselves. This replacement enabled us to cover all problems in the form of the S-LFP except those in which a denominator vanishes at a feasible point. Numerical examples were solved and comparisons were made. The results demonstrate that our method reached the global optimal solutions successfully with fewer expenses and complexities. The numerical examples also showed that the GA and the proposed method of [32] are reliable enough to be used for the S-LFP. However, the method of Wang and Shen [29] cannot be considered a global optimization technique, because their solution was dominated by those of the other methods in Example 1.
Since the S-LFP is NP-hard and may have several local optimal solutions, it is possible that any global optimization technique ultimately reaches a local optimal solution instead of a global one. Therefore, we recommend that, in addition to whatever technique is used, the optimal solution of the most important part of the objective function be taken into account (see Point 3).

Author Contributions

M.B. carried out the methodology, investigation, and writing the draft. A.S.R. supervised the research, and edited and reviewed the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by ST-2019-016.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Colantoni, C.S.; Manes, R.P.; Whinston, A. Programming, Profit Rates and Pricing Decisions. Account. Rev. 1969, 44, 467–481. [Google Scholar]
  2. Almogy, Y.; Levin, O. Parametric Analysis of a Multi-Stage Stochastic Shipping Problem. Oper. Res. Int. J. 1970, 69, 359–370. [Google Scholar]
  3. Rao, M.R. Cluster Analysis and Mathematical Programming. J. Am. Stat. Assoc. 1971, 66, 622–626. [Google Scholar] [CrossRef]
  4. Konno, H.; Inori, M. Bond Portfolio Optimization by Bilinear Fractional Programming. J. Oper. Res. Soc. Jpn. 1989, 32, 143–158. [Google Scholar] [CrossRef]
  5. Drezner, Z.; Schaible, S.; Simchi-Levi, D. Queueing Location Problems on the Plane. Nav. Res. Logist. 1990, 37, 929–935. [Google Scholar] [CrossRef]
  6. Zhang, S. Stochastic Queue Location Problems. Doctoral Dissertation, Econometric Institute Erasmus University, Rotterdam, The Netherlands, 1991. [Google Scholar]
  7. Falk, J.E.; Palocsay, S.W. Optimizing the Sum of Linear Fractional Functions. In Recent Advances in Global Optimization; Princeton University Press: Princeton, NJ, USA, 1991; pp. 221–258. [Google Scholar]
  8. Mathis, F.H.; Mathis, L.J. A Nonlinear Programming Algorithm for Hospital Management. SIAM Rev. 1995, 37, 230–234. [Google Scholar] [CrossRef]
  9. Schaible, S. Fractional Programming. In Handbook of Global Optimization. Nonconvex Optimization and Its Applications; Horst, R., Pardalos, P.M., Eds.; Springer: Boston, MA, USA, 1995; Volume 2. [Google Scholar]
  10. Sawik, B. Downside Risk Approach for Multi-Objective Portfolio Optimization. In Operations Research Proceedings; Metzler, J.B., Ed.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 191–196. [Google Scholar]
  11. Mititelu, S.T. Efficiency and Duality for Multiobjective Fractional Variational Problems with (ρ, b)-Quasiinvexity. Yu J. Oper. Res. 2016, 19, 85–99. [Google Scholar] [CrossRef]
  12. Treanţă, S.; Arana-Jiménez, M.; Antczak, T. A Necessary and Sufficient Condition on the Equivalence Between Local and Global Optimal Solutions in Variational Control Problems. Nonlinear Anal. Theory Methods Appl. 2020, 191, 111640. [Google Scholar] [CrossRef]
  13. Freund, R.W.; Jarre, F. Solving the Sum-of-Ratios Problem by an Interior-Point Method. J. Glob. Optim. 2001, 19, 83–102. [Google Scholar] [CrossRef]
  14. Charnes, A.; Cooper, W.W. Programming with Linear Fractional Functionals. Nav. Res. Logist. Q. 1962, 9, 181–186. [Google Scholar] [CrossRef]
  15. Cambini, A.; Martein, L.; Schaible, S. On Maximizing a Sum of Ratios. J. Inf. Optim. Sci. 1989, 10, 65–79. [Google Scholar] [CrossRef]
  16. Almogy, Y.; Levin, Ö. A Class of Fractional Programming Problems. Oper. Res. 1971, 19, 57–67. [Google Scholar] [CrossRef]
  17. Dinkelbach, W. On Nonlinear Fractional Programming. Manag. Sci. 1967, 13, 492–498. [Google Scholar] [CrossRef]
  18. Schaible, S.; Shi, J. Fractional Programming: The Sum-of-Ratios Case. Optim. Methods Softw. 2003, 18, 219–229. [Google Scholar] [CrossRef]
  19. Konno, H.; Kuno, T.; Yajima, Y. Global Minimization of a Generalized Convex Multiplicative Function. J. Glob. Optim. 1994, 4, 47–62. [Google Scholar] [CrossRef]
  20. Konno, H.; Fukaishi, K. A Branch and Bound Algorithm for Solving Low Rank Linear Multiplicative and Fractional Programming Problems. J. Glob. Optim. 2000, 18, 283–299. [Google Scholar] [CrossRef]
  21. Dür, M.; Horst, R.; Van Thoai, N. Solving Sum-of-Ratios Fractional Programs Using Efficient Points. Optimization 2001, 49, 447–466. [Google Scholar] [CrossRef]
  22. Benson, H.P. Global Optimization of Nonlinear Sums of Ratios. J. Math. Anal. Appl. 2001, 263, 301–315. [Google Scholar] [CrossRef] [Green Version]
  23. Kuno, T. A Branch-and-Bound Algorithm for Maximizing the Sum of Several Linear Ratios. J. Glob. Optim. 2002, 22, 155–174. [Google Scholar] [CrossRef]
  24. Benson, H.P. Using Concave Envelopes to Globally Solve the Nonlinear Sum of Ratios Problem. J. Glob. Optim. 2002, 22, 343–364. [Google Scholar] [CrossRef]
  25. Benson, H.P. Global Optimization Algorithm for the Nonlinear Sum of Ratios Problem. J. Optim. Theory Appl. 2002, 112, 1–29. [Google Scholar] [CrossRef]
  26. Tuy, H. Monotonic Optimization: Problems and Solution Approaches. SIAM J. Optim. 2000, 11, 464–494. [Google Scholar] [CrossRef]
  27. Phuong, N.T.H.; Tuy, H. A Unified Monotonic Approach to Generalized Linear Fractional Programming. J. Glob. Optim. 2003, 26, 229–259. [Google Scholar] [CrossRef]
  28. Benson, H.P. A Simplicial Branch and Bound Duality-Bounds Algorithm for the Linear Sum-of-Ratios Problem. Eur. J. Oper. Res. 2007, 182, 597–611. [Google Scholar] [CrossRef]
  29. Wang, C.F.; Shen, P.P. A Global Optimization Algorithm for Linear Fractional Programming. Appl. Math. Comput. 2008, 204, 281–287. [Google Scholar] [CrossRef]
  30. Shen, P.; Zhang, T.; Wang, C. Solving a Class of Generalized Fractional Programming Problems Using the Feasibility of Linear Programs. J. Inequalities Appl. 2017, 2017, 147. [Google Scholar] [CrossRef] [Green Version]
  31. Shen, P.-P.; Lu, T. Regional Division and Reduction Algorithm for Minimizing the Sum of Linear Fractional Functions. J. Inequalities Appl. 2018, 2018, 1–19. [Google Scholar] [CrossRef] [Green Version]
  32. Liu, X.; Gao, Y.; Zhang, B.; Tian, F. A New Global Optimization Algorithm for a Class of Linear Fractional Programming. Mathematics 2019, 7, 867. [Google Scholar] [CrossRef] [Green Version]
  33. Shen, P.-P.; Wang, C.-F. Global Optimization for Sum of Linear Ratios Problem with Coefficients. Appl. Math. Comput. 2006, 176, 219–229. [Google Scholar] [CrossRef]
Table 1. Maxima and minima of $F_i(X)$ over the feasible region $S$ and the membership functions for (19).

| $i$ | $F_i^{min}$ | $F_i^{max}$ | $F_i^{max}-F_i^{min}$ | $\mu_i(X)$ |
| 1 | $-1.4$ | $-1$ | $0.4$ | $\frac{10X_1+3X_2+3X_3+50}{3X_2+3X_3+50}$ |
| 2 | $-0.9796$ | $-0.8824$ | $0.1$ | $\frac{-9.4486X_1+40.3128X_2+9.2387X_3-10.4938}{4X_1+4X_2+5X_3+50}$ |
| 3 | $-1$ | $-0.9143$ | $0.1$ | $\frac{35.0058X_2+11.6686X_3}{X_1+5X_2+5X_3+50}$ |
| 4 | $-1.1$ | $-0.9143$ | $0.2$ | $\frac{5.385X_1+18.8476X_2+2.154X_3+26.9251}{5X_2+4X_3+50}$ |
Table 2. Optimal solution and optimal value for (19) obtained by different methods.

| Method | $X^*$ | $F(X^*)$ | Iterations |
| This article | $(0,\ 1.6667,\ 0)$ | $-3.711$ | Non-iterative |
| [29] | $(0,\ 0.625,\ 1.875)$ | $-4$ | 32 |
| [32] | $(0,\ 1.6667,\ 0)$ | $-3.711$ | 169 |
| GA | $(0.0113,\ 1.257,\ 1.1059)$ | $-3.7213$ | 81 |
Table 3. Maxima and minima of $F_i(X)$ over the feasible region $S$ and the membership functions for (21).

| $i$ | $F_i^{min}$ | $F_i^{max}$ | $F_i^{max}-F_i^{min}$ | $\mu_i(X)$ |
| 1 | $3.4231$ | $4$ | $0.58$ | $\frac{-13.001X_1+49.4015X_2-54.6027}{13X_1+13X_2+13}$ |
| 2 | $-1.0699$ | $-0.8077$ | $0.26$ | $\frac{-187.2285X_1+174.7422X_2-95.6953}{13X_1+26X_2+13}$ |
| 3 | $0.483$ | $0.6667$ | $0.18$ | $\frac{-94.8775X_1+118.0947X_2-31.7746}{63X_1-18X_2+39}$ |
| 4 | $-0.4017$ | $0.3750$ | $0.78$ | $\frac{2.3985X_1+4.2798X_2-10.0140}{37X_1+73X_2+13}$ |
Table 4. Optimal solution and optimal value for (21) obtained by different methods.

| Method | $X^*$ | $F(X^*)$ | Iterations |
| This article | $(3,\ 4)$ | $3.2916$ | Non-iterative |
| [33] | $(3,\ 4)$ | $3.2916$ | 9 |
| [32] | $(3,\ 4)$ | $3.2916$ | 693 |
| GA | $(3,\ 4)$ | $3.2916$ | 54 |
Table 5. Maxima and minima of $F_i(X)$ over the feasible region $S$ and the membership functions for (23).

| $i$ | $F_i^{min}$ | $F_i^{max}$ | $F_i^{max}-F_i^{min}$ | $\mu_i(X)$ |
| 1 | $1.6649$ | $2.3883$ | $0.72$ | $\frac{0.7731X_1+1.8456X_2-0.9192}{1.6666X_1+X_2+1}$ |
| 2 | $2.3448$ | $2.9735$ | $0.63$ | $\frac{2.6327X_1+1.0422X_2-2.1390}{X_1+X_2+1}$ |
Table 6. Optimal solution and optimal value for (23) obtained by different methods.

| Method | $X^*$ | $F(X^*)$ | Iterations |
| This article | $(0.1,\ 2.3750)$ | $4.8415$ | Non-iterative |
| [28] | $(0.1,\ 2.3750)$ | $4.8415$ | 4 |
| GA | $(0.1,\ 2.3750)$ | $4.8415$ | 82 |

