A New Global Optimization Algorithm for a Class of Linear Fractional Programming

Abstract: In this paper, we propose a new global optimization algorithm for solving a class of large-scale linear fractional programming problems. First, the original problem is transformed into an equivalent nonlinear programming problem by introducing p auxiliary variables, together with p new nonlinear equality constraints. By classifying the signs of the coefficients of all linear functions in the objective function of the original problem, four index sets are obtained, namely I_i^+, I_i^-, J_i^+ and J_i^-. Combined with the multiplication rule for real numbers, the objective function and the constraints of the equivalent problem are linearized into a lower-bound linear relaxation programming problem. Our lower-bounding method only requires e_i^T x + f_i ≠ 0, and there is no need to convert the numerators to non-negative form in advance for some special problems. An output-space branch-and-bound algorithm based on solving linear programming problems is proposed, and its convergence is proved. Finally, in order to illustrate the feasibility and effectiveness of the algorithm, we report a series of numerical experiments and discuss the advantages and disadvantages of our algorithm in light of the numerical results.


Introduction
Fractional programming is an important branch of nonlinear optimization, and it has attracted interest from researchers for several decades. The sum-of-linear-ratios problem is a special class of fractional programming problem with wide applications, such as transportation schemes, economics [1], investment and production control [2-4], and multi-objective portfolios [5]. The primary challenges in solving linear fractional programming (LFP) arise from a lack of useful properties (convexity or otherwise) and from the number of ratios and the dimension of the decision space. Theoretically, it is NP-hard [6,7]. In addition, an LFP problem may have several local optimal solutions [8], which interferes with finding the global optimal solution and increases the difficulty of the problem. It is therefore worthwhile to study this kind of problem. In this paper, we investigate the following linear fractional programming problem: where the feasible domain X = {x ∈ R^n | Ax ≤ b, x ≥ 0} is n-dimensional, nonempty, and bounded; p ≥ 2, A ∈ R^{m×n}, b ∈ R^m, c_i ∈ R^n, d_i ∈ R, e_i ∈ R^n, f_i ∈ R, and e_i^T x + f_i ≠ 0. In practical applications, p usually does not exceed 10. At present, many algorithms have been proposed to solve the LFP problem with a limited number of ratios. For instance, in 1962, Charnes et al. gave an effective elementary simplex method for the case p = 1 [9]. For p = 2, Konno proposed a similar parametric elementary simplex method on the basis of reference [9], which can be used to solve large-scale problems [10]. When p = 3, Konno et al. constructed an effective heuristic algorithm by developing the parametric simplex algorithm [11]. When p > 3, Shen et al.
reduced the original nonconvex programming problem to a series of linear programming problems by using equivalent transformation and linearization techniques, in order to solve the linear fractional problem with coefficients [12]. Nguyen and Tuy considered a unified monotonic approach to generalized linear fractional programming [13]. Benson presented a simplicial branch-and-bound duality-bounds algorithm by applying Lagrangian duality theory [6]. Jiao et al. gave a new interval-reduction branch-and-bound algorithm for globally solving the sum-of-linear-ratios problem over the denominator outcome space [14]. By exploring a well-defined nonuniform mesh, Shen et al. solved an equivalent optimization problem and proposed a fully polynomial-time approximation algorithm [15]. In the same year, Hu et al. proposed a new branch-and-bound algorithm for solving low-dimensional linear fractional programming [16]. Shen et al. introduced a practicable regional division and reduction algorithm for minimizing the sum of linear fractional functions over a polyhedron [17]. Using a suitable transformation and linearization technique, Zhang and Wang proposed a new branch-and-bound algorithm with two reducing techniques to solve generalized linear fractional programming [18]. By adopting the exponent transformation technique, Jiao et al. proposed a branch-and-bound algorithm with a three-level linear relaxation to solve the generalized polynomial ratios problem with coefficients [19]. Based on an image space in which the objective function is easy to handle in a certain direction, Falk et al. transformed the problem into an "image space" by introducing new variables, and then analyzed and solved the linear fractional programming problem [20]. Gao et al.
transformed the original problem into an equivalent bilinear programming problem, used the convex and concave envelopes of bilinear functions to determine a lower bound on the optimal value of the original problem, and then proposed a branch-and-bound algorithm [21]. By dividing the box in which the decision variables are located, Ji et al. proposed a new deterministic global optimization algorithm by relaxing the denominator on each box [22]. Furthermore, according to references [23,24], there are other algorithms that can be used to solve the LFP problem.
In this article, a new branch-and-bound algorithm based on branching in the output-space is proposed for globally solving the LFP problem. To this end, an equivalent optimization problem (EOP) is presented. Next, the objective function and constraint functions of the equivalent problem are relaxed using four index sets (i.e., I_i^+, I_i^-, J_i^+, J_i^-) and the multiplication rules for real numbers. Based on this operation, a linear relaxation programming problem that provides a reliable lower bound for the original problem is constructed. Finally, a new branch-and-bound algorithm for the LFP problem is designed. Compared with the methods mentioned above (e.g., [9-15,17,18,23,24]), the contribution of this research is three-fold. First of all, the lower bound of the subproblem at each node can be obtained easily, solely by solving linear programs. Secondly, the performance of the algorithm depends on the difference between the number n of decision variables and the number p of ratios. Thirdly, the problem in this article is more general than those considered in [14,17,18], since we only require e_i^T x + f_i ≠ 0 and do not need to convert c_i^T x + d_i < 0 to c_i^T x + d_i ≥ 0 for each i. However, the problem solved by our model must ensure that every decision variable is non-negative, which is a limitation of the problem we study. Finally, computational results on problems with a large number of ratio terms are reported below to illustrate the feasibility and validity of the proposed algorithm.
This paper is organized as follows. In Section 2, the LFP problem is transformed into the equivalent non-convex programming problem EOP. Section 3 shows how to construct a linear relaxation problem of LFP. In Section 4, we give the branching rules on a hyper-rectangle. In Section 5, an output-space branch-and-bound algorithm is presented and its convergence is established. Section 6 introduces some existing test examples from the literature and gives the computational results and numerical analysis. Finally, the method of this paper is briefly reviewed, and the extension of this method to multi-objective fractional programming is discussed as future work.

The Equivalence Problem of LFP
In order to establish the equivalence problem, we introduce p auxiliary variables and let t_i = 1/(e_i^T x + f_i), i = 1, 2, ..., p. The lower and upper bounds of each t_i over X are obtained by solving linear programming problems, which yield the hyper-rectangle H of t; for a sub-hyper-rectangle H^k ⊆ H, which will be used below, the corresponding quantities are defined analogously. Finally, the LFP problem can be translated into the following equivalent optimization problem (EOP). Theorem 1. A feasible solution x* is a global optimal solution of the LFP problem if and only if the EOP problem attains its global optimal solution at (x*, t*), where t*_i = 1/(e_i^T x* + f_i) for every i = 1, 2, ..., p. Proof of Theorem 1. If x* is a globally optimal solution of problem LFP, let t*_i = 1/(e_i^T x* + f_i) for each i; then (x*, t*) is a feasible solution of EOP with objective value f(x*). Let (x, t) be any feasible solution of problem EOP; then t_i = 1/(e_i^T x + f_i), and the EOP objective value at (x, t) equals the LFP objective value at x. Using the optimality of x*, the EOP objective value at (x*, t*) is no larger than that at (x, t); hence, since x* ∈ X and t*_i = 1/(e_i^T x* + f_i), a global optimal solution (x*, t*) of problem EOP is found. Conversely, suppose problem EOP is solved and its optimal solution (x*, t*) is obtained; then x* ∈ X and, by the equality constraints, t*_i = 1/(e_i^T x* + f_i). For any feasible x of LFP, the pair (x, t) with t_i = 1/(e_i^T x + f_i) is feasible for EOP, so by the optimality of (x*, t*) we obtain f(x*) ≤ f(x); that is, x* is a global optimal solution of problem LFP. Thus, the LFP problem is equivalent to EOP.
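As a concrete illustration, the initial bounds on each t_i can be obtained from a pair of linear programs that minimize and maximize the affine denominator over X. The following sketch uses SciPy's `linprog` as an analogue of the Matlab `linprog` used in the paper's experiments; the function name and interface are our own, and we assume, as the paper does, that X is bounded and each denominator keeps one sign on X.

```python
import numpy as np
from scipy.optimize import linprog

def denominator_bounds(A, b, e, f):
    """Bound t_i = 1/(e_i^T x + f_i) over X = {x : A x <= b, x >= 0}.

    Returns arrays (t_lo, t_hi), assuming X is bounded and every
    denominator keeps one sign on X (the paper requires e_i^T x + f_i != 0).
    """
    p = len(e)
    t_lo, t_hi = np.empty(p), np.empty(p)
    for i in range(p):
        # min and max of the affine denominator over the polytope X;
        # linprog's default variable bounds are already x >= 0
        lo = linprog(e[i], A_ub=A, b_ub=b).fun + f[i]
        hi = -linprog(-e[i], A_ub=A, b_ub=b).fun + f[i]
        # 1/(.) is monotone decreasing on an interval of constant sign,
        # so the bounds of t_i are the reciprocals in swapped order
        t_lo[i], t_hi[i] = 1.0 / hi, 1.0 / lo
    return t_lo, t_hi
```

This is exactly 2p linear programs, which is the initialization cost the paper later contrasts with the 2n programs needed by decision-space methods.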

A New Linear Relaxation Technique
In this section, we show how to construct a linear relaxation programming problem (LRP) for problem LFP. For convenience of expression, the necessary notation is introduced first. Then, based on the above discussion, we obtain a linear relaxation programming problem LRP of problem EOP by loosening the feasible region of the equivalent problem. Likewise, the linear relaxation subproblem LRP^k of problem EOP on a sub-hyper-rectangle H^k ⊆ H is defined. Accordingly, when the algorithm reaches iteration k, we only need to solve problem LRP^k, whose optimal value v(LRP^k) is a lower bound on the global optimal value v(EOP^k) of problem EOP on the rectangle H^k ⊆ H. The optimal value v(LRP^k) is therefore also a valid lower bound on the global optimal value v(LFP^k) of the original problem on H^k. Thus, solving problem LRP^k yields a lower bound on the global optimum of problem LFP on the rectangle H^k. The method for updating the upper bound is explained in detail in Remark 4.
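A minimal sketch of the sign-based under-estimation is given below. It reflects our reading of the text: the sets I_i^+, I_i^-, J_i^+, J_i^- classify coefficients by sign, and since x ≥ 0 on X and t_i ∈ [t_lo_i, t_hi_i], each bilinear term c_ij x_j t_i is bounded below by substituting the appropriate bound of t_i. The function name is hypothetical, and the paper's full LRP also linearizes the equality constraints t_i(e_i^T x + f_i) = 1, which this sketch omits.

```python
import numpy as np

def relaxed_objective_coeffs(c, d, t_lo, t_hi):
    """Linear under-estimator in x of sum_i t_i * (c_i^T x + d_i).

    For x_j >= 0, the bilinear term c_ij * x_j * t_i is bounded below by
    c_ij * x_j * t_lo[i] when c_ij >= 0 (positive-coefficient set) and by
    c_ij * x_j * t_hi[i] when c_ij < 0 (negative-coefficient set).
    Returns (coef, const) with coef^T x + const <= objective on X x H.
    """
    p, n = c.shape
    coef = np.zeros(n)
    const = 0.0
    for i in range(p):
        # multiplication rule: pick the t-bound that keeps the inequality
        coef += np.where(c[i] >= 0, c[i] * t_lo[i], c[i] * t_hi[i])
        const += d[i] * t_lo[i] if d[i] >= 0 else d[i] * t_hi[i]
    return coef, const
```

Minimizing coef^T x + const over X is then an ordinary linear program, which is the lower-bound subproblem solved at each node.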

Branching Process
In order to facilitate the branching of the algorithm, we adopt the idea of dichotomy and give an adaptive hyper-rectangular partition method, which depends on ω.
Let H^k ⊆ H denote the current hyper-rectangle to be divided; the corresponding optimal solution of problem LFP^k is denoted by x^k, and the corresponding optimal solution of the linear relaxation problem LRP^k is denoted by (x^k, t^k). Obviously, x^k ∈ X and t^k ∈ H^k. If ω = 0, the hyper-rectangle is bisected along its longest edge; otherwise, we find the first index j with t^k_j ∈ arg max ω and split H^k along edge j. Remark 1. When ω = 0, t^k is located at the lower-left or upper-right vertex of the hyper-rectangle H^k, and we divide the longest edge of the rectangle by uniform bisection. When ω ≠ 0, the dividing method depends on the position of t^k in the hyper-rectangle H^k, and the selected edge µ is as wide as possible while ensuring that t^k_µ is as close to the midpoint of that edge as possible.
Remark 2. The advantage of this way of dividing the hyper-rectangle is that it increases the diversity of hyper-rectangle segmentation, although to some extent it increases the amount of extra computation. Other rules may perform better.
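One plausible reading of this adaptive rule can be sketched as follows. Here ω_j is taken as the distance from t^k_j to the nearer endpoint of edge j, so ω = 0 exactly when t^k sits at a vertex; this interpretation and the function name are assumptions, not the paper's formal definition.

```python
import numpy as np

def branch(H, t_k):
    """Adaptive bisection of a hyper-rectangle H given as a list of (lo, hi).

    If t_k lies at a vertex (omega == 0), bisect the longest edge at its
    midpoint; otherwise split the edge with the largest omega at t_k[j],
    which keeps the split point close to that edge's interior.
    """
    lo = np.array([h[0] for h in H])
    hi = np.array([h[1] for h in H])
    omega = np.minimum(t_k - lo, hi - t_k)   # distance to nearer endpoint
    if np.all(omega <= 1e-12):
        j = int(np.argmax(hi - lo))          # longest edge, uniform bisection
        cut = 0.5 * (lo[j] + hi[j])
    else:
        j = int(np.argmax(omega))            # widest feasible split position
        cut = t_k[j]
    H1, H2 = list(H), list(H)
    H1[j], H2[j] = (lo[j], cut), (cut, hi[j])
    return H1, H2
```

Either way, each call produces exactly two sub-rectangles whose union is H, as required by Step 4 of the algorithm.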

Output-Space Branch-and-Bound Algorithm and Its Convergence
To allow a full description of the algorithm, when the algorithm reaches iteration k, we use the following notation: H^k is the hyper-rectangle to be refined at the current iteration step; Q is the set of all feasible solutions of LFP; Ω is the set of hyper-rectangles remaining after pruning; U_k is the upper bound of the global optimal value of the LFP problem at iteration k; L_k is the lower bound of the global optimal value of the LFP problem at iteration k; L(H^k) denotes the optimal value of problem LRP^k on H^k, and (x^k, t^k) is its corresponding optimal solution.
Using the above, a description of the output-space branch-and-bound algorithm for solving the problem LFP is as follows.
Step 1. Set the tolerance ε > 0 and construct the initial hyper-rectangle H^0. Solve the linear programming problem LRP^0 on the hyper-rectangle H^0; record the corresponding optimal solution and optimal value as (x^0, t^0) and L(H^0), respectively. Then L_0 = L(H^0) is the initial lower bound of the global optimal value of LFP. Compute the initial upper bound U_0, set the initial iteration number k = 1, and go to Step 2.
Step 2. If U_k − L_k ≤ ε, stop the iteration and output the current global optimal solution x* of the LFP problem and the global optimal value f(x*); otherwise, go to Step 3.
Step 3. Select from Ω the hyper-rectangle H^k that attains the current lower bound L_k, i.e., L_k = L(H^k).
Step 4. Using the rectangular branching process of Section 4, divide H^k into two sub-rectangles and continue.
Step 5. Solve the linear relaxation subproblems on the two sub-rectangles, update the upper and lower bounds accordingly, and go to Step 2.
Remark 3. The branching target of our branch-and-bound algorithm is the p-dimensional output-space, so our algorithm can be called OSBBA.

Remark 4. It can be seen from Step 4 and Step 5 that the number of elements in Q does not exceed two at each iterative step, and only two function values are calculated in Step 5 to update the upper bound.
Remark 5. In Step 4, each sub-hyper-rectangle H^{k_i} with L(H^{k_i}) < U_k is saved into Ω after each branching, which implements the pruning operation of the branching algorithm.
Remark 6. The convergence rate of the algorithm OSBBA is related to the optimality tolerance ε and the initial hyper-rectangle H^0. It can be seen from Theorem 5 below that the convergence rate of OSBBA is proportional to the size of the tolerance ε and inversely proportional to the diameter of the initial hyper-rectangle H^0. In general, the tolerance ε is given in advance, so the convergence rate mainly depends on the diameter of the initial hyper-rectangle H^0.
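Steps 1-5 can be sketched as a compact loop. Here `lower_bound`, `upper_value`, and `branch_fn` are hypothetical placeholders for the LRP solve, the LFP objective evaluation, and the Section 4 bisection; the loop itself is only a schematic of the paper's procedure, not its exact implementation.

```python
def osbba(lower_bound, upper_value, branch_fn, H0, eps=1e-6, max_iter=10000):
    """Schematic OSBBA loop (Steps 1-5); bounding subroutines are abstract.

    lower_bound(H) -> (L, x, t): optimal value/solution of LRP on box H.
    upper_value(x) -> f(x): true LFP objective at the feasible point x.
    branch_fn(H, t) -> (H1, H2): the adaptive bisection of Section 4.
    """
    L0, x0, t0 = lower_bound(H0)
    best_x, U = x0, upper_value(x0)       # Step 1: initial bounds
    active = [(L0, H0, x0, t0)]           # Omega: surviving boxes
    for _ in range(max_iter):
        active.sort(key=lambda node: node[0])
        L, H, xk, tk = active.pop(0)      # Step 3: box attaining L_k
        if U - L <= eps:                  # Step 2: termination test
            break
        for Hi in branch_fn(H, tk):       # Step 4: split into two boxes
            Li, xi, ti = lower_bound(Hi)
            fi = upper_value(xi)          # Step 5: update the incumbent
            if fi < U:
                best_x, U = xi, fi
            if Li < U - eps:              # pruning (Remark 5)
                active.append((Li, Hi, xi, ti))
        if not active:
            break
    return best_x, U
```

Because only the two fresh sub-boxes are bounded per iteration, the work per step matches Remark 4: at most two LRP solves and two objective evaluations.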
According to Theorem 2, we also know that the hyper-rectangle of t shrinks gradually and the relaxed feasible region progressively approaches the original feasible region as the algorithm proceeds.
Theorem 3. (a) If the algorithm terminates within finite iterations, a globally optimal solution for LFP is found.
(b) If the algorithm generates an infinite sequence in the iterative process, then any accumulation point of the infinite sequence {x k } is a global optimal solution of the problem LFP.
Proof of Theorem 3. (a) If the algorithm is finite, assume it stops at the k-th iteration, k > 1. From the termination rule of Step 2, we know that U_k − L_k ≤ ε. Assuming that the global optimal solution is x*, we have L_k ≤ f(x*) ≤ U_k = f(x^k); combining these inequalities (5) and (6), we obtain f(x^k) − f(x*) ≤ ε, and part (a) is proven. (b) If the iteration of the algorithm is infinite, then in this process an infinite sequence {x^k} of feasible solutions of problem LFP is generated by solving problem LRP^k, and the corresponding sequence of feasible solutions of the linear relaxation problems is {(x^k, t^k)}. According to Steps 3-5 of the algorithm, we have L_k ≤ f(x*) ≤ U_k. Because the sequence {L_k = f(x^k, t^k)} is nondecreasing and bounded, and {U_k = f(x^k)} is nonincreasing and bounded, both are convergent. Taking limits on both sides of (8), we have L ≤ f(x*) ≤ U, where L = lim_{k→∞} f(x^k, t^k) and U = lim_{k→∞} f(x^k), and Formula (9) becomes the corresponding limit statement. Without loss of generality, assume the rectangular sequence satisfies H^{k+1} ⊆ H^k. In our algorithm the rectangles are subdivided continuously, so lim_{k→∞} H^k = {t*}; in this process a sequence {t^k} is generated with lim_{k→∞} t^k = t*, and correspondingly a sequence {x^k} with lim_{k→∞} x^k = x*, by the continuity of the function f(x) and Formula (10). Hence any accumulation point x* of the sequence {x^k} is a global optimal solution of the LFP problem.
From Theorem 3, we know that the algorithm in this paper is convergent. We then use Theorems 4 and 5 to show that the convergence rate of our algorithm is related to the size of p. For the detailed proof of Theorem 4, see [25]; the other concepts in the theorem are also derived from [25], and we encourage readers to consult [25] for details.
As the sub-hyper-rectangles obtained by our branching method are not necessarily congruent, take δ(H) = max_{1≤l≤s} δ(H^l), where the definition of s is given below. The definition of δ(H^l) is the same as that of Notation 1 in [25] and represents the diameter of hyper-rectangle H^l; therefore, δ(H) represents the maximum diameter of the s hyper-rectangles. In order to connect well with the content of this paper, we adjust the relevant symbols and reinterpret them. Theorem 4. Consider the big cube small cube algorithm with a bounding operation which has a rate of convergence of q ≥ 1. Furthermore, assume a feasible hyper-rectangle H and constants ε, C > 0 as before. Moreover, assume a branching process which splits the selected hyper-rectangle along each side, i.e., into s = 2^r smaller hyper-rectangles. Then the worst-case number of iterations of the big cube small cube method can be bounded from above by Formula (11). Proof of Theorem 4. The proof is similar to that of Theorem 2 in [25] and is thus omitted.
In Theorem 4, r represents the spatial dimension of the hyper-rectangle to be divided. At the same time, Tables 1 and 2 in [25] show that q = 1 is the worst case, that is, the case in which the algorithm needs to subdivide hyper-rectangles the most times during the iteration. For convenience of discussion, we assume q = 1 and give Theorem 5 to show that the convergence rate of our algorithm is related to the size of p. Theorem 5. For the algorithm OSBBA, assume that for a feasible hyper-rectangle H_p there exist a fixed positive constant C_p and a tolerance ε. In addition, assume that the branching process eventually divides the hyper-rectangle into s = 2^p small hyper-rectangles. Then, in the worst case, the number of iterations of the OSBBA algorithm when dividing the hyper-rectangle H_p can be expressed by Formula (12). We call the convergence rate of the algorithm OSBBA O(p).
Proof of Theorem 5. Setting r = p, C = C_p, q = 1, z = z_p and H = H_p in Theorem 4, the proof is similar to that of Theorem 4; the reader may refer to [25].
In addition, the algorithms in [18,26-29] subdivide the n-dimensional hyper-rectangle H_n. Similarly to Theorem 4, when they divide the hyper-rectangle H_n, the number of iterations in the worst case can also be expressed by Formula (13), where n, C_n, q_n, z_n and H_n correspond to r, C, q, z and H in (11). We also record the convergence rate of the algorithms in [18,26-29] as O(n). By means of Formulas (12) and (13), when p ≪ n, the following conclusions are drawn: (i) if z_p ≤ z_n, then p z_p ≪ n z_n holds directly; (ii) if z_p > z_n, there must be a positive number N ≥ (z_p/z_n)p + 1 such that p < (z_p/z_n)p < N holds, which means that N ≪ n implies p < (z_p/z_n)p < N ≪ n, so p z_p ≪ n z_n also holds. Both conclusions (i) and (ii) show that when p ≪ n, O(p) ≪ O(n). Remark 7. In Formula (12) of Theorem 5, q = 1, while q_n in Formula (13) is not specified, which means that O(p) is compared with O(n) in the case of slowest convergence; yet in the case p ≪ n there will always be (i) or (ii), which indicates O(p) ≪ O(n) all the more clearly.
It can be seen that O(p) and O(n) grow exponentially, but the size of p in O(p) is generally not more than 10, and p ≪ n, so our algorithm OSBBA has an advantage in solving large-scale problems when p ≤ 10 and p ≪ n. The experimental analysis of several large-scale random examples below will also refer to this again.
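As a quick arithmetic check of this scale difference, the worst-case sub-box counts 2^p and 2^n can be compared directly for sizes typical of the experiments below (a toy illustration only, not part of the algorithm):

```python
# Worst-case number of sub-boxes per full split: 2^p vs 2^n (cf. Theorem 5).
# With p <= 10 the output-space count stays in the thousands, while the
# decision-space count is astronomically large for the same instances.
for p, n in [(5, 100), (10, 1000)]:
    print(f"p = {p}: 2^p = {2 ** p};   n = {n}: 2^n = {2.0 ** n:.2e}")
```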

Numerical Examples
Now, we give several examples and randomly generated test instances to demonstrate the validity of the branch-and-bound algorithm in this paper.
We coded the algorithms in Matlab 2017a and ran the tests on a computer with an Intel(R) Core(TM) i7-4790s processor at 3.20 GHz and 4 GB of RAM, under the Microsoft Windows 7 operating system. In solving the LRPs, we use the simplex method via the linprog command in Matlab 2017a.
In Tables 1-9, the symbols in the table headers are, respectively: x*, the optimal solution of the LFP problem; f(x*), the optimal value of the objective function; Iter, the number of iterations on Problems 1-11; Ave.Iter, the average number of iterations on Problems 12-13; ε, the tolerance; Time, the CPU running time on Problems 1-11; Ave.Time, the average CPU running time on Problems 12-13; p, the number of linear fractions in the objective function; m, the number of linear constraints; n, the dimension of the decision variable; SR, the success rate of the algorithm on Problem 12. When an Ave.Time or Ave.Iter entry shows "-", it means that the algorithm failed on that problem.
As can be seen from Table 1, our algorithm can accurately obtain the global optimal solution of these 11 low-dimensional examples, which shows its effectiveness and feasibility. However, compared with other algorithms in the literature, the performance of this algorithm on these examples is relatively poor. This is because our method of constructing the lower bound is simple and easy to operate, and the branching operation is performed on the p-dimensional output-space; at the same time, our algorithm has no hyper-rectangle reduction technique, which weakens its approximation on low-dimensional problems. We also note that in these 11 examples the number p of ratios is not much smaller than the dimension n of the decision variable, whereas our algorithm requires p to be much smaller than n, which is why it is less effective on these examples. With the continuing progress of computer hardware, the gap between our algorithm and other methods on these 11 low-dimensional examples can be bridged, and practical needs mainly concern high-dimensional problems with p ≪ n. Therefore, we only use Examples 1-11 to illustrate the effectiveness and feasibility of our algorithm, and the numerical results also show that the algorithm is convergent. When our algorithm is applied to higher-dimensional problems, its performance gradually improves, as can be seen from the numerical results of Examples 12 and 13 in Tables 2-9.
where p is a positive integer, c_ij, e_ij and a_qj are randomly selected on the interval [0,1], and b_q = 1 for all q. All constant terms of the denominators and numerators are the same number, randomly generated in [1,100]. This agrees with the way random instances are generated in [18]. First, when the dimension is not more than 500, we generate 10 random examples for each group (p, m, n), compute the same examples with the algorithm OSBBA and the algorithm in [18], and then record the average number of iterations and the average CPU running time of these 10 examples in Table 2. Secondly, when the dimension n is not less than 1000, note that the algorithm in [18] needs to solve 2n linear programming problems when determining the initial hyper-rectangle, and the search space of each linear program is at least a thousand-dimensional, which is very time-consuming; indeed, when (p, m, n) = (10, 200, 500), the CPU time is already close to 1200 s. Therefore, when the dimension n is not less than 1000, we generate only five random examples and declare the algorithm to have failed when the computation time exceeds 1200 s. Besides the average number of iterations and the average CPU running time, the success rate on the five high-dimensional examples is also recorded, which is presented in Table 3. Comparing the lower-bound subproblem in [18] with the one in our algorithm, we see that the lower-bound subproblem of algorithm OSBBA only makes use of the upper and lower bounds of the denominators of the p ratios, whereas the construction in [18] also uses the upper and lower bounds of the decision variables, which requires solving 2n linear programming problems in the initial iterative step. Compared with the method in [18], the algorithm OSBBA only needs to solve 2p linear programming problems in the initialization stage, and does not need to compute the upper and lower bounds of any decision variable. It can be seen that when p is much less than n, the method of [18] spends a lot of time computing 2n linear programming problems. Moreover, its number of branches is often particularly large once the number of iterations exceeds 1, so a large number of child nodes are produced on the branch-and-bound tree, which both occupies a large amount of computer memory and takes a lot of time. The behavior of our algorithm OSBBA is the opposite. In real life, the size of p is usually not greater than 10; therefore, the number of subproblems to be solved during the branching iterations is usually small, and compared with the method in [18], the amount of computer memory occupied is not very large.
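The random-instance recipe described above can be sketched as follows. The function name and the array layout are our own; the shared constant term for all numerators and denominators follows our reading of the text.

```python
import numpy as np

def random_lfp_instance(p, m, n, rng=None):
    """Random Example-12-style instance, following the recipe in the text:
    c_ij, e_ij, a_qj uniform on [0, 1], b_q = 1 for all q, and all numerator
    and denominator constants equal to one common draw from [1, 100]."""
    rng = np.random.default_rng(rng)
    c = rng.uniform(0.0, 1.0, size=(p, n))   # numerator coefficients
    e = rng.uniform(0.0, 1.0, size=(p, n))   # denominator coefficients
    A = rng.uniform(0.0, 1.0, size=(m, n))   # constraint matrix
    b = np.ones(m)                           # right-hand side b_q = 1
    const = rng.uniform(1.0, 100.0)          # shared constant term
    d = np.full(p, const)                    # numerator constants
    f = np.full(p, const)                    # denominator constants
    return c, d, e, f, A, b
```

Since every entry of e, A and x is non-negative and the shared constant lies in [1, 100], each denominator is strictly positive on X, so the requirement e_i^T x + f_i ≠ 0 holds automatically for these instances.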
Secondly, from the results of Table 2, we can see that the computational performance of [18] in solving small-scale problems is better than that of our algorithm OSBBA. However, when the dimension of the problem exceeds 100, its computational performance gradually weakens, and the computational power of the method in [18] becomes inferior to our algorithm OSBBA. The computational performance of the algorithm OSBBA is closely related to the size of p: the smaller the p, the shorter the computing time. For the algorithm in [18], computational performance depends strongly on the size of n: the larger the n, the more time the computer consumes. It is undeniable that the method in [18] has some advantages in solving small-scale problems. However, for large-scale problems, Table 3 shows that the algorithm OSBBA is consistently superior to the algorithm in [18]. Especially when the dimension is more than 500, the success rate of our algorithm in finding the global optimal solution within 1200 s is higher than that of [18]. This is the advantage of algorithm OSBBA. In addition, for Example 12, we also use the same test method as in [18] to compare the algorithm OSBBA with the internal solver BMIBNB of the MATLAB toolbox YALMIP [26], where we only record the average CPU running time and the success rate of the two algorithms and display them in Tables 4 and 5. As can be seen from Tables 4 and 5, BMIBNB is more effective than OSBBA when solving small-scale problems, but it is sensitive to the size n of the decision variable; especially when n exceeds 100, the CPU time increases suddenly. The algorithm OSBBA is less affected by n, but for small-scale problems its computational performance is very sensitive to the number p of linear fractions. For large-scale problems, Table 5 shows similar results to Table 3.
In order to further illustrate the advantages of the algorithm OSBBA in this paper, a large number of numerical experiments were carried out on Example 13, comparing the algorithm OSBBA with the commercial software package BARON [27]. From our understanding of BARON, its branching operation is also carried out in the n-dimensional space. Similarly to [18], we can predict that BARON becomes quite time-consuming as the dimension increases. To simplify the comparison, one of the constants of the numerator and denominator is set to 100 (i.e., d_i = f_i = 100) in order to run BARON successfully. Next, we give an upper bound x̄_j for each decision variable, and randomly select it from the interval [0, 10] together with c_ij, e_ij, a_qj and b_q to form a random Example 13.
We set the tolerance of both the algorithm OSBBA and BARON to 10^-3 for the sake of fairness (the internally set accuracy of the package BARON is 10^-3 and we are unable to adjust it). For each group (m, n, p), we randomly generate 10 examples, compute the same examples with the algorithm OSBBA and the commercial software package BARON, and then record the average number of iterations and the average CPU running time of the 10 examples in Tables 6-9. As we can see from Table 6, when n is less than 100, the CPU running time (Ave.Time) and the iteration count (Ave.Iter) of our algorithm are not as good as those of BARON. In the case p = 2, 3 and n = 100, the average CPU running time (Ave.Time) and the average iteration count (Ave.Iter) of BARON are less than those of our algorithm OSBBA. For (m, n) = (10, 100) and p = 4, 5, the algorithm OSBBA is better than BARON. In the case n ≥ 200, if p ≤ 5, the average CPU running time (Ave.Time) of algorithm OSBBA is less than that of BARON, while for p > 5 the relation is reversed. According to Tables 7-9, we can also conclude that if p < 5, the algorithm OSBBA takes significantly less time than BARON. In Tables 8 and 9, for p ≤ 8, if n = 300, 500, the computation time of algorithm OSBBA is significantly more than that of BARON, while if n = 700, 900, 1000, 2000, 3000, BARON takes more time than the algorithm OSBBA. At the same time, some "-" entries can be seen in Tables 7 and 9: BARON fails in these 10 computations, which indicates that the success rate of BARON in solving high-dimensional problems is close to zero, whereas our algorithm can still obtain the global optimal solution of the problem within finitely many steps, with an overall time of no more than 420 s.
In general, when p ≪ n, our algorithm shows good computing performance. In practical applications, the size of p generally does not exceed 10. The results in Table 6 show that our algorithm is not as effective as BARON in solving small problems, but it can be seen from Tables 6-9 that this algorithm has obvious advantages in solving large-scale, high-dimensional problems. At the same time, compared with BARON, this algorithm can also solve high-dimensional problems.
The nonlinear programming method in the commercial software package BARON comes from [29]. It is a branch-and-reduce method based on the n-dimensional space in which the decision variable x lies, as can be seen from the two examples in Section 6 of [29]. In Section 5 of [29], many feasible-domain reduction techniques, including polyhedral cutting techniques, are also proposed. Although BARON combines many feasible-domain reduction techniques with this method, the experimental results in this paper show that BARON is more effective than our OSBBA method in solving small-scale linear fractional programming problems. BARON branches on a maximum of 2^n nodes, which is exponential in n, while our algorithm potentially branches on a maximum of 2^p nodes, a much smaller number. Even if these feasible-domain reductions remain effective within BARON, running the reduction procedures also increases time consumption and storage to a large extent. For this special class of nonlinear programs, namely linear fractional programming problems, the proposed global optimization algorithm branches hyper-rectangles in the p-dimensional space; because p is much less than n and p is not more than 10, the proposed algorithm is suitable for solving large-scale problems. The variables on which the two algorithms branch lie in different spatial dimensions: BARON branches on the n-dimensional decision space in which the decision variable is located, while the branching process of the algorithm OSBBA is carried out in the p-dimensional output-space. It can also be seen from Tables 6-9 that when the number p of ratio terms is much smaller than the dimension n of the decision variable, the algorithm OSBBA performs better than BARON. This is because, for higher-dimensional problems, BARON needs to solve more and higher-dimensional subproblems during initialization, while the algorithm OSBBA only needs to solve 2p n-dimensional subproblems, which greatly reduces the amount of computation. This is why our algorithm OSBBA branches in the p-dimensional space.
In summary, in terms of the demands of real problems, the number p of ratios does not exceed 10 in the linear fractional programming problems to be solved, and the size of p is much smaller than the dimension n of the decision variable. In the branching process, the numbers of vertices of the rectangles to be divided are 2^p and 2^n, respectively. In the case p ≪ n, the branching of our algorithm OSBBA can always be completed quickly, while the methods in the software package BARON and in [18] incur much greater branching complexity. Therefore, the computation required by the branch search in R^p is more economical than that in R^n. In the case p ≪ n, our method is more effective in solving large-scale problems than the method in [18] and the software package BARON. At the same time, when the results of OSBBA and BMIBNB are compared, the latter is sensitive to the size of n, which once again illustrates the characteristics of the algorithm in this paper.

Conclusions
In this paper, a deterministic method is proposed for linear fractional programming problems. It is based on a linear relaxation problem constructed from the positive and negative coefficients of the objective, and the corresponding branch-and-bound algorithm OSBBA is given. In Section 6, the feasibility and effectiveness of the algorithm for solving linear fractional programming problems are fully illustrated by numerical experiments, which also show that our algorithm OSBBA is more effective than BARON, BMIBNB, and the method in [18] when applied to high-dimensional problems with p ≪ n. In recent years, the development of multi-objective programming has become increasingly rapid; extending the method of this paper to multi-objective linear fractional programming is a direction for future work.

Table 2 .
The results of random calculations for Example 12.

Table 3 .
The results of random calculations for Example 12.

Table 4 .
The results of random calculations for Example 12.

Table 5 .
The results of random calculations for Example 12.

Table 6 .
The results of random calculations for Example 13.

Table 7 .
The results of random calculations for Example 13.

Table 8 .
The results of random calculations for Example 13.

Table 9 .
The results of random calculations for Example 13.