Output-Space Branch-and-Bound Reduction Algorithm for a Class of Linear Multiplicative Programs

In this paper, a new relaxation bounding method is proposed for a class of linear multiplicative programs. Although 2p − 1 auxiliary variables are introduced in the construction of the equivalent problem, the branching process of the algorithm is carried out only in a p-dimensional space. In addition, a hyper-rectangle reduction technique is given that greatly improves the convergence rate. Furthermore, we construct an output-space branch-and-bound reduction algorithm based on solving a series of linear programming subproblems, and we prove the convergence and analyze the computational complexity of the algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, we carried out a series of numerical experiments and analyzed the advantages and disadvantages of the algorithm based on the numerical results.


Introduction
In this study, we consider the following linear multiplicative program (LMP): min f (x) = ∏ p j=1 (c T j x + d j ) subject to x ∈ X, where the feasible domain X = {x ∈ R n | Ax ≤ b} is n-dimensional, nonempty, and bounded; p ≥ 2, A ∈ R m×n , b ∈ R m , c j ∈ R n , d j ∈ R, and c T j x + d j > 0. The linear multiplicative program arises in many application areas, for example, financial optimization [1,2], microeconomics [3], robust optimization [4], decision tree optimization [5], multiple-objective decision making [6,7], VLSI chip design [8], optimal packing and layout [9], and control problems [10][11][12][13][14]. It is well known that the (LMP) problem lacks convexity and related structural properties commonly exploited by solvers, which makes it an important challenge to solve this kind of problem. In addition, we note that the (LMP) problem is closely related to the linear maximum multiplicative programming problem (see [14][15][16]). Specifically, the linear maximum multiplicative programming problem is obtained by changing the min in the objective function of (LMP) to max. Some scholars have shown that the (LMP) problem is NP-hard [17], whereas the linear maximum multiplicative programming problem can be solved in polynomial time, since maximizing a product of positive affine functions amounts to maximizing the concave function ∑ p j=1 log(c T j x + d j ). In the remainder of this paper, the proposed relaxation and algorithm are presented and the characteristics of our algorithm are analyzed; finally, the method of this paper is briefly reviewed.
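As a concrete illustration of the problem class, a tiny made-up instance of (LMP) can be set up and probed with ordinary linear programs. All data below (A, b, c, d) are purely hypothetical, and scipy.optimize.linprog stands in for any LP solver; minimizing a single factor over X is the kind of LP used later to build the initial output box.

```python
import numpy as np
from scipy.optimize import linprog

# A small illustrative (LMP) instance with n = 2 variables, m = 3
# constraints, and p = 2 affine factors (all data are made-up).
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 0.0]])
b = np.array([4.0, 1.0, 0.0])
c = np.array([[1.0, 0.5], [0.5, 1.0]])   # rows are c_1, c_2
d = np.array([1.0, 2.0])                  # keeps c_j^T x + d_j > 0 on X

def f(x):
    """Objective of (LMP): the product of the p affine factors."""
    return np.prod(c @ x + d)

# Minimizing one factor over X is an ordinary LP; this gives the lower
# endpoint of that factor's output range.
res = linprog(c[0], A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.fun + d[0])  # lower bound of c_1^T x + d_1 over X
```

The same LP with the negated objective gives the upper endpoint of the factor's range over X.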

Convert (LMP) into an Equivalent Problem (EP)
In this subsection, we show how to convert the problem (LMP) into a non-convex programming problem (EP), and introduce the (2p − 1)-dimensional vector y to obtain the initial hyper-rectangle Y 0 .
Theorem 1. x * is a globally optimal solution of the problem (LMP) if and only if (x * , y * ) is a globally optimal solution of the problem (EP), where y * j = c T j x * + d j , j = 1, 2, · · · , p and y * p+s = y * p+s−1 y * p−s , s = 1, 2, · · · , p − 1.
Proof of Theorem 1. If x * is a globally optimal solution of the problem (LMP), then the above equations must be valid. Therefore, with x * ∈ X and y * ∈ Y, a globally optimal solution of problem (EP) can be obtained, namely (x * , y * ).
On the other hand, we assume that (x * , y * ) is a globally optimal solution of the (EP) problem; then y * j = c T j x * + d j , j = 1, 2, · · · , p, and y * p+s = y * p+s−1 y * p−s , s = 1, 2, · · · , p − 1, must also hold. For an arbitrary feasible solution x of the problem (LMP), if y j = c T j x + d j , j = 1, 2, · · · , p, and y p+s = y p+s−1 y p−s , s = 1, 2, · · · , p − 1, then (x, y) is a feasible solution of the (EP) problem with objective function value h(x, y). By the optimality of (x * , y * ) and the feasibility of (x, y), we have h(x * , y * ) ≤ h(x, y), that is, f (x * ) ≤ f (x). From this inequality, x * is a globally optimal solution of the problem (LMP). This completes the proof.
Combined with (1), we notice that the objective function of the problem (EP) has the property h(x, y) = C T (x T , y T ) T = y 2p−1 = y 2p−2 y 1 = · · · = ∏ p j=1 y j = ∏ p j=1 (c T j x + d j ) = f (x). This property ensures that h(x, y) can be replaced by f (x) in the following sections, that is, h(x, y) = f (x).
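The lifting that produces y from x can be written out directly. The sketch below uses made-up data for p = 3 factors (c, d, and the test point are ours): the first p entries of y hold the factor values, the remaining p − 1 entries accumulate the partial products, and the last entry y 2p−1 equals the original objective f (x), as the identity above requires.

```python
import numpy as np

# Made-up instance with p = 3 affine factors in n = 2 variables.
c = np.array([[1.0, 0.5], [0.5, 1.0], [2.0, 1.0]])
d = np.array([1.0, 2.0, 0.5])
p = len(d)

def lift(x):
    """Map x to the (2p-1)-vector y used by the equivalent problem (EP)."""
    y = np.empty(2 * p - 1)
    y[:p] = c @ x + d                      # y_j = c_j^T x + d_j
    for s in range(1, p):                  # y_{p+s} = y_{p+s-1} * y_{p-s}
        y[p + s - 1] = y[p + s - 2] * y[p - s - 1]
    return y

x = np.array([1.0, 1.0])
y = lift(x)
assert np.isclose(y[-1], np.prod(c @ x + d))   # h(x, y) = y_{2p-1} = f(x)
```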
Although the constraint y p+s = y p+s−1 y p−s , s = 1, 2, · · · , p − 1 of (EP) is still nonlinear, we can solve the problem (EP) with a branch-and-bound algorithm based on rectangular subdivisions of the set Y 0 =Ŷ 0 ×Ỹ 0 . Let Ξ = {Y k : k = 1, 2, · · · , K} denote a rectangular partition of Y 0 , i.e., Y 0 = ∪ K k=1 Y k with the interiors of the Y k pairwise disjoint. In the process of subdividing a rectangle Y k ⊆ Y 0 , we must point out that the rectangle Ŷ k forming the first part of Y k is subdivided by the standard bisection method, and then Ỹ k , the second part, is compressed directly. This compression method is described in detail below. Our rectangular branch-and-bound algorithm systematically reduces the rectangular region containing y * by repeating seven basic steps.

Output-Space Branch-and-Bound Reduction Procedure
Step 0. (Initialization) Set the tolerance ε > 0. The initial upper bound UB 0 , lower bound LB 0 , best solution x * , and (x * , y * ) are obtained by solving the initial lower bound subproblem over Y 0 .
Step 1. (Termination criteria) If UB k − LB k ≤ ε or Ξ = ∅, then the algorithm terminates and immediately outputs the ε-globally optimal solution (x * , y * ) and the ε-globally optimal value h(x * , y * ) of the (EP) problem, together with the ε-globally optimal solution x * and the ε-globally optimal value f (x * ) = h(x * , y * ) of (LMP). Otherwise, go to Step 2.
Step 2. Select an appropriate Y k from Ξ and set Ξ = Ξ\Y k , where Y k satisfies LB k < UB k − ε and (x * , y * ) is the best feasible solution found thus far.
Step 3. (Branching operation) Divide Y k into two rectangles Y k1 and Y k2 .
Step 4. (Rectangle-reducing operation) Using the rectangle-reduction technique, the two rectangles generated by Step 3 are refined separately, and the index set of the rectangles remaining after the reduction is denoted by Γ; obviously, |Γ| ≤ 2.
Step 5. (Bounding operation) Compute the lower bound LB(Y ki ) (i ∈ Γ) on the minimum value of h over Y ki ∩ {(x, y) ∈ R n+2p−1 : y j = c T j x + d j , y p+s = y p+s−1 y p−s , j = 1, 2, · · · , p, s = 1, 2, · · · , p − 1, x ∈ X}.
Step 6. (Updating the upper and lower bounds) The new feasible solution is used to update the upper bound, and the smallest optimal value among all remaining subproblems is chosen as the new lower bound.
Obviously, Step 5 is the key to improving the efficiency of the algorithm. In the next subsection, we give a linear relaxation subproblem of the problem (EP) to provide an efficient lower bound on the optimal value.

Novel Linear Relaxation Approach
An important operation of the branch-and-bound procedure for solving (EP) is to establish and solve a series of lower-bounding relaxation problems of (EP) over Y k ∈ Ξ = {Y k : k = 1, 2, · · · , K}. We then define the associated subproblems of (EP), denoted (EP k ), as follows: The main idea of our relaxation method is to linearize the nonlinear constraints of (EP k ) and finally obtain its linear lower-bounding relaxation problem. Next, we give Theorem 2, which is associated with the proposed linearization method.
Then, the following conclusion holds. Proof of Theorem 2. (a) Through the linear underestimation and overestimation functions of the univariate quadratic function ϕ(z) = z 2 over the interval [z, z], we have ϕ l (y) ≤ ϕ(y) ≤ ϕ u (y).
Since ∆(z) is a convex function of the variable z over the interval [z, z], the maximum value of ∆(z) is attained at an endpoint z or z. Similarly, by using (8) and (9), we obtain the corresponding bound. Through Equation (8) and the definitions of the functions ϕ(z + δw), ϕ l (z + δw), ϕ u (z − δw), and ϕ(z − δw), we can draw the first conclusion; similarly, through Equation (9) and the definitions of the functions ϕ(z + δw), ϕ u (z + δw), ϕ l (z − δw), and ϕ(z − δw), we obtain the second. Then, by using (10) and (11), the conclusion is proved.
It is noted that, at the rightmost end of inequalities (10) and (11), the size of the positive number δ affects the upper and lower estimates of the function ψ(z, w). Let a = z − z, b = w − w, and g(δ) = (a + δb) 2 /(8δ); obviously a > 0, b > 0. From the derivative g (δ) = (δ 2 b 2 − a 2 )/(8δ 2 ) of g(δ), we know that g(δ) is convex on the interval (0, +∞) and attains its minimum value g(a/b) = ab/2 at δ = a/b; with this choice, the gap between ψ(z, w) and its upper and lower estimation functions is smallest, and the upper and lower estimates are most stable. For any sub-rectangle Y k ⊆ Ξ and each s = 1, 2, · · · , p − 1, without loss of generality, we adopt the following definitions. Using Theorem 2, for each s = 1, 2, · · · , p − 1, let z = y p−s and w = y p+s−1 , and denote the resulting linear under-estimator of ψ s (y p−s , y p+s−1 ) by ψ k s (y p−s , y p+s−1 ).
Theorem 3. For any sub-rectangle Y k ⊆ Ξ and y ∈ Y k , let ε = max{y k j − y k j : j = 1, 2, · · · , p}. Then y p+s − ψ k s (y p−s , y p+s−1 ) → 0, s = 1, 2, · · · , p − 1, as ε → 0.
Proof of Theorem 3. According to Theorem 2 and the definition of ψ k s (y p−s , y p+s−1 ), we easily know that 0 ≤ y p+s − ψ k s (y p−s , y p+s−1 ) ≤ (y k p−s − y k p−s + δ k s (y k p+s−1 − y k p+s−1 )) 2 /(8δ k s ) = |y k p−s − y k p−s | · |y k p+s−1 − y k p+s−1 |/2. Since the rightmost term tends to 0 as ε → 0, the conclusion follows.
Below, using the conclusions given above, the linear relaxation problem (LRP k ) is obtained by relaxing the nonlinear constraints of the equivalent problem (EP k ), and is expressed as follows: Of course, the relaxation subproblem (LRP 0 ) defined on the rectangle Y 0 is of the same form. Theorem 3 shows that the feasible domain of the linear relaxation subproblem described above gradually approximates the feasible domain of the equivalent problem (EP) as the algorithm refines the first part Ŷ 0 of the hyper-rectangle Y 0 .
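The δ-analysis above can be checked numerically. The sketch below (bounds on z and w are made-up) builds linear under- and over-estimators of the product zw from the identity zw = ((z + δw)² − (z − δw)²)/(4δ), taking the midpoint tangent as the under-estimator of the square and the secant as the over-estimator; this is our reading of ϕ l and ϕ u , not a verbatim reproduction of the paper's formulas. With δ = a/b the gap between the two estimators equals ab/2, matching g(a/b) above.

```python
import numpy as np

zl, zu = 0.5, 2.0          # bounds on z  (a = zu - zl)
wl, wu = 1.0, 4.0          # bounds on w  (b = wu - wl)
a, b = zu - zl, wu - wl
delta = a / b              # minimizer of g(delta) = (a + delta*b)^2 / (8*delta)

def tangent_under(v, lo, hi):   # linear under-estimator of v^2 on [lo, hi]
    m = 0.5 * (lo + hi)
    return 2.0 * m * v - m * m

def secant_over(v, lo, hi):     # linear over-estimator of v^2 on [lo, hi]
    return (lo + hi) * v - lo * hi

def psi_bounds(z, w):
    """Linear lower/upper estimates of the product z*w on the box."""
    s, t = z + delta * w, z - delta * w
    s_lo, s_hi = zl + delta * wl, zu + delta * wu
    t_lo, t_hi = zl - delta * wu, zu - delta * wl
    lower = (tangent_under(s, s_lo, s_hi) - secant_over(t, t_lo, t_hi)) / (4 * delta)
    upper = (secant_over(s, s_lo, s_hi) - tangent_under(t, t_lo, t_hi)) / (4 * delta)
    return lower, upper

# The estimates sandwich z*w over the whole box; the gap is a*b/2.
for z in np.linspace(zl, zu, 7):
    for w in np.linspace(wl, wu, 7):
        lo, hi = psi_bounds(z, w)
        assert lo <= z * w + 1e-12 and z * w <= hi + 1e-12
```

Both estimators are affine in (z, w), so imposing them on y p+s yields linear constraints, which is exactly what (LRP k ) needs.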

Remark 2.
If a linear function is added to the objective function of the problem (LMP), then there is an equivalent transformation and proof similar to Section 2.2, because we only transform the product term of the objective function. Of course, there is also a similar relaxation subproblem in this section, and the rectangular partition method of the next subsection is likewise similar; the reduction method for the hyper-rectangle, however, is slightly different, and we give it after Section 2.4.

Subdivision and Refinement of Hyper-Rectangle
The branching operation is also indispensable in a branch-and-bound procedure. In this subsection, we give the branch-refinement rule for any sub-rectangle Y k =Ŷ k ×Ỹ k ⊆ Y 0 . According to Equations (1) and (4) and y p+s = y p+s−1 y p−s , s ∈ {1, 2, · · · , p − 1}, the generation of the rectangle Ỹ k mainly depends on the successive multiplication of the coordinate components of the lower-left vertex and the upper-right vertex of Ŷ k . Therefore, we use the standard bisection method only to segment the former part Ŷ k of the sub-rectangle Y k , and then refine the latter part Ỹ k according to Equation (4). The specific operations of Step 3 are as follows: (i) Bisect the selected edge of Ŷ k , so that Ŷ k is divided into two sub-rectangles Ŷ k1 and Ŷ k2 , whose forms can be expressed as Cartesian products.
(ii) For the segmentation and refinement of the hyper-rectangle Ỹ k , the upper-right vertex of Ŷ k1 and the lower-left vertex of Ŷ k2 are used, respectively. In this way, we finally obtain two hyper-rectangles, Ỹ k1 and Ỹ k2 , whose Cartesian-product forms follow from y p+s = y p+s−1 y p−s , s ∈ {1, 2, · · · , p − 1}. Although Y k is a (2p − 1)-dimensional hyper-rectangular space, it can be seen from the branch-refinement method above that we only branch the rectangle Ŷ k , and the bounds of Ỹ k1 (Ỹ k2 ) can be obtained directly from the bounds of Ŷ k1 (Ŷ k2 ); thus, the branching process of the branch-and-bound algorithm is completed in the p-dimensional space Ŷ k .
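Assuming all factor bounds are positive (as guaranteed by c T j x + d j > 0), the two-part rule above can be sketched as follows; the bisected edge is taken to be the longest one, and all variable names and the demo bounds are ours.

```python
import numpy as np

def refine_tail(lo, hi, p):
    """Rebuild the bounds of y_{p+s} from the bounds of y_1, ..., y_p
    by interval multiplication (valid because all bounds are positive)."""
    lo, hi = lo.copy(), hi.copy()
    for s in range(1, p):                  # y_{p+s} = y_{p+s-1} * y_{p-s}
        lo[p + s - 1] = lo[p + s - 2] * lo[p - s - 1]
        hi[p + s - 1] = hi[p + s - 2] * hi[p - s - 1]
    return lo, hi

def branch(lo, hi, p):
    """Bisect the longest edge of the first p coordinates (Yhat^k),
    then refine the tail coordinates (Ytilde^k) of both children."""
    j = int(np.argmax(hi[:p] - lo[:p]))
    mid = 0.5 * (lo[j] + hi[j])
    lo1, hi1 = lo.copy(), hi.copy(); hi1[j] = mid
    lo2, hi2 = lo.copy(), hi.copy(); lo2[j] = mid
    return refine_tail(lo1, hi1, p), refine_tail(lo2, hi2, p)

# p = 2: y has 2p - 1 = 3 coordinates; the third is y_3 = y_2 * y_1.
(lo1, hi1), (lo2, hi2) = branch(np.array([1.0, 2.0, 0.0]),
                                np.array([3.0, 4.0, 0.0]), 2)
```

Only the first p coordinates are ever split; the tail bounds are recomputed, which is the sense in which the search stays p-dimensional.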

Reduction of the Hyper-Rectangle
In this subsection, we give a reduction technique for hyper-rectangles, which deletes a sub-rectangle Y k that does not contain the globally optimal solution, or deletes the part of a sub-rectangle Y k in which no globally optimal solution lies. In this way, the number of rectangles in the set Ξ is reduced, or the rectangles in Ξ are refined, and the bounding operation is thereby accelerated.
Without loss of generality, it is assumed that the current hyper-rectangle to be reduced is Y k ∈ Ξ, and that the best objective function value obtained by the algorithm thus far is UB k . Because y 2p−1 = ∏ p j=1 y j ≤ UB k , for each t = 1, 2, · · · , p and v = 1, 2, · · · , p − 2, we define the quantities α k t , β k v , and γ k . It is easy to see that, if the problem (EP) has a globally optimal solution in the rectangle Y k , then the necessary condition y k 2p−1 ≤ y 2p−1 ≤ γ k must hold. This necessary condition is used in the following two rectangle-reduction theorems.
In view of the fact that the hyper-rectangle Y k consists of the two parts Ŷ k and Ỹ k , we reduce it in two steps. To this end, we give Theorems 4 and 5, which together set forth the hyper-rectangle reduction technique of this section.
Theorem 4. If α k t < y k t for some t ∈ {1, 2, · · · , p}, the original problem (EP) has no globally optimal solution on the rectangle Y k ; otherwise, if α k t < y k t , the rectangle Y k1 does not contain a globally optimal solution of the problem (EP).
Proof of Theorem 4. If there is a t ∈ {1, 2, · · · , p} that satisfies α k t < y k t , then there is no globally optimal solution of (EP) on Y k . Next, we prove that γ k < y 2p−1 for each y ∈ Y k1 . When y ∈ Y k1 , consider the tth element y t of y; because y t ∈ (α k t , y k t ] ∩ Ŷ k t , it follows from the definition of α k t that γ k < y 2p−1 for all y ∈ Y k1 . Therefore, Y k1 contains no globally optimal solution of the problem (EP).
To facilitate the description of Theorem 5, we still denote the hyper-rectangle reduced by Theorem 4 by Y k . It can be seen that the second part of Y k is unchanged, and Theorem 5 is given below to reduce Ỹ k .
Proof of Theorem 5. First, according to the definition of β k v , we have the stated inequality. Second, we prove that, for any y ∈ Y k2 , there is γ k < y 2p−1 . If v = p − 1, then obviously y p+v = y 2p−1 > β k 0 = γ k , and Y k2 does not contain the globally optimal solution of the problem (EP). If there exists v ∈ {1, 2, · · · , p − 2} such that β k p−v−1 < y k p+v , we continue by considering the (p + v)th element of y. Therefore, there is no globally optimal solution of the original problem (EP) on the hyper-rectangle Y k2 .
According to Theorems 4 and 5, we can construct the following reduction technique for the hyper-rectangle Y k , which makes the compressed hyper-rectangle thinner and removes the part of Y k that does not contain the globally optimal solution; the search space required for the algorithm to solve the problem (EP) is thereby reduced, which speeds up the convergence of the algorithm. In addition, if the objective function of the problem (LMP) contains an additional linear function, then, for the above reduction method to remain valid, we need to adjust UB k , because it is affected by the additional linear term. Assuming that this additional linear term is e T x + f , then, before the algorithm begins, we need to solve the linear programming problem ξ = min{e T x + f : x ∈ X}. Of course, e is an n-dimensional column vector and f is a constant. Obviously, the adjusted UB k also satisfies the conditions of the above two theorems.
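The coordinate-clipping idea behind this reduction can be sketched as follows. This is a hedged simplification: the paper's exact α k t , β k v , γ k thresholds are not reproduced; instead we use the plain consequence of the necessary condition y 2p−1 = ∏ j y j ≤ UB k when every factor is positive, namely that any box still holding a global optimum must satisfy hi[t] ≤ UB k / ∏ j≠t lo[j], and that a box whose smallest attainable product already exceeds UB k can be deleted outright.

```python
import numpy as np

def reduce_box(lo, hi, UB):
    """Clip the factor box [lo, hi] (all entries > 0) using prod(y) <= UB;
    return None when the whole box can be discarded."""
    lo, hi = lo.copy(), hi.copy()
    if np.prod(lo) > UB:
        return None                        # cheapest product already exceeds UB
    for t in range(len(lo)):
        cap = UB / (np.prod(lo) / lo[t])   # UB / product of the other lower bounds
        hi[t] = min(hi[t], cap)            # Theorem 4 in spirit: clip the upper bound
    return lo, hi

lo, hi = np.array([1.0, 2.0]), np.array([4.0, 5.0])
print(reduce_box(lo, hi, 6.0))   # hi[0] is clipped from 4.0 down to 3.0
```

The clip never cuts off a point with ∏ j y j ≤ UB, so no candidate optimal solution is lost.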

Output-Space Branch-and-Bound Reduction Algorithm
In this subsection, we combine the output-space branch-and-bound reduction procedure with the bounding operation, branching operation, and rectangle-reducing operation, and then construct a new deterministic global optimization algorithm for solving the problem (EP), namely the Output-Space Branch-and-Bound Reduction Algorithm (OSBBRA).
To describe the algorithm smoothly, we explain the relevant symbols at iteration k as follows: Y k is the hyper-rectangle to be subdivided in the current iteration; Θ is the set of feasible solutions of the problem (EP) stored at the current iteration; Ξ is the set of sub-rectangles remaining after the pruning step; UB k is the upper bound of the globally optimal value of the problem (EP) at the current iteration; LB k is the lower bound of the globally optimal value of the problem (EP); and LB(Y k ) and (x, y) denote the optimal value and optimal solution of the subproblem (LRP k ) on the rectangle Y k , respectively. In addition, to any feasible point x of (LMP) there corresponds a feasible point (x, ÿ) of (EP), with ÿ = (ÿ 1 , ÿ 2 , · · · , ÿ p , ÿ p+1 , ÿ p+2 , · · · , ÿ 2p−1 ) T , ÿ j = c T j x + d j , j = 1, 2, · · · , p, and ÿ p+s = ÿ p+s−1 ÿ p−s , s = 1, 2, · · · , p − 1. The specific steps of the algorithm (OSBBRA) are as follows:
Step 0. (Initialization) Set the tolerance ε > 0. The initial hyper-rectangle Y 0 is constructed by using Equations (2) and (3); each feasible solution x of (LMP) obtained from the linear programming problems in (2) corresponds to a feasible solution (x, ÿ) of (EP), and all such feasible solutions of (EP) are stored in the set Θ. Solve the initial subproblem (LRP 0 ) on the hyper-rectangle Y 0 ; the corresponding optimal value and solution are LB(Y 0 ) and (x 0 , y 0 ), respectively. Let Θ = Θ ∪ {(x 0 , ÿ 0 )}. Thus, LB 0 = LB(Y 0 ) can be used as the initial lower bound of the globally optimal value of the problem (EP). The initial upper bound is UB 0 = min{h(x, y) : (x, y) ∈ Θ}. The initial best solution of the problem (EP) is (x * , y * ) = arg UB 0 . If UB 0 − LB 0 ≤ ε, then stop: the ε-globally optimal solution of the problem (EP) is (x * , y * ). Otherwise, set Ξ = {Y 0 }, H = ∅, Θ = ∅, the iteration number k = 1, and go to Step 2.
Step 1. (Termination criteria) If UB k − LB k ≤ ε or Ξ = ∅, then the algorithm terminates and immediately outputs the ε-globally optimal solution (x * , y * ) and the ε-globally optimal value h(x * , y * ) of the (EP) problem, together with the ε-globally optimal solution x * and the ε-globally optimal value f (x * ) = h(x * , y * ) of (LMP). Otherwise, go to Step 2.
Step 2. Select from Ξ the sub-rectangle Y k whose lower bound satisfies LB k = LB(Y k ), set Ξ = Ξ\Y k , and then go to Step 3.
Step 3. (Branching operation) Using the subdivision and refinement method in Section 2.3, divide Y k into two sub-rectangles Y k1 and Y k2 whose interiors are disjoint. Then, go to Step 4.
Step 4. (Rectangle-reducing operation) Through the reduction technique in Section 2.4, we compress the two sub-rectangles Y k1 and Y k2 obtained in the previous step, and the index set of the sub-rectangles remaining after compression is denoted by Γ; obviously, |Γ| ≤ 2. Finally, if H = ∅ and Ξ ≠ ∅, return to Step 2; if H = ∅ and Ξ = ∅, return to Step 1; otherwise, set Ξ = Ξ ∪ H and return to Step 1.
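The control flow above (select, branch, probe, bound, prune) can be sketched as a compact loop. The sketch below is heavily hedged: it runs on a made-up p = 2 instance, and it replaces the paper's linear relaxation (LRP k ) by the crude product-of-lower-bounds estimate, so it mirrors only the structure of OSBBRA, not its exact bounding rule; scipy.optimize.linprog stands in for any LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: X = {x : Ax <= b} (includes x >= 0), p = 2 factors.
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 1.0, 0.0, 0.0])
c = np.array([[1.0, 0.5], [0.5, 1.0]])
d = np.array([1.0, 2.0])
p, n = 2, 2
xb = [(None, None)] * n

def factor_bound(j, sign):
    """Endpoint of the range of c_j^T x + d_j over X (sign = +1 / -1)."""
    r = linprog(sign * c[j], A_ub=A, b_ub=b, bounds=xb)
    return sign * r.fun + d[j]

lo0 = np.array([factor_bound(j, +1.0) for j in range(p)])
hi0 = np.array([factor_bound(j, -1.0) for j in range(p)])

def probe(lo, hi):
    """Some x in X whose factor values lie in [lo, hi], or None."""
    r = linprog(np.zeros(n), A_ub=np.vstack([A, c, -c]),
                b_ub=np.concatenate([b, hi - d, d - lo]), bounds=xb)
    return r.x if r.status == 0 else None

eps, UB, best = 1e-2, np.inf, None
work = [(float(np.prod(lo0)), lo0, hi0)]      # (lower bound, box)
while work:
    work.sort(key=lambda t: t[0])
    LB, lo, hi = work.pop(0)                  # least lower bound first
    if LB > UB - eps:                         # termination criterion
        break
    j = int(np.argmax(hi - lo))               # bisect the longest edge
    mid = 0.5 * (lo[j] + hi[j])
    for half in (0, 1):
        clo, chi = lo.copy(), hi.copy()
        (chi if half == 0 else clo)[j] = mid
        x = probe(clo, chi)
        if x is None:
            continue                          # child holds no feasible point
        val = float(np.prod(c @ x + d))
        if val < UB:
            UB, best = val, x                 # update the incumbent
        clb = float(np.prod(clo))
        if clb < UB - eps:                    # keep only promising boxes
            work.append((clb, clo, chi))
print("approx. optimum:", round(UB, 4))
```

For this toy instance the minimum is attained at x = (0, 0) with f (x) = 2; the loop closes the gap to within eps after a few dozen LPs.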

Remark 3. In Step 5, after each compression we save the hyper-rectangles Y ki with LB(Y ki ) < UB k − ε into Ξ, which implements the pruning operation of the algorithm. Remark 5. The branch search space of our OSBBRA algorithm is p-dimensional. When p is much smaller than the dimension n of the decision variables, the convergence rate of the algorithm is relatively faster than that of a search in the n-dimensional decision space.

Analysis of the Computational Complexity of the Algorithm
In this subsection, we deduce the maximum number of iterations of the proposed algorithm by analyzing its computational complexity. For convenience of exposition, we first define the longest edge of the first part of the rectangle Y k =Ŷ k ×Ỹ k ⊆ Y 0 , that is, the longest edge of Ŷ k , by (Ŷ k ) = max{y k j − y k j : j = 1, 2, · · · , p}.
Lemma 1. For a given convergence tolerance ε ≥ 0, if there is a rectangle Y k =Ŷ k ×Ỹ k satisfying (Ŷ k ) ≤ ς when the algorithm reaches the kth cycle, then we have UB − LB(Y k ) ≤ ε, where LB(Y k ) is the optimal value of the problem (LRP k ) and UB denotes the current best upper bound for the equivalent problem (EP).
Proof of Lemma 1. Assume that (x k , y k ) is the optimal solution of the linear relaxation problem (LRP k ); obviously, y k j = c T j x k + d j , j = 1, 2, · · · , p. In addition, let y k p+s = y k p+s−1 y k p−s , s = 1, 2, · · · , p − 1, and y k = (y k 1 , y k 2 , · · · , y k p , y k p+1 , y k p+2 , · · · , y k 2p−1 ) T ; then (x k , y k ) is a feasible solution of the equivalent problem (EP k ). By using the definitions of LB(Y k ) and UB, and from Equations (1) and (18)–(22), we obtain UB − LB(Y k ) ≤ ∑ p−1 s=1 (|y k p−s − y k p−s | · |y k p+s−1 − y k p+s−1 |)/2. Then, by using (17) and combining with (Ŷ k ) ≤ ς, we obtain UB − LB(Y k ) ≤ ε. This completes the proof of Lemma 1.
On the premise of Lemma 1, if (Ŷ k ) ≤ ς is satisfied, then the sub-rectangle Y k can be deleted. Thus, according to Step 5 of the algorithm, when every sub-rectangle Y k obtained by refining Y 0 satisfies (Ŷ k ) ≤ ς, the algorithm terminates. Therefore, we can derive the maximum number of iterations of the algorithm from Lemma 1; Theorem 6 gives the specific result.

Theorem 6.
For a given convergence tolerance ε ≥ 0, the maximum number of iterations required by the algorithm OSBBRA to obtain an ε-globally optimal solution of the problem (LMP) is given by the expression below, where ς is given by (21) and (22). In addition, Ŷ 0 = ∏ p j=1 Ŷ 0 j with Ŷ 0 j = [y 0 j , y 0 j ] is given by (2).

Proof of Theorem 6. It is assumed that
[y j , y j ] is the rectangle selected by the algorithm from Ξ when it cycles to a certain Step 3. It is supposed that, after k j iterations, there is a subinterval Ŷ k j j . Considering the branching process of Step 4, and combining (23) with (24), we obtain (25). Then, after K 1 = ∑ p j=1 k j iterations, the algorithm will generate at most K 1 + 1 rectangles, denoted by Y 1 , Y 2 , · · · , Y K 1 +1 , and they must all satisfy (Ŷ K 1 ) = (Ŷ K 1 +1 ) = max{y k j j − y k j j : j = 1, 2, · · · , p}. Now, put the K 1 + 1 rectangles into the set Ξ K 1 +1 ; according to (23), we let = (Ŷ K 1 ) = (Ŷ K 1 +1 ) in order to facilitate the description below. Combined with (27), it is obvious that Y K 1 and Y K 1 +1 will be thrown out of the set Ξ K 1 +1 after using Lemma 1 and Step 5 of the algorithm, because there is no globally optimal solution of the problem (EP) in Y K 1 or Y K 1 +1 . Furthermore, the remaining rectangles are placed in the set Ξ K 1 . Of course, the new set Ξ K 1 will continue to be considered. Next, let us focus on Y K 1 −1 . According to (25), combined with the branch rule of Section 2.3, Y K 1 −1 will immediately be divided into two sub-rectangles Y K 1 −1,1 and Y K 1 −1,2 satisfying (Ŷ K 1 −1 ) = 2 (Ŷ K 1 −1,1 ) = 2 (Ŷ K 1 −1,2 ) = 2 .
Therefore, Y K 1 −1 is thrown out of the set Ξ K 1 by using (28)–(30), and the algorithm iterates once more. At the same time, the remaining rectangles are put into the set Ξ K 1 −1 . Of course, Y K 1 −2 will also be immediately divided into Y K 1 −2,1 and Y K 1 −2,2 , and then both Y K 1 −2,1 and Y K 1 −2,2 must be divided once more to satisfy (28); that is, for Y K 1 −2 , the algorithm must iterate 2 2 − 1 = 3 times before Y K 1 −2 is thrown out of the set Ξ K 1 −1 . Then, the remaining rectangles are put into the set Ξ K 1 −2 . Similar to (31) and (32), for a rectangle Y t (t = 1, 2, · · · , K 1 − 1), we also have the analogous relation, and, according to (28) and (33), the algorithm must iterate at most 2 K 1 −t − 1 times before Y t is thrown out of its corresponding set Ξ t+1 .
Then, the remaining rectangles are put into the set Ξ t . Therefore, as t runs from K 1 − 1 down to 1, the algorithm iterates at most ∑ K 1 −1 t=1 (2 K 1 −t − 1) times. In addition, according to (26) and (34), the algorithm will then stop running by Step 5 of the algorithm.

Numerical Examples
In this section, we present a number of numerical experiments of different scales for the proposed algorithm, and we provide performance comparisons with previous methods for solving the problem (LMP).
The code of our algorithm was written in Matlab (2016a). All computations were carried out on a personal computer with an Intel(R) Core(TM) i5-4210M 2.60 GHz processor and 4 GB of memory, running Microsoft Windows 7. In the numerical experiments, the linprog solver of MATLAB (2016a) was used to solve all linear programming problems, while the quadratic convex programming problems in [27] were solved by the quadprog solver of MATLAB (2016a). We use the following notation:
From the literature [23], we know that Example 3 can be transformed into an equivalent form. Examples 4–6 are taken from [26,42], and the objective functions of Examples 5 and 6 can be transformed in a similar way. As shown in Table 1, the algorithm OSBBRA accurately obtains the globally optimal solutions of these eight low-dimensional examples, which shows that the algorithm is effective and feasible. We can see that the algorithm OSBBRA needs only one iteration to solve Example 2 and takes the least time compared with the other algorithms. For Example 3, the time and the number of iterations are the largest, but only five iterations are required. Examples 1 and 4 need only two iterations each, and the algorithm in this paper needs very little time to solve them. In solving Example 1, the running speed of this algorithm is much faster than those in Refs. [23,25]. For Examples 5 and 6, because of the special structure of these cases, OSBBRA uses more iterations and time than the other algorithms, but it still obtains the globally optimal solution in less than 0.5 s, which indicates that our relaxation bounding method can be further generalized. Examples 7 and 8 are two examples constructed by us, and the results of this algorithm were compared with those of the software package BARON [43]. The computed results of this algorithm are the same as those of BARON, but the CPU running time of our algorithm is smaller; in particular, the number of iterations used by BARON in solving Example 8 is much higher than that of the algorithm OSBBRA.
Table 1 (excerpt): [26] (1.3148, 0.1396, 0.0, 0.4233), 0.8902, 1, 0.0601, 10 −6 ; [39] (1.3148, 0.1396, 0.0000, 0.4233), 0.890190, 3, 0.047, 0.05.
The above eight small-scale examples only illustrate the validity and feasibility of the algorithm OSBBRA; they do not reveal its other performance characteristics in solving the problem (LMP). Therefore, in the next subsection, we describe the other features of the algorithm through a series of random tests. The experimental results of the random tests are presented in Tables 2–9, and the information disclosed in these eight tables is analyzed to illustrate the performance and applicable conditions of the algorithm.

Testing of Random Problems
To test the other features of the algorithm, we used three random generation schemes to produce random (LMP) instances with constraints ∑ n i=1 A si x i ≤ b s , s = 1, 2, · · · , m, and x i ≥ 0, i = 1, 2, · · · , n.
In (LMP1) and (LMP2), A si is randomly generated in the interval [−1, 1], and the right-hand side b s is generated by ∑ n i=1 A si + 2π, where π is randomly generated in the interval [0, 1]. This is consistent with the methods in Refs. [19,41].
The A si , b s , and c j of (LMP3) were randomly generated in the interval [0, 100], which agrees with the random generation scheme in Ref. [19].
In addition, for all problems, we solved 10 different random instances for each size and report statistical information about the results. The tolerance ε was set to 10 −6 for all random problems.
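The (LMP1)/(LMP2) generation scheme above can be sketched as follows. The ranges for c j and d j are our assumptions (the text specifies them only for (LMP3)); the choice b s = ∑ i A si + 2π with π ∈ [0, 1] guarantees that the all-ones point is feasible, so every generated instance is nonempty.

```python
import numpy as np

def random_lmp1(p, m, n, rng):
    """Random (LMP1)-type instance: A uniform on [-1, 1],
    b_s = sum_i A_si + 2*pi with pi uniform on [0, 1]."""
    A = rng.uniform(-1.0, 1.0, size=(m, n))
    b = A.sum(axis=1) + 2.0 * rng.uniform(0.0, 1.0, size=m)
    c = rng.uniform(0.0, 1.0, size=(p, n))   # factor coefficients (assumed range)
    d = rng.uniform(1.0, 2.0, size=p)        # keeps c_j^T x + d_j > 0 for x >= 0
    return A, b, c, d

rng = np.random.default_rng(0)
A, b, c, d = random_lmp1(3, 10, 20, rng)
assert np.all(A @ np.ones(20) <= b)          # the all-ones point is feasible
```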

Remark 8.
For each size (p, m, n), a set of random coefficients is generated in the given intervals to produce an instance of the problem, which is then solved by OSBBRA, BARON, and the algorithms of Refs. [27], [38], [19], and [41], respectively, to obtain the corresponding computational results.

Testing of Random Problem (LMP1)
For the problem (LMP1), we compared the algorithm OSBBRA with the algorithm in Ref. [27] and the Primal algorithm in Ref. [38], and used the optimal value obtained by the commercial software package BARON as a standard to evaluate the quality of the optimal solutions obtained by these three algorithms. For each triple (p, m, n), 10 instances were randomly generated and the calculated results were averaged. To measure the quality of an optimal solution, we used the following formula: where x * is the final optimal solution of each of the three algorithms and f (x * ) is the corresponding optimal value. To see the optimal-value quality clearly, we used Optimum.ratio × 10 5 , and the corresponding calculation results are listed in Table 2.
As can be seen in Table 2, in terms of the quality of the obtained optimal solution, the three algorithms are ranked from best to worst as OSBBRA, Ref. [38], and Ref. [27]. This means that the optimal value obtained by our algorithm is the most accurate. For the CPU running time of the algorithms, the following cases are observed: (i) For (p, m, n) = (2, 20, 200), (2, 30, 300), (2, 40, 400), the time spent, from least to most, is OSBBRA, Ref. [38], Ref. [27]; that is, the time taken by OSBBRA is the least of the three algorithms.
(ii) In the cases of (p, m, n) = (2, 10, 100) and p = 3, 4, the time taken by OSBBRA is the largest of the three algorithms, followed by the algorithm in Ref. [27], while the time of the algorithm in Ref. [38] is the least.
It can be seen that, in Case (ii), our algorithm OSBBRA takes the most time, but, in Case (i), OSBBRA takes less time than the other two algorithms. This is because, in the first case, p ≪ n, which better reflects the advantages of our algorithm; this can also be seen in the larger-scale experimental results in Table 3. This feature of the algorithm OSBBRA is also reflected in Tables 4 and 5. Of course, the data in Tables 8 and 9 also support this explanation, which we mention again below.

For the problem (LMP1), we also performed higher-dimensional numerical experiments and recorded the results in Table 3. In the high-dimensional examples, we did not use the software package BARON, because BARON takes a long time to solve the higher-dimensional (LMP1) problem, and, in Table 2, we only used the results of BARON to evaluate the optimal-value quality of the three algorithms. In Table 3, the optimal-value quality of our algorithm is still the best, followed by Refs. [38] and [27]. In particular, when solving the 4000-dimensional problem, the algorithm in Ref. [27] cannot find the optimal solution within 2400 s, and, when p = 6, 7, the algorithm in Ref. [38] also fails to find the optimal solution within 2400 s. In addition, the results in Tables 2 and 3 show that the computing power of the algorithm in Ref. [38] is better than that of Ref. [27]. The results in Table 3 show that, when (p, m, n) = (5, 50, 1000), OSBBRA takes less time than the other two algorithms, whereas, in the cases (p, m, n) = (6, 50, 1000), (7, 50, 1000), OSBBRA takes less time only for the larger-scale problems in [27]. Moreover, although the time spent by all three algorithms increases with the size of the problem, OSBBRA always takes less than 2400 s. This shows that, under certain circumstances, our algorithm is more suitable than the other algorithms for solving large-scale (LMP) problems.
The phenomenon reflected in the results of Tables 2 and 3 is mainly due to the characteristics of each algorithm. From Ref. [27], each iteration of its algorithm for solving the problem (LMP) requires solving a quadratic programming problem, which does take less time than OSBBRA on small-scale problems within a certain range, but takes longer than OSBBRA on some large-scale problems. The original algorithm in Ref. [38] needs to solve only two linear programming problems per iteration and indeed solves small-scale problems faster than the other two algorithms; however, it is a tangent-plane algorithm, and thus the number of constraints increases at each iteration. As the scale of the problem grows, its advantage disappears, because the accumulated constraints gradually increase the required computer storage and eventually slow down the computation. At the same time, the algorithms in Refs. [27,38] both need to store a large number of vertices, which also affects the running time. The algorithm OSBBRA needs to store hyper-rectangles of dimension 2p − 1, and at most two hyper-rectangles are added at each iteration. Through the pruning operation and the rectangle-reduction technique of the branch-and-bound algorithm, hyper-rectangles without the globally optimal solution are removed, so that computer storage is saved and the influence of storage on the performance of the algorithm is reduced as much as possible. Therefore, the computational performance of the algorithm OSBBRA is mainly affected by the length 2p − 1 of the hyper-rectangle.
Next, we performed additional numerical experiments on the problem (LMP1) with OSBBRA and recorded the results in Tables 4 and 5. The main purpose of Tables 4 and 5 is to explore how the number of hyper-rectangles stored by the computer varies with the size of (p, m, n) during the search for optimal solutions; the average running time and the average number of iterations are also recorded. The two groups of numerical experiments are calculated separately by OSBBRA, in order to observe how the sizes of p and n affect the performance (Ave.Time and Ave.Iter) of the algorithm. In the first group, we fix p to 2, 3, 4, 5, 6, and 7 in turn, take n = 2m, and let m run from 10 to 100 in steps of 10; the results are shown in Table 4. These results show that, for fixed p, the average CPU running time Ave.Time increases as the dimension n of the decision variable grows, whereas the average maximum number of nodes Ave.Node stored in the branch-and-bound tree and the average number of iterations Ave.Iter may either increase or decrease. For fixed (m, n), Ave.Time, Ave.Iter, and Ave.Node all increase with p; in particular, when p goes from 6 to 7, all three rise sharply, which indicates that the size of p has a strong effect on the performance of the algorithm. The results of Tables 2-4 thus show that both p and n affect the computation. For more extreme cases, we ran another set of experiments. As can be seen from the first four rows of Table 5, when (m, n) = (10, 2) is fixed and p takes the values 10, 20, 30, and 40 in turn, Ave.Time, Ave.Iter, and Ave.Node increase, and increase rapidly; in these cases p is larger than n.
From the last six rows of Table 5, we can also see that, when p is much smaller than n, the algorithm obtains the globally optimal solution of the problem (LMP) in a short time, and the performance of the algorithm is very sensitive to the size of p. This is mainly because our branching operation subdivides the p-dimensional space Ŷ^k (defined as in Section 2.3).

Testing of Random Problem (LMP2)
Through the previous experiments, we know how the computational behavior of OSBBRA is influenced by (p, m, n). Next, we conducted numerical experiments on the special scheme (LMP2), fixing p = 2 to observe the effect of (m, n) on the algorithm. For the randomly generated scheme (LMP2), OSBBRA was compared with the results reported in Refs. [19,41], and the related data are recorded in Table 6. Following the method in Ref. [19], the normalized CPU running time and number of iterations of the problem (LMP2), relative to the 10 × 20 problem, were obtained from the formula (Time or Iter on the m × n problem)/(Time or Iter on the 10 × 20 problem), and the data are recorded in Table 7. Table 6. Computational results on (LMP2) (p = 2) and comparison with results reported in Refs. [19,41].
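The normalization convention just described is a simple ratio; as a small sketch (the numbers plugged in below are only the matrix sizes named later in this section, not values recomputed from the tables):

```python
def normalized(value_mn, value_base):
    """Normalized Time (or Iter) as used in Table 7: the value on the
    m x n problem divided by the value on the basic 10 x 20 problem."""
    return value_mn / value_base

# Growth of the constraint-matrix size for (m, n) = (110, 199)
# relative to the 10 x 20 basic problem:
size_ratio = normalized(110 * 199, 10 * 20)   # about 109
```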

First, the results of Table 6 show that the stability of our algorithm is not as good as that of the other two. Table 7 shows that our algorithm performs best on average in terms of time but, in terms of the number of iterations, performs slightly worse on average than the other two algorithms. When (m, n) = (110, 199), the constraint matrix of the solved problem is about 109 times larger than that of the basic problem, yet OSBBRA uses only about 14 times the time of the basic problem, whereas the algorithm in Ref. [19] uses about 162 times the time. In the case of (m, n) = (100, 100), whose constraint matrix is 50 times larger than that of the basic problem, we solved the problem in only six times the time of the basic problem, whereas the algorithm of Ref. [19] takes 22 times the time and the algorithm of Ref. [41] takes 54 times the time.

To sum up, in this particular case of fixed p = 2, the computational time of OSBBRA grows more slowly than that of the algorithms in Refs. [19,41]. In the next subsection, we take the case p = 2 as the basic problem to test the growth of the computing-time requirements of OSBBRA compared with Ref. [19].

Testing of Random Problem (LMP3)
The results in Section 4.2.2 show that, in the case p = 2, OSBBRA takes far less time than the algorithm in Ref. [19] to solve relatively large-scale problems, but is less stable. In this subsection, we use a stochastic scheme (LMP3), whose random coefficients take values over a wider range, to verify the growth of the computing-time requirements of our algorithm (measured by r_p, defined below) starting from the base case p = 2. In addition, to compare better with Ref. [19] and avoid unnecessary computation, we extracted only the data in Ref. [19] that can be compared, calculated the same instances with our algorithm, and recorded the experimental results in Table 8, while recording the values of r_p in Table 9. For p = i (i = 2, 3, 4, 5), r_p was obtained from the formula r_p = (AvgTime for p = i)/(AvgTime for p = 2). Because the random coefficients take diverse values, the stability of our algorithm is slightly poor; such a result is acceptable. Furthermore, the results in Table 9 show that the computational requirements of OSBBRA increase more rapidly than those of Ref. [19] only for (p, m, n) = (4, 100, 100), (4, 120, 120), (4, 200, 200), and (5, 100, 100). When n is relatively large compared with p, the value of r_p is relatively small, because our algorithm has a distinct advantage in this case. The results of Tables 4 and 5 also show that, when p ≪ n, the computing-time requirements of the algorithm grow slowly, which explains from another angle why the values of r_p in Table 9 are relatively small and again confirms the advantage of our algorithm in this case.
From the experimental results in Sections 4.2.1-4.2.3, we can summarize that, compared with other algorithms, our algorithm is characterized by high accuracy of the computed optimal value and slightly poor stability, and that, in the case p ≪ n, it is more advantageous than the other algorithms for solving large-scale, high-dimensional problems. In fact, in practical problems, the size of p in problem (LMP) is generally at most 10, and the dimension n of the decision variable is much larger than p. In the branching process, the number of vertices of a subdivided p-dimensional rectangle is 2^p, which is very small compared with the 2^n vertices of an n-dimensional hyper-rectangle; this is the main reason our algorithm performs better than those of Refs. [19,27,38,41] on this kind of large-scale problem. We can also see from Theorem 5 in Ref. [44] that p and n affect the convergence rate of the output-space algorithm.

Conclusions
In this paper, an output-space branch-and-bound reduction algorithm (OSBBRA) is proposed to solve the problem (LMP). Based on a new bilinear-function relaxation technique, the linear relaxation of the equivalent problem (EP) is constructed, and the other components of OSBBRA (the bounding, branching, and rectangle-reduction operations) are given. In Section 4, the feasibility, effectiveness, and other performance characteristics of the algorithm are illustrated by a large number of numerical experiments, which show that the algorithm is particularly effective for high-dimensional problems in the case p ≪ n. The method of this paper can also be directly extended to the linear maximum multiplicative programming problem; broader, indirect extensions are an issue we will consider in future research.

Acknowledgments:
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
LMP linear multiplicative programming
EP equivalent nonlinear programming
LRP linear relaxation programming
T transpose of a vector or matrix