Article

Variant of Constants in Subgradient Optimization Method over Planar 3-Index Assignment Problems

by
Sutitar Maneechai
Department of Mathematics and Statistics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112, Thailand
Math. Comput. Appl. 2016, 21(1), 4; https://doi.org/10.3390/mca21010004
Submission received: 9 October 2015 / Revised: 26 February 2016 / Accepted: 1 March 2016 / Published: 8 March 2016

Abstract

A planar 3-index assignment problem (P3AP) of size $n$ is an NP-complete problem. Its global optimal solution can be determined by a branch and bound algorithm, whose efficiency depends on the quality of the lower and upper bounds of the problem. The subgradient optimization method, an iterative method, can provide a good lower bound. It can be applied at the root node or at a leaf of the branch and bound tree, and under certain conditions the lower bound it produces becomes optimal. The formulas used in this method contain constants whose best values must be determined by computational experiments. In this paper, we examine a variety of initial step length constants and show that their values affect the lower bound obtained. The results show that, for small problem sizes ($n < 20$), the most suitable constants lie in the interval [0.1, 1], while the interval [0.05, 0.1] is best for larger problem sizes ($n \ge 20$).

1. Introduction

Consider a machine scheduling problem with $n$ machines, $n$ tasks, and $n$ time slots. Let $c_{ijk}$ be the cost of assigning machine $i$ to task $j$ in time slot $k$. The problem is to find an assignment that minimizes the total cost and satisfies the following three conditions:
(i)
in every fixed time slot, all machines work in parallel;
(ii)
each machine performs a different task in a different time slot; and
(iii)
after the last time slot, all machines have completed all tasks.
Let $x_{ijk}$, for $i,j,k = 1,2,\ldots,n$, be the decision variables, where $x_{ijk} = 1$ if machine $i$ is assigned to task $j$ in time slot $k$, and $x_{ijk} = 0$ otherwise. The problem can be represented by the following mathematical program:
$$
\begin{aligned}
\text{minimize}\quad & \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} c_{ijk}\,x_{ijk} \\
\text{subject to}\quad & \sum_{i=1}^{n} x_{ijk} = 1, && j,k = 1,2,\ldots,n \\
& \sum_{j=1}^{n} x_{ijk} = 1, && i,k = 1,2,\ldots,n \\
& \sum_{k=1}^{n} x_{ijk} = 1, && i,j = 1,2,\ldots,n \\
& x_{ijk} \in \{0,1\}, && i,j,k = 1,2,\ldots,n
\end{aligned}
\tag{1}
$$
This problem is called the planar 3-index assignment problem (P3AP). It was shown to be NP-complete by Frieze in 1983 [1]. Tabu search heuristics and several approximation algorithms have been developed for the problem. The first approximation algorithm, by Kumar et al. [2], had a performance guarantee of $1/2$; this was improved to $1 - e^{-1}$ by Gomes et al. [3] and to 0.669 by Katz-Rogozhnikov and Sviridenko [4]. Subgradient optimization procedures have been developed in order to create a good lower bound in an exact branch and bound algorithm for solving the problem (see Magos and Miliotis [5]). The idea and general steps of subgradient optimization methods are presented in Section 2. A modified subgradient optimization method for solving P3AP and its computational results are presented in Section 3 and Section 4, respectively.
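To make the constraint structure of Problem (1) concrete, the following minimal sketch (illustrative, not part of the original paper; NumPy and the function name are our assumptions) checks the three constraint families for a candidate assignment array:

```python
import numpy as np

def is_feasible(x: np.ndarray) -> bool:
    """Check the three constraint families of Problem (1) for a binary
    (n, n, n) array x, where x[i, j, k] = 1 means machine i does task j
    in time slot k."""
    return (
        np.all(x.sum(axis=0) == 1)      # one machine per (task, slot)
        and np.all(x.sum(axis=1) == 1)  # one task per (machine, slot)
        and np.all(x.sum(axis=2) == 1)  # one slot per (machine, task)
    )

# A feasible solution corresponds to a Latin square: here x[i, j, k] = 1
# exactly when k = (i + j) mod n.
n = 4
x = np.fromfunction(lambda i, j, k: (i + j) % n == k, (n, n, n), dtype=int)
print(is_feasible(x))  # True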

2. Subgradient Optimization Methods and Modifications

The subgradient optimization method was developed to solve non-differentiable optimization problems (see the monograph of Shor [6]). One important source of such problems is the Lagrangean relaxation of constrained mathematical programs of the following form:
$$
\begin{aligned}
Z = \text{minimize}\quad & cx \\
\text{subject to}\quad & Ax \le b \\
& Dx \le d \\
& x \ge 0 \text{ and integer}
\end{aligned}
\tag{2}
$$
where $x$ is an $(n \times 1)$ column vector, $c$ is a $(1 \times n)$ row vector, $b$ is an $(m \times 1)$ column vector, $d$ is an $(r \times 1)$ column vector, $A$ is an $(m \times n)$ coefficient matrix, and $D$ is an $(r \times n)$ coefficient matrix. The structure of Problem (2) is suitable for applying Lagrangean relaxation [7] in order to construct a more easily solvable problem. The constraints $Ax \le b$ are incorporated into the objective function using the Lagrangean multiplier vector $u = (u_i)$, where $u_i \ge 0$ for $i = 1,2,\ldots,m$. The corresponding Lagrangean relaxation problem can then be written as follows:
$$
\begin{aligned}
L(u) = \text{minimize}\quad & cx + u(Ax - b) \\
\text{subject to}\quad & Dx \le d \\
& x \ge 0 \text{ and integer}
\end{aligned}
\tag{3}
$$
Since $u_i \ge 0$ for all $i = 1,2,\ldots,m$, the objective value satisfies $L(u) \le Z$ for every $u$. Let $L^* = \max\{L(u) : u \ge 0\}$. Then $L^*$ is the best lower bound obtainable by the Lagrangean relaxation method. In order to find $L^*$, successive values of $u_i$ need to be determined, and the subgradient optimization method is the most widely used iterative method for doing so.
At the $t$-th iteration of the method, suppose that $x^{(t)}$ is an optimal solution to Problem (3) with the Lagrangean multiplier vector $u^{(t)}$. The vector $Ax^{(t)} - b$ provides a subgradient direction $\mu^{(t)}$ of $L(u)$ at the point $u^{(t)}$. At the $(t+1)$-th iteration, the Lagrangean multiplier vector $u^{(t+1)}$ is determined as follows:
$$
u^{(t+1)} = u^{(t)} + s^{(t)} \mu^{(t)}
\tag{4}
$$
where $s^{(t)}$ is a step length, commonly determined as
$$
s^{(t)} = \lambda^{(t)}\,\frac{L(u^*) - L(u^{(t)})}{\|\mu^{(t)}\|^2}
\tag{5}
$$
where $L(u^*)$ is the optimal objective value of Problem (3), and $\lambda^{(t)}$ is the step length constant of the method at the $t$-th iteration. In general, the method converges to the optimal solution if one of the following conditions holds [8] (a code sketch of the update appears after the conditions):
(i)
$s^{(t)} \ge 0$, $\lim_{t \to \infty} s^{(t)}\|\mu^{(t)}\| = 0$, and $\sum_{t=1}^{\infty} s^{(t)}\|\mu^{(t)}\| = \infty$;
(ii)
$s^{(t)} \ge 0$, $\lim_{t \to \infty} s^{(t)} = 0$, $\sum_{t=1}^{\infty} s^{(t)} = \infty$, and $\{\mu^{(t)}\}$ is bounded for $t \ge 1$.
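As a concrete illustration of the update in Equations (4) and (5), here is a minimal sketch in Python; the function name and the projection back onto $u \ge 0$ are our additions, not prescriptions from the cited methods:

```python
import numpy as np

def subgradient_update(u, mu, L_u, target, lam):
    """One multiplier update following Equations (4)-(5).

    u      : current Lagrangean multipliers u^(t)  (1-D array)
    mu     : subgradient mu^(t) = A x^(t) - b at u^(t)
    L_u    : Lagrangean value L(u^(t))
    target : stand-in for L(u*); in practice a known upper bound (Eq. (7))
    lam    : step length constant lambda^(t)
    """
    norm_sq = float(np.dot(mu, mu))
    if norm_sq == 0.0:          # mu = 0: u^(t) already maximizes L(u)
        return u, 0.0
    s = lam * (target - L_u) / norm_sq       # Equation (5)
    u_next = np.maximum(u + s * mu, 0.0)     # Equation (4), kept in u >= 0
    return u_next, s
```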
The step length $s^{(t)}$ plays a crucial role in any subgradient optimization method, and different choices of step lengths and target values have been presented [9]. The step length constant $\lambda^{(t)}$ in turn largely determines $s^{(t)}$. Many choices of the $\lambda$-value have been proposed: some authors require $0 < \lambda^{(t)} \le 1$ [7], but most others use $0 < \lambda^{(t)} \le 2$ [5,8,10]. Based on their computational results, Held et al. [10] reported the following rules for choosing the $\lambda$-value:
(i)
set $\lambda = 2$ for $2n$ iterations, where $n$ is a measure of the size of the problem; and
(ii)
successively halve both the value of $\lambda$ and the number of iterations until $s^{(t)}$ is sufficiently small (a sketch of this schedule is given below).
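A minimal sketch of this halving schedule in Python (the generator name and the terminal block size are our illustrative choices):

```python
def held_schedule(n, lam0=2.0):
    """Yield lambda values following Held et al. [10]: lambda = 2 for the
    first 2n iterations, after which both lambda and the block length are
    successively halved."""
    lam, block = lam0, 2 * n
    while block >= 1:
        for _ in range(block):
            yield lam
        lam /= 2.0
        block //= 2

# Example: for n = 4 the schedule gives eight iterations at 2.0,
# then four at 1.0, two at 0.5, and one at 0.25.
print(list(held_schedule(4)))
```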
These rules were applied in the subgradient optimization algorithm for solving the classical assignment problem:
$$
\begin{aligned}
\text{minimize}\quad & \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\,x_{ij} \\
\text{subject to}\quad & \sum_{i=1}^{n} x_{ij} = 1, && j = 1,\ldots,n \\
& \sum_{j=1}^{n} x_{ij} = 1, && i = 1,\ldots,n \\
& x_{ij} \ge 0, && i,j = 1,\ldots,n
\end{aligned}
\tag{6}
$$
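Each such assignment problem can be solved exactly in polynomial time. As an aside, a minimal illustration using SciPy's Hungarian-style solver (our choice of tool, not one named in the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

rng = np.random.default_rng(0)
c = rng.integers(0, 101, size=(6, 6))   # random 6x6 cost matrix
rows, cols = linear_sum_assignment(c)   # optimal assignment for Problem (6)
print(int(c[rows, cols].sum()))         # minimum total cost
```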
A modified subgradient optimization method for solving the classical assignment problem was presented in 1981 by Bazaraa and Sherali [11]; it uses a simple subgradient to identify the search direction. Another modified subgradient method was proposed by Fumero in 2001 [8]. The main difference between the two methods lies in how the search direction is chosen. These two methods, as well as the method proposed by Held et al., were compared for efficiency by Fumero [8]; the results showed that Fumero's modified method provided higher objective values in the early iterations.
In 1981, Fisher [12] supported the rules for choosing the $\lambda$-value proposed by Held et al., confirming that the $\lambda$-value must lie between 0 and 2 for the subgradient procedure to converge to the optimum. However, the literature does not prescribe an exact value for the step length constant $\lambda$; a common practice is to use a decreasing sequence of $\lambda$-values.
Since the optimal objective value of Problem (3), $L(u^*)$, is unknown, most applications adopt a known upper bound ($UB$) in the step length formula, Equation (5). The formula then becomes
$$
s^{(t)} = \lambda^{(t)}\,\frac{UB - L(u^{(t)})}{\|\mu^{(t)}\|^2}
\tag{7}
$$
To avoid the zigzagging behavior caused by the memoryless nature of the subgradient method and to improve its convergence rate, modified subgradient techniques have been proposed [13,14,15,16]. These techniques use a suitable combination of the current and previous subgradients, called the search direction:
$$
d^{(t)} = \alpha^{(t)} \mu^{(t)} + \beta^{(t)} d^{(t-1)}
\tag{8}
$$
where $\alpha^{(t)}$ and $\beta^{(t)}$ are search direction constants at the $t$-th iteration. In the $(t+1)$-th iteration, the Lagrangean multiplier vector $u^{(t+1)}$ is then determined as follows:
$$
u^{(t+1)} = u^{(t)} + s^{(t)} d^{(t)}
\tag{9}
$$
The convergence properties of these subgradient schemes were analyzed by Kim and Ahn [17], who showed that they are stronger than those of the standard method.
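A minimal sketch of one deflected update combining Equations (7)-(9); the fixed values of $\alpha^{(t)}$ and $\beta^{(t)}$ are placeholders, since the schemes in [13,14,15,16] choose them adaptively:

```python
import numpy as np

def deflected_update(u, mu, d_prev, L_u, ub, lam, alpha=1.0, beta=0.5):
    """One deflected subgradient step: d = alpha*mu + beta*d_prev (Eq. (8)),
    s as in Eq. (7), u <- u + s*d (Eq. (9))."""
    norm_sq = float(np.dot(mu, mu))
    if norm_sq == 0.0:                  # mu = 0: current multipliers optimal
        return u, d_prev
    d = alpha * mu + beta * d_prev      # Equation (8)
    s = lam * (ub - L_u) / norm_sq      # Equation (7)
    # Projection onto u >= 0 is our safeguard, not explicit in Eq. (9).
    return np.maximum(u + s * d, 0.0), d
```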

3. Subgradient Optimization Method for the Planar 3-Index Assignment Problem

The structure of P3AP is suitable for applying Lagrangean relaxation [5]. The relaxation is formed by incorporating the constraints of type $\sum_{k=1}^{n} x_{ijk} = 1$ into the objective function, so that Problem (1) becomes the following Lagrangean relaxation problem:
$$
\begin{aligned}
Z(u) = \text{minimize}\quad & \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} c_{ijk}\,x_{ijk} + \sum_{i=1}^{n}\sum_{j=1}^{n} u_{ij}\!\left(\sum_{k=1}^{n} x_{ijk} - 1\right) \\
\text{subject to}\quad & \sum_{i=1}^{n} x_{ijk} = 1, && j,k = 1,2,\ldots,n \\
& \sum_{j=1}^{n} x_{ijk} = 1, && i,k = 1,2,\ldots,n \\
& x_{ijk} \in \{0,1\}, && i,j,k = 1,2,\ldots,n
\end{aligned}
\tag{10}
$$
For any vector $u = (u_{ij})$, $Z(u)$ is a lower bound on the optimal value of Problem (1), and the best lower bound is found by solving $\max_u Z(u)$ over all possible vectors $u$. A solution can be approximated by solving Problem (10) for a sequence of vectors $u$ obtained through a subgradient optimization method. In 1980, Burkard and Froehlich [18] reported that the method produced good bounds for Problem (1), but gave no details of the procedure used. Later, in 1994, Magos and Miliotis [5] implemented a subgradient optimization procedure modified from a subgradient scheme proposed by Camerini et al. [13]. Their procedure employed many choices of the step length constant $\lambda$, but no details of the choices were given.
In this paper, we present variants of the $\lambda$-values in the subgradient optimization method at the root node of the tree in the branch and bound algorithm for solving the problem. The procedure is modified from the one proposed by Magos and Miliotis [5]. The main idea is unchanged; the differences are: (1) the stopping criterion; (2) the rule for decreasing the step length constant; and (3) the step length formula. The stopping criterion and the decreasing rule in the modified procedure are much simpler, as shown in Table 1.
For any fixed index $k$, Problem (10) reduces to a classical assignment problem, which can be solved by the Hungarian method. The solutions $x_{ijk}$ over all fixed indices $k$ form a subgradient direction vector $\mu = (\mu_{ij})$, where $\mu_{ij} = \sum_{k=1}^{n} x_{ijk} - 1$. A search direction vector at the current iteration $t$, denoted $d^{(t)}$, is then defined from the subgradient direction $\mu^{(t)}$ and the previous search direction $d^{(t-1)}$:
$$
d_{ij}^{(t)} = \mu_{ij}^{(t)} + \beta^{(t)} d_{ij}^{(t-1)}
\tag{11}
$$
where
$$
\beta^{(t)} =
\begin{cases}
-1.5\,\dfrac{\sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}^{(t-1)}\mu_{ij}^{(t)}}{\|d^{(t-1)}\|^2}, & \text{if } \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}^{(t-1)}\mu_{ij}^{(t)} < 0, \\[2ex]
0, & \text{otherwise.}
\end{cases}
\tag{12}
$$
This idea is adopted from Camerini et al. [13]. The current step length, $s^{(t)}$, is defined as
$$
s^{(t)} = \lambda\,\frac{UB - Z^*(u^{(t)})}{\|\mu^{(t)}\|^2}
\tag{13}
$$
where $UB$ is the current upper bound of Problem (1), and $Z^*(u^{(t)})$ is the objective function value of Problem (10) with the current Lagrangean multiplier vector $u^{(t)}$. The step length constant value, $\lambda$, is discussed in Section 4. Finally, the Lagrangean multiplier vector for the next iteration is updated as in Formula (9).
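Putting the pieces together, the following is a minimal sketch of one run of the modified procedure of this section, with SciPy's `linear_sum_assignment` standing in for the Hungarian method; the fixed iteration budget replaces the stopping test and the $\lambda$-decreasing rule of Table 1, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

def p3ap_lower_bound(c, ub, lam=0.075, max_iter=200):
    """Sketch of the modified subgradient procedure for P3AP.

    c  : (n, n, n) cost array with entries c[i, j, k]
    ub : known upper bound on Problem (1)
    Returns the best Lagrangean lower bound Z(u) found.
    """
    n = c.shape[0]
    u = np.zeros((n, n))            # multipliers u_ij (free in sign)
    d = np.zeros((n, n))            # previous search direction d^(t-1)
    best = -np.inf
    for _ in range(max_iter):
        # Problem (10) splits into one classical AP per fixed index k.
        z = -u.sum()                # constant term -sum_ij u_ij
        mu = -np.ones((n, n))       # subgradient mu_ij = sum_k x_ijk - 1
        for k in range(n):
            cost = c[:, :, k] + u
            rows, cols = linear_sum_assignment(cost)
            z += cost[rows, cols].sum()
            mu[rows, cols] += 1.0
        best = max(best, z)
        # Search direction, Equations (11)-(12) (Camerini et al. [13]).
        dot = float((d * mu).sum())
        beta = -1.5 * dot / float((d * d).sum()) if dot < 0.0 else 0.0
        d = mu + beta * d
        # Step length, Equation (13); multiplier update as in Formula (9).
        norm_sq = float((mu * mu).sum())
        if norm_sq == 0.0:          # relaxed solution is feasible: optimal
            break
        u = u + lam * (ub - z) / norm_sq * d
    return best
```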

4. Computational Results and Conclusions

In this section, we present the computational results from applying the modified subgradient optimization procedure with several variations of the initial step length constant, $\lambda_0$. The procedure was run on 1500 instances of P3AP of sizes $n$ = 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 30, 40, 60, 80, with 100 instances per problem size. All cost coefficients $c_{ijk}$, $i,j,k = 1,2,\ldots,n$, are integers sampled from a uniform distribution between 0 and 100. The dataset is available at www.math.psu.ac.th/html/th/component/content/article/75-person_detial/124-sutitar-m.html.
The purpose of the modified subgradient optimization procedure is to generate the best possible lower bound for Problem (1). The procedure yields different lower bounds for different $\lambda_0$, even on problems of the same size. The best lower bound found by the procedure is much better than the initial lower bound generated by admissible transformations (see Burkard [19]). In the procedure of Magos and Miliotis [5], the authors set $\lambda_0 = 1.85$ at the root node of the branch and bound tree. For other nodes, they considered two factors affecting the value: the magnitude of the duality gap, $\pi = \text{lower bound}/\text{upper bound}$, and the location of the node with respect to the root of the tree, denoted $\rho$. Their settings, obtained after several tests, are shown in Table 2.
In our implementation, we examine the behavior of the lower bounds for different choices of $\lambda_0$. Figure 1 depicts the average percentage increase of the lower bound when $\lambda_0$ = 1.85, 1.5, 1, and 0.01 are employed. After applying the modified subgradient optimization procedure, the lower bounds of all instances increased from the initial lower bound generated by an admissible transformation. The figure also indicates that a higher $\lambda_0$ should be employed for smaller problem sizes, whereas a smaller $\lambda_0$ should be used for bigger problem sizes.
In most practice, authors set the value of the step length constant between 0 and 2, as discussed in Section 2. Our computational results show that choosing $\lambda_0 \ge 2$ causes the minimum objective function value of Problem (10) (the sink point in Figure 2 and Figure 3) to become deeper as $\lambda_0$ grows. This value is extremely low even for small problem sizes, and the effect becomes a major problem for large ones. Furthermore, the objective function value at the last iteration is then much lower than the values at earlier iterations, as seen in Figure 2; consequently, it is unnecessary to run the procedure until the stopping criterion is met. Moreover, in terms of lower bound behavior, $\lambda_0 < 2$ (in our implementation, $\lambda_0 = 1.85$) dominates the cases with $\lambda_0 \ge 2$ (see Figure 2 and Figure 3). It is therefore unnecessary to choose $\lambda_0$ values greater than 2.
Consider the sequence of objective function values for a given instance; an example is shown in Figure 3. The values decrease over some early iterations, then quickly increase for several iterations after hitting the minimum value (the sink point in the figure). After some iterations (about 30 in most cases), the values increase only faintly until the stopping criterion is met. The objective function value in some iterations may be lower than in previous iterations, but this does not affect the overall trend of the sequence, as shown in Figure 3.
For every instance of problem sizes $n$ = 6, 8, 10, 12, 14, 16, 18, 20, the highest average percentage increase of the lower bound occurs when $\lambda_0$ lies between 0.1 and 1, as shown in Figure 4 and Table 3. These results support the practice of choosing a $\lambda$-value between 0 and 2. For the instances of the large problem sizes $n$ = 30, 40, 60, and 80, however, the highest value occurs when $\lambda_0 = 0.075$ (see Table 3). Furthermore, our computational results show that setting $\lambda_0$ between 0.05 and 0.1 requires the lowest number of iterations (see Table 4).
Since P3AP is NP-complete [1], each problem has its own characteristics and finding a global optimal solution is not easy. The only known algorithm for finding the global optimal solution is a branch and bound algorithm, whose efficiency depends on the best lower and upper bounds found during the search. The subgradient optimization method is one method for generating a good lower bound; it can be applied at the root node or at a leaf of the branch and bound tree, and under some conditions it may lead to an optimal solution. The exact values of some constants in the method must be determined by computational experiments, from which the best choice can be made. In our experiments on the initial step length constant $\lambda_0$, the most suitable values lie in the interval [0.1, 1] for small problem sizes ($n < 20$) and in [0.05, 0.1] for bigger problem sizes ($n \ge 20$).

Conflicts of Interest

The author declares no conflict of interest.

References

1. Frieze, A.M. Complexity of a 3-dimensional assignment problem. Eur. J. Oper. Res. 1983, 13, 161–164.
2. Kumar, S.; Russell, A.; Sundaram, R. Approximating Latin square extensions. Algorithmica 1999, 24, 128–138.
3. Gomes, C.; Regis, R.; Shmoys, D. An improved approximation algorithm for the partial Latin square extension problem. Oper. Res. Lett. 2004, 32, 479–484.
4. Katz-Rogozhnikov, D.; Sviridenko, M. Planar Three-Index Assignment Problem via Dependent Contention Resolution; IBM Research Report; IBM Thomas J. Watson Research Center: Yorktown Heights, NY, USA, 2010.
5. Magos, D.; Miliotis, P. An algorithm for the planar three-index assignment problem. Eur. J. Oper. Res. 1994, 77, 141–153.
6. Shor, N. Minimization Methods for Non-Differentiable Functions; Springer Series in Computational Mathematics; Springer-Verlag: Berlin/Heidelberg, Germany, 1985.
7. Sherali, H.D.; Choi, G. Recovery of primal solutions when using subgradient methods to solve Lagrangian duals of linear programs. Oper. Res. Lett. 1996, 19, 105–113.
8. Fumero, F. A modified subgradient algorithm for Lagrangean relaxation. Comput. Oper. Res. 2001, 28, 33–52.
9. Goffin, J.L. On convergence rates of subgradient optimization methods. Math. Program. 1977, 13, 329–347.
10. Held, M.; Wolfe, P.; Crowder, H.P. Validation of subgradient optimization. Math. Program. 1974, 6, 62–88.
11. Bazaraa, M.S.; Sherali, H.D. On the choice of step size in subgradient optimization. Eur. J. Oper. Res. 1981, 7, 380–388.
12. Fisher, M.L. The Lagrangian relaxation method for solving integer programming problems. Manag. Sci. 1981, 27, 1–18.
13. Camerini, P.; Fratta, L.; Maffioli, F. On improving relaxation methods by modified gradient techniques. Math. Program. Study 1975, 3, 26–34.
14. Kim, S.; Koh, S.; Ahn, H. Two-direction subgradient method for non-differentiable optimization problems. Oper. Res. Lett. 1987, 6, 43–46.
15. Kim, S.; Koh, S. On Polyak's improved subgradient method. J. Optim. Theory Appl. 1988, 57, 355–360.
16. Norkin, V.N. Method of nondifferentiable function minimization with averaging of generalized gradients. Kibernetika 1980, 6, 86–89.
17. Kim, S.; Ahn, H. Convergence of a generalized subgradient method for non-differentiable convex optimization. Math. Program. 1991, 50, 75–80.
18. Burkard, R.E.; Froehlich, K. Some remarks on 3-dimensional assignment problems. Methods Oper. Res. 1980, 36, 31–36.
19. Burkard, R.E. Admissible transformations and assignment problems. Vietnam J. Math. 2007, 35, 373–386.
Figure 1. The average percentage increase of the lower bound after applying the modified subgradient optimization method with different $\lambda_0$-values.
Figure 2. Objective function values, $Z^*(u^{(t)})$, at each iteration $t$, generated by the modified subgradient optimization method for solving an instance of size 8 when $\lambda_0 \ge 1.85$.
Figure 3. Objective function values, $Z^*(u^{(t)})$, at each iteration $t$, generated by the modified subgradient optimization method for solving an instance of size 8 when $\lambda_0 \le 1.85$.
Figure 4. The average percentage increase of the lower bound for the P3AP instances of size $n$ = 6, 8, 10, 12, 14, 16, 18, 20 generated by the modified subgradient optimization method with $\lambda_0$ = 0.001, 0.01, 0.05, 0.075, 0.1, 0.3, 0.5, 0.75, 1.
Table 1. The stopping criterion, the rule for decreasing the step length constant, and the step length formula in the two subgradient optimization procedures.

Stopping criterion.
- Magos and Miliotis's procedure [5]: $2n$ iterations are allowed from the start of the procedure, and subsequently $n^2$ further iterations are granted after each improvement of at least 25% in the lower bound.
- Modified procedure: if the function values in the current and previous iterations differ by no more than 0.1%, the procedure stops.

Rule for decreasing the step length constant.
- Magos and Miliotis's procedure [5]: $n^2$ iterations with the initial $\lambda$-value are allowed, plus an additional $n^2$ iterations for every 1% improvement. If no improvement is made within the iterations allowed, set $\lambda = \lambda/1.5$ and continue the procedure with that $\lambda$-value for at least $n^2$ iterations.
- Modified procedure: set $\lambda = \lambda/1.5$ on any iteration in which no improvement occurs.

Step length formula at iteration $t$.
- Magos and Miliotis's procedure [5]: $s^{(t)} = \lambda\,(UB - Z^*(u^{(t)}))/\|d^{(t)}\|^2$
- Modified procedure: $s^{(t)} = \lambda\,(UB - Z^*(u^{(t)}))/\|\mu^{(t)}\|^2$
Table 2. The initial step length constant $\lambda_0$ presented by Magos and Miliotis [5].

| | $\pi \ge 0.95$ | $0.95 > \pi \ge 0.7$ | $0.7 > \pi \ge 0.6$ | $0.6 > \pi$ |
|---|---|---|---|---|
| $\rho \le 10$ | 1.15 | 1.35 | 1.65 | 1.85 |
| $10 < \rho \le 20$ | 0.60 | 0.70 | 0.80 | 0.90 |
| $20 < \rho \le 30$ | 0.50 | 0.60 | 0.70 | 0.80 |
| $30 < \rho$ | 0.35 | 0.55 | 0.65 | 0.75 |
Table 3. The average percentage increase of the lower bound for the P3AP generated by the modified subgradient optimization method with different $\lambda_0$-values.

| Problem size | $\lambda_0$ = 0.001 | 0.01 | 0.05 | 0.075 | 0.1 | 0.3 | 0.5 | 0.75 | 1 |
|---|---|---|---|---|---|---|---|---|---|
| 5 | 19.383 | 21.886 | 23.360 | 23.517 | 23.388 | 23.664 | 23.629 | 23.677 | **23.688** |
| 6 | 24.604 | 27.864 | 29.917 | 30.169 | 30.292 | **30.348** | 30.342 | 30.249 | 30.347 |
| 7 | 29.387 | 32.801 | 35.053 | 35.311 | 35.422 | 35.513 | 35.490 | 35.505 | **35.521** |
| 8 | 34.668 | 38.362 | 40.744 | 40.958 | 41.054 | 41.104 | **41.115** | 41.091 | 41.086 |
| 9 | 37.907 | 41.616 | 43.939 | 44.171 | 44.253 | 44.263 | 44.309 | **44.313** | 44.288 |
| 10 | 42.018 | 46.064 | 48.332 | 48.497 | 48.585 | **48.653** | 48.642 | 48.639 | 48.588 |
| 12 | 47.469 | 51.489 | 53.501 | 53.517 | 53.668 | **53.686** | 53.679 | 53.682 | 53.665 |
| 14 | 51.941 | 55.728 | 57.421 | 57.646 | **57.676** | 57.667 | 57.668 | 57.673 | 57.632 |
| 16 | 56.453 | 59.814 | 61.398 | 61.394 | 61.476 | 61.472 | 61.472 | **61.480** | 61.398 |
| 18 | 60.624 | 63.762 | 65.060 | 65.104 | 65.101 | 65.104 | **65.118** | 65.089 | 65.087 |
| 20 | 63.162 | 66.230 | 67.383 | 67.366 | 67.414 | **67.420** | 67.408 | 67.371 | 67.265 |
| 30 | 74.366 | 76.614 | 77.174 | **77.197** | 77.177 | 77.175 | 77.166 | 77.031 | 76.950 |
| 40 | 81.060 | 83.725 | 84.042 | **84.051** | 84.043 | 84.028 | 83.899 | 83.818 | 83.573 |
| 60 | 93.747 | 95.043 | 95.160 | **95.180** | 95.169 | 95.133 | 94.999 | 94.516 | 93.945 |
| 80 | 104.453 | 104.469 | 104.510 | **104.525** | 104.491 | 104.428 | 104.174 | 103.484 | 102.454 |

Note: Each bold value shows the highest average percentage increase of the lower bound for each problem size.
Table 4. The average number of iterations in the modified subgradient optimization method for each problem size with different $\lambda_0$-values.

| Problem size | $\lambda_0$ = 0.001 | 0.01 | 0.05 | 0.075 | 0.1 | 0.3 | 0.5 | 0.75 | 1 | 1.25 | 1.5 | 1.85 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 349.7 | 62.2 | 45.2 | **40.2** | 42.0 | 54.8 | 48.2 | 49.8 | 60.7 | 54.4 | 55.8 | 88.6 |
| 6 | 366.4 | 80.0 | **59.4** | 60.7 | 60.6 | 63.1 | 64.3 | 68.7 | 76.5 | 75.6 | 73.2 | 97.2 |
| 7 | 386.0 | 91.8 | 69.8 | 68.9 | **68.8** | 75.0 | 76.0 | 79.2 | 90.4 | 87.3 | 83.5 | 105.6 |
| 8 | 380.8 | 101.1 | 76.1 | **75.6** | 76.1 | 83.2 | 87.4 | 89.2 | 99.9 | 96.5 | 94.2 | 111.0 |
| 9 | 360.2 | 108.4 | **78.3** | 80.4 | 80.6 | 84.3 | 90.0 | 95.4 | 103.2 | 99.2 | 96.0 | 108.0 |
| 10 | 383.5 | 110.3 | 80.9 | **80.0** | 82.0 | 90.0 | 93.6 | 97.7 | 107.4 | 104.2 | 100.6 | 110.5 |
| 12 | 371.0 | 120.0 | **85.4** | 86.5 | 89.5 | 97.2 | 98.1 | 107.2 | 116.7 | 113.2 | 109.2 | 108.4 |
| 14 | 378.5 | 127.3 | **90.2** | 93.9 | 97.2 | 99.8 | 106.1 | 115.5 | 121.3 | 120.6 | 115.6 | 111.4 |
| 16 | 381.5 | 132.2 | **93.3** | 96.5 | 98.5 | 104.3 | 113.2 | 117.8 | 130.0 | 122.5 | 120.8 | 111.3 |
| 18 | 364.3 | 136.0 | 101.9 | **100.5** | 101.0 | 108.9 | 115.7 | 121.4 | 133.5 | 131.7 | 125.8 | 116.8 |
| 20 | 356.3 | 141.1 | **100.9** | 102.0 | 103.7 | 113.5 | 121.3 | 125.4 | 138.1 | 132.3 | 126.2 | 112.1 |
| 30 | 322.2 | 162.5 | **111.8** | 116.5 | 115.9 | 125.7 | 129.7 | 140.5 | 152.9 | 150.1 | 146.7 | 114.4 |
| 40 | 296.6 | 172.8 | 121.4 | 124.6 | 125.2 | 139.0 | 144.3 | 150.7 | 168.9 | 163.4 | 156.7 | **119.2** |
| 60 | 264.7 | 204.4 | 137.2 | 141.5 | 139.2 | 152.7 | 162.2 | 172.6 | 192.6 | 189.0 | 184.4 | **131.1** |
| 80 | 247.8 | 242.3 | 154.0 | 152.2 | 152.9 | 170.6 | 182.8 | 189.3 | 249.0 | 218.0 | 200.2 | **148.1** |

Note: Each bold value shows the lowest average number of iterations for each problem size.
