A Simple and Efficient Local Search Algorithm for the Machine Reassignment Problem
Abstract
1. Introduction
2. Related Work
3. Machine Reassignment Problem
4. Our Approach: SMaRT
4.1. General Structure of the Algorithm
- Preprocessing of the data: the aim is to pre-compute useful information to reduce the search space.
- Hill Climbing procedures: Hill Climbing has two steps. The first one, HC1, focuses on overloaded machines: it tries to move the biggest processes away from the machines that most increase the objective function value. The second step, HC2, uses the feasible solution obtained by HC1 as its initial solution. It applies Shift and Swap moves and changes the solution using the first-improvement criterion.
- Simulated Annealing (SA): This approach also uses Shift and Swap moves and includes a strategy to perform dynamic control of the temperature. A high-level sketch of the resulting pipeline is given after this list.
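To make the structure above concrete, the following minimal Python sketch shows only the control flow of the three phases and a shared time budget (the 300 s ROADEF limit used later in the experiments). The function names (preprocess, hc1, hc2, simulated_annealing) and their trivial placeholder bodies are assumptions for illustration; they are not the authors' implementation.

```python
# A minimal sketch of the three-phase structure listed above (see also the more detailed
# sketches in the following subsections). The phase functions are trivial placeholders;
# only the control flow and the shared time budget are meant to be illustrative.
import time

def preprocess(instance):            # placeholder: pre-compute data to reduce the search space
    return {}

def hc1(instance, data, solution):   # placeholder: repair overloaded machines (feasible solution)
    return solution

def hc2(instance, data, solution, budget):   # placeholder: first-improvement shift/swap hill climbing
    return solution

def simulated_annealing(instance, data, solution, budget):  # placeholder: shift/swap SA
    return solution

def smart(instance, initial_solution, time_limit=300.0):
    start = time.time()
    remaining = lambda: time_limit - (time.time() - start)
    data = preprocess(instance)
    s = hc1(instance, data, initial_solution)
    s = hc2(instance, data, s, remaining())
    return simulated_annealing(instance, data, s, remaining())

print(smart(instance=None, initial_solution=[0, 1, 2]))
```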
4.2. Preprocessing of the Data
4.3. Hill Climbing
Algorithm 1: HC1 algorithm
- Select the most loaded machines (ordered by total cost) and, from them, select the processes with the largest size, ordered according to Equation (2) (Line 5). The ordered lists of machines are stored in dedicated data structures that are initialized during preprocessing, taking fixed processes into account, and are updated according to the structure of the current solution.
- Apply a shift move to the selected processes, relocating them to other machines (excluding the most loaded machines of that iteration), maintaining feasibility and looking for the best improvement over all possible reassignments (Line 11).
- Once the selected shift is applied in the current iteration, the machines are reordered and a new iteration starts.
- Choose the first machine among the most overloaded ones in which at least one process can be moved to any of the remaining machines with an improvement. When checking the possible moves for each process, choose the move that yields the best improvement among all processes on the same machine. A sketch of this procedure is given after this list.
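The following self-contained Python sketch illustrates the HC1 idea on a toy model (random process sizes, a capacity-overload objective). The data model, the parameters k_machines and k_processes, and all function names are illustrative assumptions rather than the paper's code; only the strategy follows the description above: take the largest processes from the most loaded machines and apply the best-improving feasible shift, excluding the most loaded machines as targets.

```python
# HC1-style repair on a toy model: move the biggest processes off the most loaded machines
# using the best-improving feasible shift.
import random

random.seed(1)
n_machines, n_processes = 5, 30
capacity = [100] * n_machines
size = [random.randint(5, 25) for _ in range(n_processes)]       # toy process "size" (cf. Equation (2))
assign = [random.randrange(n_machines) for _ in range(n_processes)]

def load(m, a):
    return sum(size[p] for p in range(n_processes) if a[p] == m)

def cost(a):
    # toy objective: total overload over capacities (stands in for the load-cost term)
    return sum(max(0, load(m, a) - capacity[m]) for m in range(n_machines))

def hc1_step(a, k_machines=2, k_processes=3):
    """One HC1 iteration: return an improved assignment, or None if no improving shift exists."""
    worst = sorted(range(n_machines), key=lambda m: load(m, a), reverse=True)[:k_machines]
    best_move, best_cost = None, cost(a)
    for m in worst:
        procs = sorted((p for p in range(n_processes) if a[p] == m),
                       key=lambda p: size[p], reverse=True)[:k_processes]
        for p in procs:
            for target in range(n_machines):
                if target in worst or load(target, a) + size[p] > capacity[target]:
                    continue                      # keep feasibility; skip the most loaded machines
                trial = a.copy()
                trial[p] = target
                c = cost(trial)
                if c < best_cost:                 # best improvement over all candidate shifts
                    best_move, best_cost = trial, c
    return best_move

current = assign
while True:
    nxt = hc1_step(current)
    if nxt is None:
        break
    current = nxt
print("initial cost:", cost(assign), "-> after HC1:", cost(current))
```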
Algorithm 2: HC2 algorithm
- Shift of processes: A process is removed from one machine and inserted into another. This move provides variability because it increases the number of processes on one machine and decreases it on another.
- Swap of processes: Two processes from different machines are exchanged, each being reassigned to the other's machine. This move causes larger changes to the candidate solution in a single step and increases diversification.
- Generate a new neighbor by applying a shift to the current solution.
- Generate a new neighbor by applying a swap to the same current solution.
- Selection criterion: The best neighbor generated by the two moves is selected when it is better than the current solution. Ties are broken by choosing the swap move, which gives more diversity (Line 3). A sketch of one such iteration is given after this list.
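Below is a minimal sketch of one HC2 iteration on a toy model: one shift neighbor and one swap neighbor are generated from the current solution, the better of the two is kept only if it improves the current cost, and ties are broken in favor of the swap. The data model, names, and toy objective are assumptions for illustration, not the paper's code.

```python
# HC2-style step on a toy model: compare one shift neighbor and one swap neighbor,
# keep the better one only if it improves the current solution (ties go to the swap).
import random

random.seed(2)
n_machines, n_processes = 4, 20
capacity = [60] * n_machines
size = [random.randint(3, 15) for _ in range(n_processes)]
assign = [random.randrange(n_machines) for _ in range(n_processes)]

def cost(a):
    loads = [0] * n_machines
    for p, m in enumerate(a):
        loads[m] += size[p]
    return sum(max(0, l - c) for l, c in zip(loads, capacity))

def shift_neighbor(a):
    b = a.copy()
    p = random.randrange(n_processes)
    b[p] = random.choice([m for m in range(n_machines) if m != a[p]])
    return b

def swap_neighbor(a):
    b = a.copy()
    p, q = random.sample(range(n_processes), 2)
    if a[p] != a[q]:
        b[p], b[q] = a[q], a[p]          # exchange the two processes' machines
    return b

def hc2_step(a):
    """Return an improving neighbor (first improvement over the pair of moves) or None."""
    c = cost(a)
    s, w = shift_neighbor(a), swap_neighbor(a)
    cs, cw = cost(s), cost(w)
    best, best_c = (w, cw) if cw <= cs else (s, cs)   # tie broken toward the swap move
    return best if best_c < c else None

current = assign
for _ in range(10000):                    # bounded number of attempts for the sketch
    nxt = hc2_step(current)
    if nxt is not None:
        current = nxt
print("cost:", cost(assign), "->", cost(current))
```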
4.4. Simulated Annealing (SA)
Algorithm 3: SMaRT algorithm
- If the difference is negative or zero, that is, the neighbor improves or maintains the value of the objective function, it is immediately accepted (Line 10).
- If the difference is positive, the acceptance of the neighbor is determined by the temperature criterion (Line 14): the probability of acceptance is exp(−Δ/T), where Δ is the difference in the objective function value and T is the current temperature. This rule is sketched below.
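The acceptance rule just described is the standard Metropolis criterion; a minimal sketch with generic names (accept, delta, T), not the authors' exact code, is:

```python
# Metropolis acceptance: improving or equal-cost neighbors are always accepted,
# worsening neighbors with probability exp(-delta / T).
import math
import random

def accept(delta, T, rng=random):
    """Return True if a neighbor with cost difference `delta` is accepted at temperature T."""
    if delta <= 0:                       # improves or maintains the objective: accept immediately
        return True
    return rng.random() < math.exp(-delta / T)

# tiny usage example
random.seed(0)
print(accept(-10.0, 100.0))                                     # always True
print(sum(accept(50.0, 100.0) for _ in range(10000)) / 10000)   # ~exp(-0.5) ≈ 0.607
```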
Parameter Control Mechanisms
- Initial temperature: Since SA explores almost the same neighborhood as HC2, the starting point of SA may be a local optimum. For this reason, a high initial temperature is established, based on a dedicated temperature parameter and on the value of the objective function relative to the total cost's lower bound, as in Equation (4). According to Equation (4), the temperature is obtained by taking this parameter and amplifying or reducing it depending on how far the current solution is from the cost lower bound relative to the initial solution. If the value of the objective function is very close to the lower bound, the normalizing factor takes a value below one, its square is even smaller, and the temperature decreases by this factor. In that case, the algorithm accepts solutions of worse quality but very close to the current one, maintaining the exploitation of the nearby zone instead of moving too far toward other regions of the search space. On the other hand, as the value of the objective function grows faster than the lower bound, the factor becomes greater than one, so the parameter is amplified and the temperature increases significantly, allowing exploration of the search space. It is important to mention that controlling the temperature through this factor allows the algorithm to adapt dynamically, regardless of the instance, because the factor is a normalized value and is therefore not affected by the different magnitudes of the cost values of different instances. For example, when the current objective value is far above the lower bound relative to the initial solution, the factor exceeds one, the temperature increases, and an exploration phase is promoted; conversely, when the objective value is close to the lower bound, the factor falls below one, the temperature decreases, and the algorithm focuses on an intensification phase. A hedged sketch of this idea is given after this list.
- Reheat: The algorithm also includes a reheating mechanism that is triggered when a certain number of iterations without improvement of the solution is reached. In that case, the temperature is set again using Equation (3).
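Since Equations (3) and (4) are not reproduced in this outline, the sketch below only illustrates the described behaviour: a base temperature parameter is scaled by a normalized gap factor whose square amplifies the temperature when the factor exceeds one (exploration) and shrinks it when the factor is below one (intensification). The exact formula, the parameter name t_param, and the numbers in the usage lines are assumptions, not the authors' equation.

```python
# A heavily hedged sketch of the temperature-control idea (assumed functional form).
def reset_temperature(t_param, f_current, f_initial, lower_bound):
    """Hypothetical temperature reset used at SA start and at each reheat (assumed form)."""
    ratio = (f_current - lower_bound) / max(f_initial - lower_bound, 1e-9)  # normalized gap factor
    return t_param * ratio ** 2

# The two regimes described in the text (all numbers are made up):
print(reset_temperature(t_param=1000.0, f_current=1500.0, f_initial=1200.0, lower_bound=1000.0))  # factor > 1: exploration
print(reset_temperature(t_param=1000.0, f_current=1050.0, f_initial=1200.0, lower_bound=1000.0))  # factor < 1: intensification
```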
5. Experiments
5.1. ROADEF/EURO Challenge 2012
Instances
5.2. Experiment Design
- Using the parameter values obtained with ParamILS, we compare our scores with the best results obtained for each instance of the final phase and with the results obtained by the top five teams of the competition.
- We also execute the improved versions of the algorithms of teams S41 [4] and S38 [7], which show the best results reported in the literature in the context of the competition. It is important to remark that we run these three algorithms (S41, S38, and SMaRT) on the same machine, using the same 31 seeds and a maximum execution time of 300 s. These algorithms are available online under an open-source license. The average of the results obtained over all seeds is used for the comparisons; a sketch of this computation is given after this list.
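As a small illustration of how the seed-wise comparison can be computed, the sketch below averages the objective values of one algorithm over its seed runs and converts them into a relative score against a reference cost. The score definition used here (percentage gap normalized by an initial-solution cost) and all numbers are illustrative assumptions, not necessarily the official ROADEF/EURO scoring formula.

```python
# Seed-averaged relative score for one instance (illustrative score definition).
from statistics import mean

def average_score(costs_per_seed, reference_cost, initial_cost):
    """Average relative score (in %) of one algorithm on one instance over its seed runs."""
    avg_cost = mean(costs_per_seed)
    return 100.0 * (avg_cost - reference_cost) / initial_cost

# hypothetical objective values for one instance and 3 of the 31 seeds
print(average_score([1010.0, 1005.0, 1020.0], reference_cost=1000.0, initial_cost=10000.0))
```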
5.3. Parameter Tuning
Parameter | Candidate Values |
---|---|
(HC1) | 10; 16; 17; 25; 33; 41; 45 |
(HC1) | 10; 16; 17; 25; 33; 41; 45 |
(HC2 and SA) | 500; 700; 1000; 1100; 1200; 1300 |
(SA) | 770,000; 790,000; 810,000; 830,000; 850,000; 880,000 |
Iterations without improvement before reheat (SA) | 15,000; 17,500; 18,900; 19,500; 20,000; 30,000 |
Iterations before cooling (SA) | 4000; 4200; 4400; 4600; 4800 |
(SA) | 0.96; 0.965; 0.97; 0.975; 0.98 |
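For reference, the candidate values above can be collected in a simple configuration-space dictionary. Several parameter names did not survive extraction, so the keys marked as hypothetical below are placeholder names chosen only for readability; the value lists are the ones from the table.

```python
# Candidate values from the tuning table; keys marked "hypothetical" are placeholder names.
import math

param_space = {
    "hc1_param_a (hypothetical)": [10, 16, 17, 25, 33, 41, 45],
    "hc1_param_b (hypothetical)": [10, 16, 17, 25, 33, 41, 45],
    "hc2_sa_param (hypothetical)": [500, 700, 1000, 1100, 1200, 1300],
    "sa_param (hypothetical)": [770_000, 790_000, 810_000, 830_000, 850_000, 880_000],
    "iterations_without_improvement_before_reheat": [15_000, 17_500, 18_900, 19_500, 20_000, 30_000],
    "iterations_before_cooling": [4_000, 4_200, 4_400, 4_600, 4_800],
    "sa_cooling_param (hypothetical)": [0.96, 0.965, 0.97, 0.975, 0.98],
}

# ParamILS searches this space heuristically; a full factorial grid would contain:
print(len(param_space), "parameters,",
      math.prod(len(v) for v in param_space.values()), "configurations")
```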
5.4. Comparison with the Best of ROADEF
5.5. Comparison with the Current Best Algorithms from the Literature: S41 and S38
Statistical Analysis
6. Conclusions
7. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Google ROADEF/EURO CHALLENGE. 2011–2012: Machine Reassignment Problem. Artificial Intelligence. 2012, pp. 9–47. Available online: http://challenge.roadef.org/2012/en/ (accessed on 1 June 2025).
- Canales, D.; Rojas-Morales, N.; Riff, M.C. A Survey and a Classification of Recent Approaches to Solve the Google Machine Reassignment Problem. IEEE Access 2020, 8, 88815–88829.
- Gavranović, H.; Buljubašić, M.; Demirović, E. Variable Neighborhood Search for Google Machine Reassignment problem. Electron. Notes Discret. Math. 2012, 39, 209–216.
- Gavranović, H.; Buljubašić, M. An efficient local search with noising strategy for Google Machine Reassignment problem. Ann. Oper. Res. 2014, 242, 19–31.
- Mehta, D.; O’Sullivan, B.; Simonis, H. Comparing Solution Methods for the Machine Reassignment Problem. In Principles and Practice of Constraint Programming, Proceedings of the 18th International Conference, CP 2012, Québec City, QC, Canada, 8–12 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 782–797.
- CPLEX. IBM ILOG CPLEX Optimization Studio. Available online: http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/ (accessed on 4 September 2020).
- Malitsky, Y.; Mehta, D.; O’Sullivan, B.; Simonis, H. Tuning Parameters of Large Neighborhood Search for the Machine Reassignment Problem. In Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, Proceedings of the 10th International Conference, CPAIOR 2013, Yorktown Heights, NY, USA, 18–22 May 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 176–192.
- Jaśkowski, W.; Szubert, M.; Gawron, P. A hybrid MIP-based large neighborhood search heuristic for solving the machine reassignment problem. Ann. Oper. Res. 2015, 242, 33–62.
- Portal, G.M. An Algorithmic Study of the Machine Reassignment Problem. Bachelor’s Thesis, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil, 2012.
- Hoffmann, R.; Riff, M.C.; Montero, E.; Rojas, N. Google challenge: A hyperheuristic for the Machine Reassignment problem. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 846–853.
- Masson, R.; Vidal, T.; Michallet, J.; Penna, P.H.V.; Petrucci, V.; Subramanian, A.; Dubedout, H. An iterated local search heuristic for multi-capacity bin packing and machine reassignment problems. Expert Syst. Appl. 2013, 40, 5266–5275.
- Gabay, M.; Zaourar, S. Vector bin packing with heterogeneous bins: Application to the machine reassignment problem. Ann. Oper. Res. 2015, 242, 161–194.
- Turky, A.; Sabar, N.R.; Song, A. Cooperative evolutionary heterogeneous simulated annealing algorithm for google machine reassignment problem. Genet. Program. Evolvable Mach. 2017, 19, 183–210.
- Turky, A.; Sabar, N.R.; Dunstall, S.; Song, A. Hyper-heuristic local search for combinatorial optimisation problems. Knowl.-Based Syst. 2020, 205, 106264.
- Zhang, J.; Yu, H.; Fan, G.; Li, Z.; Xu, J.; Li, J. Handling hierarchy in cloud data centers: A Hyper-Heuristic approach for resource contention and energy-aware Virtual Machine management. Expert Syst. Appl. 2024, 249, 123528.
- Luo, J.Y.; Chen, L.; Chen, W.K.; Yuan, J.H.; Dai, Y.H. A cut-and-solve algorithm for virtual machine consolidation problem. Future Gener. Comput. Syst. 2024, 154, 359–372.
- Durairaj, S.; Sridhar, R. Coherent virtual machine provisioning based on balanced optimization using entropy-based conjectured scheduling in cloud environment. Eng. Appl. Artif. Intell. 2024, 132, 108423.
- Zhang, L.; Zhang, X.; Xu, M.; Shao, L. Massive-Scale Aerial Photo Categorization by Cross-Resolution Visual Perception Enhancement. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 4017–4030.
- Liu, W.; Chang, X.; Chen, L.; Phung, D.; Zhang, X.; Yang, Y.; Hauptmann, A.G. Pair-based Uncertainty and Diversity Promoting Early Active Learning for Person Re-identification. ACM Trans. Intell. Syst. Technol. 2020, 11, 21.
- Li, J.; Lin, J. A probability distribution detection based hybrid ensemble QoS prediction approach. Inf. Sci. 2020, 519, 289–305.
- Mejahed, S.; Elshrkawey, M. A multi-objective algorithm for virtual machine placement in cloud environments using a hybrid of particle swarm optimization and flower pollination optimization. PeerJ Comput. Sci. 2022, 8, e834.
- Guérout, T.; Gaoua, Y.; Artigues, C.; Da Costa, G.; Lopez, P.; Monteil, T. Mixed integer linear programming for quality of service optimization in Clouds. Future Gener. Comput. Syst. 2017, 71, 1–17.
- Li, S.; Pan, X. Adaptive management and multi-objective optimization of virtual machine in cloud computing based on particle swarm optimization. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 102.
- Quan, Z.; Wang, Y.; Liu, X.; Ji, Z. Multi-objective evolutionary scheduling based on collaborative virtual workflow model and adaptive rules for flexible production process with operation reworking. Comput. Ind. Eng. 2024, 187, 109848.
- Wang, G.; Ben-Ameur, W.; Ouorou, A. A Lagrange decomposition based branch and bound algorithm for the optimal mapping of cloud virtual machines. Eur. J. Oper. Res. 2019, 276, 28–39.
- Yagiura, M.; Ibaraki, T.; Glover, F. An ejection chain approach for the generalized assignment problem. INFORMS J. Comput. 2004, 16, 133–151.
- Ray, A. There is no APTAS for 2-dimensional vector bin packing: Revisited. Inf. Process. Lett. 2024, 183, 106430.
- Portal, G.M.; Ritt, M.; Borba, L.M.; Buriol, L.S. Simulated annealing for the machine reassignment problem. Ann. Oper. Res. 2015, 242, 93–114.
- Murat Afsar, H.; Artigues, C.; Bourreau, E.; Kedad-Sidhoum, S. Machine reassignment problem: The ROADEF/EURO challenge 2012. Ann. Oper. Res. 2016, 242, 1–17.
- Hutter, F.; Hoos, H.; Leyton-Brown, K.; Stützle, T. ParamILS: An automatic algorithm configuration framework. J. Artif. Intell. Res. 2009, 36, 267–306.
- KhudaBukhsh, A.; Xu, L.; Hoos, H.; Leyton-Brown, K. SATenstein: Automatically building local search SAT solvers from components. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, CA, USA, 11–17 July 2009; pp. 517–524.
- Hoos, H.H. Automated Algorithm Configuration and Parameter Tuning. In Autonomous Search; Hamadi, Y., Monfroy, E., Saubion, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 37–71.
- Brandt, F.; Speck, J.; Völker, M. Constraint-based large neighborhood search for machine reassignment. Ann. Oper. Res. 2016, 242, 63–91.
Instance | Resources | Machines | Services | Processes | Locations | Neighborhoods | Best Cost |
---|---|---|---|---|---|---|---|
A1_1 | 2 | 4 | 79 | 100 | 4 | 1 | 44,306,501 |
A1_2 | 4 | 100 | 980 | 1000 | 4 | 2 | 777,532,896 |
A1_3 | 3 | 100 | 216 | 1000 | 25 | 5 | 583,005,717 |
A1_4 | 3 | 50 | 142 | 1000 | 50 | 50 | 252,728,589 |
A1_5 | 4 | 12 | 981 | 1000 | 4 | 4 | 727,578,309 |
A2_1 | 3 | 100 | 1000 | 1000 | 1 | 1 | 198 |
A2_2 | 12 | 100 | 170 | 1000 | 25 | 5 | 816,523,983 |
A2_3 | 12 | 100 | 129 | 1000 | 25 | 5 | 1,306,868,761 |
A2_4 | 12 | 50 | 180 | 1000 | 25 | 5 | 1,681,353,943 |
A2_5 | 12 | 50 | 153 | 1000 | 25 | 5 | 336,170,182 |
B_1 | 12 | 100 | 2512 | 5000 | 10 | 5 | 3,339,186,879 |
B_2 | 12 | 100 | 2462 | 5000 | 10 | 5 | 1,015,553,800 |
B_3 | 6 | 100 | 15,025 | 20,000 | 10 | 5 | 156,835,787 |
B_4 | 6 | 500 | 1732 | 20,000 | 50 | 5 | 4,677,823,040 |
B_5 | 6 | 100 | 35,082 | 40,000 | 10 | 5 | 923,092,380 |
B_6 | 6 | 200 | 14,680 | 40,000 | 50 | 5 | 9,525,857,752 |
B_7 | 6 | 4000 | 15,050 | 40,000 | 50 | 5 | 14,835,149,752 |
B_8 | 3 | 100 | 45,030 | 50,000 | 10 | 5 | 1,214,458,817 |
B_9 | 3 | 1000 | 4609 | 50,000 | 100 | 5 | 15,885,486,698 |
B_10 | 3 | 5000 | 4896 | 50,000 | 100 | 5 | 18,048,515,118 |
X_1 | 12 | 100 | 2529 | 5000 | 10 | 5 | 3,100,852,728 |
X_2 | 12 | 100 | 2484 | 5000 | 10 | 5 | 1,002,502,119 |
X_3 | 6 | 100 | 14,928 | 20,000 | 10 | 5 | 211,656 |
X_4 | 6 | 500 | 1190 | 20,000 | 50 | 5 | 4,721,629,497 |
X_5 | 6 | 100 | 34,872 | 40,000 | 10 | 5 | 93,823 |
X_6 | 6 | 200 | 14,504 | 40,000 | 50 | 5 | 9,546,941,232 |
X_7 | 6 | 4000 | 15,273 | 40,000 | 50 | 5 | 14,253,273,178 |
X_8 | 3 | 100 | 44,950 | 50,000 | 10 | 5 | 42,674 |
X_9 | 3 | 1000 | 4871 | 50,000 | 100 | 5 | 16,125,612,590 |
X_10 | 3 | 5000 | 4615 | 50,000 | 100 | 5 | 17,816,514,161 |
Instance | 1st | 2nd | 3rd | 4th | 5th |
---|---|---|---|---|---|
Qualification Phase (Instances A) | 1.73 (J17) | 3.58 (S25) | 5.92 (S38 [5]) | 9.55 (S23 [9]) | 10.97 (S21) |
Final Phase (Instances B and X) | 0.47 (S41 [3]) | 0.62 (S38 [5]) | 1.72 (J12 [8]) | 2.60 (J25 [33]) | 3.91 (S14) |
Inst. | Best ROADEF OF Value | Avg. SMaRT OF Value | Avg. SMaRT Score | Best SMaRT OF Value | Best SMaRT Score | Worst SMaRT OF Value | Worst SMaRT Score |
---|---|---|---|---|---|---|---|
B_1 | 3,339,186,879 | 3,331,636,980 | −0.099 | 3,307,834,020 | −0.410 | 3,346,269,668 | 0.093 |
B_2 | 1,015,553,800 | 1,017,433,717 | 0.036 | 1,015,737,594 | 0.004 | 1,023,968,682 | 0.162 |
B_3 | 156,835,787 | 157,745,897 | 0.014 | 156,954,657 | 0.002 | 161,511,011 | 0.074 |
B_4 | 4,677,823,040 | 4,677,839,013 | 0.000 | 4,677,836,082 | 0.000 | 4,677,841,783 | 0.000 |
B_5 | 923,092,380 | 923,571,048 | 0.004 | 923,164,298 | 0.001 | 925,179,715 | 0.017 |
B_6 | 9,525,857,752 | 9,525,875,819 | 0.000 | 9,525,872,113 | 0.000 | 9,525,881,250 | 0.000 |
B_7 | 14,835,149,752 | 14,839,939,397 | 0.013 | 14,837,729,106 | 0.007 | 14,842,870,156 | 0.020 |
B_8 | 1,214,458,817 | 1,214,586,177 | 0.001 | 1,214,524,830 | 0.000 | 1,214,834,914 | 0.003 |
B_9 | 15,885,486,698 | 15,885,494,399 | 0.000 | 15,885,489,036 | 0.000 | 15,885,503,216 | 0.000 |
B_10 | 18,048,515,118 | 18,050,104,654 | 0.004 | 18,049,486,933 | 0.002 | 18,051,600,120 | 0.007 |
X_1 | 3,100,852,728 | 3,044,235,737 | −0.763 | 3,036,116,625 | −0.872 | 3,051,611,633 | −0.663 |
X_2 | 1,002,502,119 | 1,004,505,802 | 0.039 | 1,003,177,420 | 0.013 | 1,011,746,985 | 0.181 |
X_3 | 211,656 | 1,168,585 | 0.016 | 362,736 | 0.002 | 3,059,316 | 0.047 |
X_4 | 4,721,629,497 | 4,721,651,064 | 0.000 | 4,721,644,289 | 0.000 | 4,721,656,075 | 0.000 |
X_5 | 93,823 | 165,875 | 0.001 | 150,492 | 0.000 | 184,078 | 0.001 |
X_6 | 9,546,941,232 | 9,546,959,553 | 0.000 | 9,546,955,937 | 0.000 | 9,546,962,780 | 0.000 |
X_7 | 14,253,273,178 | 14,265,996,983 | 0.034 | 14,255,873,223 | 0.007 | 14,275,325,679 | 0.058 |
X_8 | 42,674 | 77,520 | 0.000 | 68,961 | 0.000 | 83,875 | 0.000 |
X_9 | 16,125,612,590 | 16,125,628,337 | 0.000 | 16,125,623,667 | 0.000 | 16,125,636,321 | 0.000 |
X_10 | 17,816,514,161 | 17,820,028,260 | 0.008 | 17,817,696,217 | 0.003 | 17,823,791,123 | 0.017 |
Total BX | | | −0.691 | | −1.240 | | 0.018 |
Inst. | S41 | S38 | J12 | J25 | S14 | Avg. SMaRT | Best SMaRT | Worst SMaRT |
---|---|---|---|---|---|---|---|---|
B_1 | 0.409 | 0.000 | 0.916 | 0.958 | 1.273 | −0.099 | −0.410 | 0.093 |
B_2 | 0.000 | 0.156 | 0.001 | 0.236 | 0.034 | 0.036 | 0.004 | 0.162 |
B_3 | 0.013 | 0.009 | 0.001 | 0.098 | 0.115 | 0.014 | 0.002 | 0.074 |
B_4 | 0.002 | 0.000 | 0.002 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 |
B_5 | 0.009 | 0.004 | 0.001 | 0.115 | 0.344 | 0.004 | 0.001 | 0.017 |
B_6 | 0.000 | 0.000 | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 |
B_7 | 0.000 | 0.011 | 0.023 | 0.053 | 0.085 | 0.013 | 0.007 | 0.020 |
B_8 | 0.001 | 0.001 | 0.056 | 0.017 | 0.087 | 0.001 | 0.000 | 0.003 |
B_9 | 0.001 | 0.001 | 0.001 | 0.003 | 0.001 | 0.000 | 0.000 | 0.000 |
B_10 | 0.000 | 0.015 | 0.006 | 0.028 | 0.001 | 0.000 | 0.002 | 0.007 |
X_1 | 0.000 | 0.055 | 0.645 | 0.489 | 1.435 | −0.763 | −0.872 | −0.663 |
X_2 | 0.019 | 0.242 | 0.018 | 0.310 | 0.046 | 0.039 | 0.013 | 0.181 |
X_3 | 0.003 | 0.008 | 0.000 | 0.065 | 0.063 | 0.016 | 0.002 | 0.047 |
X_4 | 0.002 | 0.000 | 0.003 | 0.001 | 0.001 | 0.000 | 0.000 | 0.000 |
X_5 | 0.001 | 0.000 | 0.000 | 0.002 | 0.168 | 0.001 | 0.000 | 0.001 |
X_6 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
X_7 | 0.005 | 0.098 | 0.032 | 0.127 | 0.000 | 0.034 | 0.007 | 0.058 |
X_8 | 0.001 | 0.000 | 0.000 | 0.001 | 0.259 | 0.000 | 0.000 | 0.000 |
X_9 | 0.001 | 0.001 | 0.001 | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 |
X_10 | 0.000 | 0.024 | 0.013 | 0.094 | 0.001 | 0.008 | 0.003 | 0.017 |
Total BX | 0.47 | 0.62 | 1.72 | 2.60 | 3.91 | −0.691 | −1.240 | 0.018 |
Instance | S41 Average | S41 Best | S38 Average | S38 Best | SMaRT Average | SMaRT Best |
---|---|---|---|---|---|---|
B_1 | 0.165 | −0.409 | −0.025 | −0.036 | −0.099 | −0.410 |
B_2 | 0.000 | −0.001 | 0.133 | 0.127 | 0.036 | 0.004 |
B_3 | 0.010 | 0.004 | 0.008 | 0.007 | 0.014 | 0.002 |
B_4 | 0.002 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 |
B_5 | 0.003 | 0.001 | 0.002 | 0.002 | 0.004 | 0.001 |
B_6 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
B_7 | 0.000 | 0.000 | 0.011 | 0.010 | 0.013 | 0.007 |
B_8 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 | 0.000 |
B_9 | 0.001 | 0.000 | 0.001 | 0.001 | 0.000 | 0.000 |
B_10 | 0.000 | 0.000 | 0.014 | 0.005 | 0.004 | 0.002 |
X_1 | −0.319 | −0.804 | 0.025 | 0.019 | −0.763 | −0.872 |
X_2 | 0.011 | 0.001 | 0.228 | 0.225 | 0.039 | 0.013 |
X_3 | 0.002 | 0.001 | 0.006 | 0.006 | 0.016 | 0.002 |
X_4 | 0.002 | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 |
X_5 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 | 0.000 |
X_6 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
X_7 | 0.000 | 0.000 | 0.092 | 0.061 | 0.034 | 0.007 |
X_8 | 0.001 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 |
X_9 | 0.001 | 0.001 | 0.001 | 0.001 | 0.000 | 0.000 |
X_10 | 0.000 | −0.001 | 0.020 | 0.015 | 0.008 | 0.003 |