Task Offloading Strategy of Multi-Objective Optimization Algorithm Based on Particle Swarm Optimization in Edge Computing
Abstract
1. Introduction
- This study proposes a novel multi-objective optimization model for DNN task offloading in an end–edge–cloud collaborative framework. This model simultaneously minimizes execution time, energy consumption, and cloud leasing cost, and innovatively incorporates time constraints as a hard indicator in the fitness calculation.
- This study designs an enhanced MOPSO algorithm to solve the proposed model. Key improvements include a hybrid encoding scheme that seamlessly represents complex offloading decisions, a dynamic temperature regulation strategy, and an adaptive restart mechanism. Together, these enhancements prevent the premature convergence common to traditional PSO, as evidenced by a 42.6% higher Global Search Capability Index (GSCI).
- This study conducts extensive simulations on six mainstream DNN models (e.g., the VGG series) to validate the approach. The results demonstrate that the proposed strategy significantly outperforms existing baselines, reducing average execution time by 58.6%, energy consumption by 61.8%, and cloud cost by 36.1%.
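The treatment of the deadline as a hard indicator in the fitness calculation can be illustrated with a small sketch. The function below is hypothetical (the weight values and penalty magnitude are placeholders, not the paper's exact formulation): a plan that violates the time constraint receives a large additive penalty, so any feasible plan always dominates it.

```python
def fitness(exec_time, energy, cloud_cost, deadline,
            w_cost=0.5, w_time=0.3, w_energy=0.2, penalty=1.0e6):
    """Scalarized multi-objective fitness (lower is better).

    Illustrative only: weights and penalty are placeholder values.
    The deadline acts as a hard constraint via a heavy penalty.
    """
    score = w_cost * cloud_cost + w_time * exec_time + w_energy * energy
    if exec_time > deadline:   # hard time constraint violated
        score += penalty
    return score
```

Because the penalty dwarfs every weighted objective term, the selection step never prefers an infeasible plan over a feasible one.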
2. Materials and Methods
2.1. System Model
- User Equipment: This refers to the physical device that generates the data and ultimately consumes the computation results;
- Edge Server: This refers to the computing infrastructure deployed at the network edge;
- Cloud Server: This refers to virtualized or physical server clusters located in remote data centers, possessing powerful and elastic computational resources.
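The three tiers could be modeled as plain records whose compute capability, power draw, and rental price grow from the user device toward the cloud, while proximity to the data shrinks. The numeric values below are illustrative placeholders, not figures from the paper:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    compute_rate: float   # e.g., GFLOPS available for DNN layers
    power_watts: float    # active power draw
    hourly_price: float   # 0 for user-owned hardware

# Illustrative three-tier hierarchy (placeholder numbers):
user_device = Node("user", compute_rate=10.0, power_watts=5.0, hourly_price=0.0)
edge_server = Node("edge", compute_rate=100.0, power_watts=80.0, hourly_price=0.2)
cloud_server = Node("cloud", compute_rate=1000.0, power_watts=300.0, hourly_price=1.5)
```

This capability/price gradient is exactly what makes the offloading decision non-trivial: the fastest tier is also the most expensive and the farthest from the data.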
2.2. DNNs Computing Task Model
2.3. Load Model
2.4. Task Offloading Time Model
2.5. Task Offloading Energy Consumption Model
2.6. Task Offloading Cost Model
2.7. Problem Formulation
3. Task Offloading Strategy Based on Multi-Objective Particle Swarm Optimization Algorithm
3.1. Multi-Objective Particle Swarm Optimization
3.2. Algorithm Design
3.2.1. Hybrid Encoding Scheme and Justification
- The number of servers to rent (Server Count);
- The hardware type for each server (Server Type);
- The purchasing model (e.g., on-demand or reserved) for each server (Purchase Mode);
- The mapping of each DNN layer task to an execution device (Task Mapping).
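Under a hybrid encoding, a particle is a vector of floats in [0, 1] that is decoded into the four discrete decision components above. The layout below is an illustrative assumption (one slot each for server count, hardware type, and purchase mode, then one slot per DNN layer), not necessarily the paper's exact scheme:

```python
def decode_particle(position, n_types, n_modes, n_devices, max_servers):
    """Decode a continuous particle position into a discrete offloading plan.

    Illustrative layout assumption: position[0] encodes the server count,
    position[1] the hardware type, position[2] the purchase mode, and the
    remaining entries map one DNN layer each onto a device index.
    """
    # min(..., k - 1) guards the boundary case position == 1.0
    count = min(int(position[0] * max_servers), max_servers - 1) + 1
    stype = min(int(position[1] * n_types), n_types - 1)
    mode = min(int(position[2] * n_modes), n_modes - 1)
    mapping = [min(int(p * n_devices), n_devices - 1) for p in position[3:]]
    return count, stype, mode, mapping
```

Decoding keeps the swarm itself purely continuous, so the standard PSO velocity and position updates apply unchanged while the fitness is always evaluated on a valid discrete plan.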
3.2.2. Transformation Rules and Examples
3.3. Simulation Algorithm
Algorithm 1 Simulation Algorithm
Input: task topology graph task; execution time execution_time of each layer task on each device; task allocation method task_allocation; instance type instance_id; task quantity task_num; output data volume task_out of each layer; input data volume task_in of the first layer
Output: the completion time finish_time of each layer task, and the usage time time_span of each ordered instance
1. Set the current task to the first-layer task;
2. while (there are tasks that have not been computed)
3.  if (the current task is a first-layer task)
4.   set the ready time of the current task to the transmission time of task_in;
5.  end if
6.  start time of the current task = max(ready time of the current task, ready time of the server assigned to the current task);
7.  completion time = start time + execution time;
8.  usage time of the assigned server = usage time of the assigned server + task execution time;
9.  determine the next-layer task of the current task through the adjacency matrix;
10.  compute the transmission time of the current layer's output data volume based on the server type assigned to the next-layer task;
11.  ready time of the next-layer task = completion time of the current task + transmission time of the output data volume;
12. end while
13. return the completion time of each layer task and the usage time of each ordered server;
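Algorithm 1 can be sketched in Python as follows, simplified to a linear layer chain (the adjacency-matrix topology in the algorithm generalizes this); all names and data layouts are chosen for illustration:

```python
def simulate(exec_time, assign, trans_time, input_trans):
    """Finish times for a chain of DNN layer tasks (Algorithm 1 sketch).

    exec_time[i][d]  - run time of layer i on device d
    assign[i]        - device chosen for layer i
    trans_time[i][d] - time to move layer i's output to device d
    input_trans      - time to upload the first layer's input
    """
    finish = []          # completion time of each layer
    usage = {}           # accumulated execution time per device (for billing)
    avail = {}           # time at which each device next becomes free
    ready = input_trans  # the first layer waits for its input upload
    for i, dev in enumerate(assign):
        start = max(ready, avail.get(dev, 0.0))  # data ready AND device free
        end = start + exec_time[i][dev]
        avail[dev] = end
        usage[dev] = usage.get(dev, 0.0) + exec_time[i][dev]
        finish.append(end)
        if i + 1 < len(assign):  # ship this layer's output to the next device
            ready = end + trans_time[i][assign[i + 1]]
    return finish, usage
```

Note the two distinct server quantities the algorithm tracks: availability (when the device is free, used for scheduling) and usage (total busy time, used for instance billing).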
3.4. The Overall Flow of the Algorithm
Algorithm 2 Multi-objective optimization algorithm based on particle swarm optimization
Input: maximum number of iterations MaxNum; population size particlesize; convolutional neural network C; annual reserved rental price reserved_price; on-demand rental price on_demand_price; server energy consumption rate W; data transmission rate translate_rate
Output: the offloading plan best that jointly optimizes energy consumption, time consumption, and price, and its fitness value best_value
1. Randomly initialize the particle swarm; each particle obtains an initial position xx and flight velocity v;
2. Run the offloading strategy generation algorithm, transforming particle positions into offloading strategy schemes;
3. Calculate the initial fitness value f of each particle;
4. Initialize each particle's individual best fitness value f_best and position x;
5. Find the group's best fitness value Gbest_value and position Gbest;
6. best_all = Gbest_value;  // best historical value
7. Identify the positions p1, p2, and p3 of the top three particles in the group;
8. Initialize the temperature T = max(f) - min(f);
9. Initialize the cooling factor a = 0.9;
10. while (T == 0)
11.  reinitialize the particle swarm;
12.  reinitialize the temperature T;
13. end while
14. time_num = MaxNum;
15. time = 1;
16. while (time != time_num)
17.  for k = 1 : particlesize
18.   update the particle velocity based on the positions of the top three particles;
19.   clamp each dimension of the particle velocity to [-vmax, vmax];
20.   update the particle position xx;
21.   clamp each dimension of the particle position to [0, 1];
22.   run the offloading strategy generation algorithm, transforming the particle position into an offloading strategy scheme;
23.   calculate the fitness value f;
24.   compare f_best and f; accept the better value and position with probability 1, and the worse value and position with probability p;
25.  end for
26.  find the group's best fitness value Gbest1_value and position Gbest1;
27.  identify the positions p1, p2, and p3 of the top three particles in the group;
28.  compare Gbest_value and Gbest1_value; accept the better value and position with probability 1, and the worse value and position with probability p;
29.  cool the temperature: T = aT;
30.  time = time + 1;
31.  if (time == time_num && Gbest_value > best_all)
32.   // the final result is worse than the historical best: increase the iteration budget and continue
33.   time_num = time_num + MaxNum;
34.   T = T / a;
35.   a = 0.9a;  // accelerate the cooling
36.   if (a < 0.5)
37.    // too many iterations without reaching the historical best: restart
38.    randomly reinitialize the particle positions xx and flight velocities v;
39.    calculate the particle fitness values f;
40.    reinitialize the temperature T;
41.   end if
42.  end if
43.  if (Gbest_value < best_all)
44.   best_all = Gbest_value;
45.  end if
46. end while
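The core loop of Algorithm 2 can be sketched as follows. This is a deliberately simplified, single-objective illustration of the main ideas (velocity update guided by the top three particles, Metropolis-style acceptance of worse personal bests at temperature T, and geometric cooling); the adaptive-restart branch is omitted for brevity, and all parameter values are placeholders:

```python
import math
import random

def mopso(fitness, dim, particles=30, max_iter=50,
          c=(1.0, 1.0, 1.0), w=0.6, vmax=0.5, alpha=0.9, seed=0):
    """Simplified sketch of the annealing-enhanced PSO loop."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    best_i = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[best_i][:], pbest_f[best_i]  # historical best (best_all)
    T = (max(pbest_f) - min(pbest_f)) or 1.0            # initial temperature

    for _ in range(max_iter):
        top3 = sorted(range(particles), key=lambda i: pbest_f[i])[:3]
        p1, p2, p3 = (pbest[i] for i in top3)
        for i in range(particles):
            for d in range(dim):
                # velocity pulled toward the three best particles
                V[i][d] = w * V[i][d] + sum(
                    ck * rng.random() * (p[d] - X[i][d])
                    for ck, p in zip(c, (p1, p2, p3)))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))         # velocity bound
                X[i][d] = max(0.0, min(1.0, X[i][d] + V[i][d]))  # position bound
            fi = fitness(X[i])
            df = fi - pbest_f[i]
            if df < 0 or rng.random() < math.exp(-df / T):  # SA-style acceptance
                pbest[i], pbest_f[i] = X[i][:], fi
            if fi < gbest_f:                                # best_all never degrades
                gbest, gbest_f = X[i][:], fi
        T *= alpha                                          # geometric cooling
    return gbest, gbest_f
```

Early on, the high temperature lets personal bests occasionally accept worse positions, which keeps the swarm exploring; as T cools, the acceptance probability exp(-df/T) collapses and the search becomes greedy, while the separately tracked historical best guarantees the returned plan is never worse than anything seen.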
4. Simulation Experiment and Analysis
4.1. Experimental Environment and Parameter Settings
4.2. Experimental Results and Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Parameter Name | Parameter Value |
---|---
Number of iterations | 20 |
Data volume size | [0.01, 100] MB |
Particle swarm size (particlesize) | [50, 100] |
Individual empirical learning factors | 1, 1, 1, 3, 4, 3 |
LAN bandwidth | 200 Mbps |
WAN bandwidth | 10 Mbps |
Cooling factor (alpha) | 0.9 |
Inertia factor (w) | [0.4, 0.9] |
Maximum flight speed (vmax, vvmax) | 0.5 |
Server subscription cost weights (f1) | 0.5 |
Execution time weights (f2) | 0.3 |
Energy consumption weights (f3) | 0.2 |
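The LAN and WAN bandwidth settings above drive the transmission-time terms of the model. A quick check of the implied transfer times (data volumes are in megabytes, bandwidths in megabits per second, hence the factor of 8):

```python
def transfer_time(data_mb, bandwidth_mbps):
    """Transmission time in seconds for data_mb megabytes over a link of
    bandwidth_mbps megabits per second (8 bits per byte)."""
    return data_mb * 8 / bandwidth_mbps

# With the table's settings, shipping the maximum 100 MB payload takes
# 4 s over the 200 Mbps LAN but 80 s over the 10 Mbps WAN.
lan = transfer_time(100, 200)  # 4.0 s
wan = transfer_time(100, 10)   # 80.0 s
```

This 20x asymmetry between edge and cloud links is what makes the offloading decision sensitive to per-layer output data volumes.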
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, L.; Wang, S.; Zhang, W.; Jing, B.; Yu, X.; Tang, Z.; Wang, W. Task Offloading Strategy of Multi-Objective Optimization Algorithm Based on Particle Swarm Optimization in Edge Computing. Appl. Sci. 2025, 15, 9784. https://doi.org/10.3390/app15179784