Mathematics
  • Article
  • Open Access

2 July 2023

A New Gaining-Sharing Knowledge Based Algorithm with Parallel Opposition-Based Learning for Internet of Vehicles

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Information Management, Chaoyang University of Technology, 168, Jifeng E. Rd., Wufeng District, Taichung 41349, Taiwan
3 College of Science and Engineering, Flinders University, Bedford Park, SA 5042, Australia
4 College of Computer and Data Science, Fuzhou University, Xueyuan Road No.2, Fuzhou 350116, China

Abstract

Heuristic optimization algorithms have proved powerful in solving nonlinear and complex optimization problems; consequently, many effective optimization algorithms have been applied to real-world optimization problems. This paper presents a modification of the recently proposed Gaining–Sharing Knowledge (GSK)-based algorithm and applies it to optimize resource scheduling in the Internet of Vehicles (IoV). The GSK algorithm simulates how humans gain and share knowledge during different phases of life, which are mainly divided into a junior phase and a senior phase. An individual starts in the junior phase in all dimensions and gradually moves into the senior phase as it interacts with its environment. The main idea used to improve the GSK algorithm is to divide the initial population into different groups, each searching independently and communicating according to two main strategies. Opposition-based learning is introduced to correct the direction of convergence and improve the convergence speed. The resulting algorithm is named the parallel opposition-based Gaining–Sharing Knowledge-based algorithm (POGSK). POGSK is compared with the original algorithm and several classical algorithms on the CEC2017 test suite, and the results show that it significantly improves the performance of the original algorithm. When POGSK is applied to optimize resource scheduling in the IoV, the results also show that it is more competitive.

1. Introduction

Heuristic optimization algorithms have been studied extensively over the past three decades and have been shown to solve a wide variety of complex, nonlinear optimization problems. These methods are easy to use and do not require a mathematical analysis of the optimization problem. Compared with traditional methods, they are flexible, gradient-free and better at avoiding being trapped in local optima [1]. These features have drawn a large number of researchers to algorithm design. Most heuristic algorithms are inspired by observations of animals, plants and other natural phenomena: the algorithm solves an optimization problem by simulating growth and evolution. Heuristics are also widely used in real scenarios, such as path planning [2], wireless sensor network localization [3], wireless sensor network routing [4], airport gate assignment [5] and cloud computing workflow scheduling [6].
Heuristic algorithms can be divided into four categories [7]. The first category comprises algorithms based on swarm intelligence techniques. Much of the inspiration for such algorithms comes from observations of social animals: each individual in a population exhibits a certain degree of independence while still interacting with the whole group. Representative algorithms include particle swarm optimization (PSO) [8], the phasmatodea population evolution algorithm (PPE) [9], the Gannet optimization algorithm (GOA) [10], the grey wolf optimizer (GWO) [11] and cat swarm optimization (CSO) [12]. The second category comprises algorithms based on evolutionary techniques. Inspired by biological evolution, these methods iterate an initially random population through crossover, mutation, selection and other operations. Representative algorithms include the genetic algorithm (GA) [13], the differential evolution algorithm (DE) [14] and the quantum evolutionary algorithm (QEA) [15]. The third category comprises algorithms based on physical phenomena, which simulate the laws governing natural processes. Representative algorithms include the Archimedes optimization algorithm (AOA) [16], the simulated annealing algorithm (SAA) [17] and the sine cosine algorithm (SCA) [18]. The fourth category comprises algorithms based on human-related behavior: each person is an independent, intelligent and rational individual with unique physical and psychological behavior. Representative algorithms include the Teaching–Learning-Based Optimization algorithm (TLBO) [19] and the Gaining–Sharing Knowledge-based algorithm (GSK) [7].
The original GSK simulates how a person acquires and shares knowledge throughout life, culminating in the maturation of the individual [20]. Its authors divide human life into two distinct phases: the junior phase, corresponding to childhood, and the senior phase, corresponding to adulthood. The strategies for acquiring and sharing knowledge differ between these phases. At the beginning of the algorithm, individuals tend to use a relatively naive method to acquire and share knowledge, but not in all disciplines (i.e., not in all dimensions of the solution): in some disciplines, individuals already use a relatively advanced method. As individuals grow and the algorithm enters its middle stage, learning relies mostly on the advanced method, while a few disciplines still use the naive method. Individuals thus pass through the two stages, alternating between the naive and advanced strategies to update their knowledge in each discipline, and eventually reach maturity, which is when they find their optimal position. When presenting the GSK algorithm, Ali Wagdy Mohamed demonstrated its powerful optimization capabilities on the CEC2017 test suite. Although GSK converges well when solving optimization problems, there is room for improvement in avoiding locally optimal solutions and in convergence speed. To further improve its performance, we propose several approaches to be incorporated into the GSK algorithm, described in turn below, and conduct experiments to demonstrate that these approaches are effective.
Parallel processing aims to produce the same results using multiple processors, with the goal of reducing running time [21]. Since we do not rely on physical parallel hardware in the optimization algorithm, we adopt a logical parallel mechanism instead: the initial population is divided into several groups, each group performs iterative updates independently and the groups communicate regularly. The parallel mechanism has been widely applied, including in the parallel particle swarm optimization algorithm (Chu S.C., 2005) [21] and parallel genetic algorithms [22]. Parallel strategies are also used in multi-objective optimization; for example, Cai D. proposed an evolutionary algorithm based on uniformity and contraction for many-objective optimization [23], which uses a parallel mechanism to enhance local search ability.
The communication strategies between groups can be varied to suit different algorithms. This paper presents a communication strategy based on the Taguchi method, whose main idea is to use a pre-designed orthogonal table for crossover experiments. Compared with a full factorial experiment, it achieves almost the same effect while greatly reducing the number of experiments, thereby lowering experimental cost and improving efficiency [24]. It has been successfully applied to improve the genetic algorithm (Jinn-Tsong Tsai, 2004) [25], the Archimedes optimization algorithm (Shi-Jie Jiang, 2022) [26], the cat swarm optimization algorithm (Tsai P.W., 2012) [24], etc. In addition, opposition-based learning (OBL) is incorporated into GSK. The concept of OBL was proposed by Tizhoosh (2005) [27] and has since been introduced into several classical optimization algorithms, including grey wolf optimization (Dhargupta S., 2020) [28,29], the differential evolution algorithm (Rahnamayan S., 2008; Deng W., 2021) [30,31], particle swarm optimization (Wang H., 2011) [32] and the grasshopper optimization algorithm (Ahmed A. Ewees, 2018) [33].
The performance of any optimization or evolutionary algorithm can only be judged convincingly through extensive benchmark function tests [34]. Diverse and difficult test problems are required for this purpose, and the CEC2017 test suite [35] is a widely accepted benchmark. Before an optimization algorithm can be applied to complex real-world problems, it must be able to solve single-objective optimization problems effectively. The CEC2017 test suite contains 30 single-objective real-parameter numerical optimization problems. Compared with CEC2013 and CEC2014, CEC2017 introduces test problems with new characteristics, such as new basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkage and rotated trap problems. In this paper, the CEC2017 test suite is used to evaluate the proposed algorithm (POGSK), the original algorithm and several classical algorithms.
The IoV enables vehicles on the road to exchange information with roadside units (RSUs) [36], so users can expect quick, comprehensive and convenient services, such as road condition information, traffic jam information, traffic flow information and city entertainment information. However, traditional resource-allocation strategies may not provide satisfactory Quality of Service (QoS) owing to several factors, including resource constraints, network transmission delays and the deployment of RSUs. Scheduling problems can be solved using various methods, which can be roughly grouped into three categories: exact, approximate and heuristic [37]. Exact methods evaluate every solution in the search space to find the optimum, which is obviously feasible only for small-scale problems. Approximation methods rely on mathematical rules to find the optimal solution, requiring a different analysis for each problem; in most cases, however, a mathematical analysis of the problem is difficult. The heuristic approach is therefore a sensible option. The main objective of scheduling algorithms is to find the best resources in the cloud for the applications (tasks) of end users, which improves the QoS parameters and resource utilization [38]. To solve this optimization problem, this paper applies the heuristic algorithm POGSK to resource scheduling in the IoV.
The main contributions of this paper are as follows:
1. An improved Gaining–Sharing Knowledge-based algorithm (POGSK) is proposed, which combines a parallel strategy with an OBL strategy. The parallel strategy increases population diversity so that the algorithm can effectively avoid local optima, while the OBL strategy corrects the convergence direction and improves convergence accuracy.
2. A new inter-group communication scheme is designed, combining the Taguchi communication strategy with the population-merger communication strategy. This enables efficient exchange of information between subpopulations and avoids the loss of performance caused by the reduced number of individuals in each subpopulation.
3. POGSK is used in the resource-scheduling problems of the IoV to improve QoS, which can reflect the performance of the algorithm in real scenarios. Simulation results show that POGSK is more competitive than other algorithms.

3. Proposed Algorithm and Its Application

3.1. Parallel Communication Strategy

Choosing a suitable communication strategy is critical in a parallel strategy, since it facilitates information exchange between groups and thereby enhances their search capability. In this paper, two primary communication strategies are used. The first is controlled by the communication control factor R: if the random number generated during communication is greater than R, all groups are matched in pairs and the following Taguchi communication is performed:
1. Select the optimal solution of each of the two groups and choose an appropriate orthogonal table based on the number of levels and factors. For example, the experiment is two-level if it involves two candidates and seven-factor if each candidate contains seven influencing factors (dimensions). Conduct the experiments according to the orthogonal table and calculate the fitness value of each new individual.
2. For each dimension, calculate the cumulative fitness of the two levels separately and select the level with the better fitness as the candidate for that dimension. Combine the selected candidates to produce an optimal individual.
When the random number is less than R, the optimal solution within each group is compared with the global optimal solution according to the following steps (a sketch of this dispatch follows the list):
1. If it is worse than the global optimal solution, the intra-group optimal solution is replaced by the global optimal solution.
2. If it is better than or equal to the global optimal solution, a random mutation operation is performed.
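To make the control flow of this first communication strategy concrete, the following Python sketch shows one possible implementation of the R-controlled dispatch. The group representation, the decision to inject the Taguchi-combined individual in place of each group's worst member, and the helper callables `taguchi_combine` and `random_mutation` (one possible `taguchi_combine` is sketched after Table 2) are our own illustrative assumptions, not the authors' code; minimisation is assumed throughout.

```python
import numpy as np

def communicate(groups, R, fitness, taguchi_combine, random_mutation):
    """One inter-group communication step controlled by the factor R (sketch).

    groups: list of 2-D NumPy arrays, one subpopulation per entry.
    taguchi_combine and random_mutation are assumed helper callables.
    """
    bests = [g[np.argmin([fitness(x) for x in g])] for g in groups]  # intra-group bests
    gbest = min(bests, key=fitness)                                  # global best
    if np.random.rand() > R:
        # Pair the groups and run the Taguchi communication on each pair of bests.
        for i, j in zip(range(0, len(groups), 2), range(1, len(groups), 2)):
            improved = taguchi_combine(bests[i], bests[j], fitness)
            for k in (i, j):
                worst = np.argmax([fitness(x) for x in groups[k]])
                groups[k][worst] = improved          # inject the combined individual (our choice)
    else:
        # Compare each intra-group best with the global best.
        for g in groups:
            idx = np.argmin([fitness(x) for x in g])
            if fitness(g[idx]) > fitness(gbest):     # worse than the global best: replace it
                g[idx] = gbest
            else:                                    # better or equal: random mutation
                g[idx] = random_mutation(g[idx])
    return groups
```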
Every individual has the potential to excel in some dimensions; the Taguchi method can efficiently identify these excellent dimensions and combine them. The communication process described above is illustrated with an example. The fitness function is assumed to be:
$f(x) = \sum_{i=1}^{n} x_i^2$
Suppose the search goal is to find the smallest fitness value. The two candidate individuals are shown in Table 1. The Taguchi orthogonal experiment is a two-level, seven-factor experiment employing the $L_8(2^7)$ orthogonal table shown in Equation (3). Table 2 depicts the specific operation process: the cumulative fitness value of each candidate solution is computed according to whether that solution is selected in the orthogonal table. For example, in the first dimension of Table 2 the candidate solution $x_2$ is used in experiments 5 to 8, so the cumulative fitness value of the first dimension of $x_2$ is the sum of the fitness values of experiments 5 to 8 (a sketch of this combination step is given after Table 2).
Table 1. The position of the candidate individuals.
Table 2. The Taguchi method is used to produce better individuals.
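As an illustration of how the $L_8(2^7)$ array can splice two candidate solutions dimension by dimension, here is a hedged Python sketch. The hard-coded array is the standard two-level, seven-factor orthogonal table and the quadratic fitness matches the worked example above; the function itself is our own illustrative reconstruction, and problems with more than seven dimensions would need a larger two-level orthogonal array.

```python
import numpy as np

# Standard L8(2^7) orthogonal array; level 0 takes the value from x1, level 1 from x2.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def sphere(x):
    """Fitness of the worked example: f(x) = sum of x_i^2 (minimisation)."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def taguchi_combine(x1, x2, fitness=sphere):
    """Build one 'optimal' individual from two 7-dimensional candidates.

    Every row of L8 defines one trial vector; for every dimension, the level
    with the smaller cumulative fitness over the eight trials wins.
    """
    candidates = np.vstack([np.asarray(x1, float), np.asarray(x2, float)])  # shape (2, 7)
    trials = np.where(L8 == 0, candidates[0], candidates[1])                # 8 trial vectors
    trial_fitness = np.array([fitness(t) for t in trials])                  # 8 evaluations
    best = np.empty(7)
    for j in range(7):
        sum_level0 = trial_fitness[L8[:, j] == 0].sum()   # cumulative fitness of x1's level
        sum_level1 = trial_fitness[L8[:, j] == 1].sum()   # cumulative fitness of x2's level
        best[j] = candidates[0, j] if sum_level0 < sum_level1 else candidates[1, j]
    return best

# Example usage with two arbitrary 7-dimensional candidates:
# better = taguchi_combine([3, -2, 1, 4, 0, -1, 2], [1, 2, -3, 0, 2, 1, -2])
```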
In addition, this paper employs the population-merger communication strategy as the second primary communication strategy. In a swarm intelligence algorithm with a parallel strategy, multiple subpopulations search independently and communicate at intervals. The parallel strategy increases diversity, but it also reduces the number of individuals in each subpopulation, and some algorithms need more individuals to search effectively; this conflict weakens performance. In our tests, the search performance of the original GSK algorithm dropped significantly when the population was divided into several subpopulations. To solve this problem, the population-merger communication strategy is adopted: each subpopulation searches independently in the early stage of the algorithm and, once a specific condition is met, two adjacent subpopulations are combined into one population that inherits all the information of both. Before the end of the algorithm, all individuals are merged into a single population containing all the information of the original subpopulations. In this paper, the initial GSK population is divided into four groups in the first stage, the four groups are merged into two in the second stage and the two groups are merged into one in the third stage. The merging condition refers to the number of fitness-function evaluations consumed (a sketch of this schedule is given below).
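A rough Python sketch of the merging schedule follows. Since the paper only states that the merging condition refers to the number of fitness-function evaluations, the split points used below are illustrative assumptions.

```python
import numpy as np

def merge_stage(nfes, max_nfes, stage1=1/3, stage2=2/3):
    """Return the number of subpopulations to use at the current point of the run.

    nfes: fitness evaluations consumed so far; max_nfes: total budget.
    stage1/stage2 are assumed split points, not values reported in the paper.
    """
    if nfes < stage1 * max_nfes:
        return 4          # four independent subpopulations
    elif nfes < stage2 * max_nfes:
        return 2          # adjacent pairs merged into two populations
    return 1              # single population containing all individuals

def merge_adjacent(groups):
    """Merge adjacent subpopulations pairwise, keeping all individuals."""
    return [np.vstack([g1, g2]) for g1, g2 in zip(groups[0::2], groups[1::2])]
```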

3.2. Incorporate OBL into GSK

There are two primary steps involved in adding OBL to GSK. The first step uses OBL to optimize the initial population; the second step generates an opposite population during the search to correct the convergence direction.
For the original algorithm (GSK), the initial population is randomly generated within a defined range, so the initial individuals may lie far from the globally optimal position. The OBL strategy enables the generation of a population closer to the optimal position, thereby facilitating more effective optimization. The detailed steps are as follows (a sketch is given after the list):
1. Initialize the population X = {$x_1$, $x_2$, …, $x_n$} randomly within the defined range, where n denotes the number of individuals in the population. Generate the opposite population $X^{op}$ = {$x_1^{op}$, $x_2^{op}$, …, $x_n^{op}$} with the following formula:
$x_{i,j}^{op} = a_j + b_j - x_{i,j}, \quad i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, D$
where $a_j$ represents the upper bound of dimension j and $b_j$ represents the lower bound.
2. Select the individuals with the best fitness from {$x_i$, $x_i^{op}$} to form the new population NP, and take NP as the initial population.
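A minimal Python sketch of this OBL initialisation, assuming NumPy arrays for the per-dimension bounds and a scalar minimisation fitness; the function name is ours.

```python
import numpy as np

def obl_initialize(n, lower, upper, fitness):
    """OBL initialisation sketch: keep the n fittest of the union {X, X_op}.

    lower/upper: 1-D arrays of the per-dimension bounds (b_j and a_j above).
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    D = len(lower)
    X = lower + np.random.rand(n, D) * (upper - lower)   # random initial population
    X_op = lower + upper - X                             # opposite population (formula above)
    union = np.vstack([X, X_op])
    fit = np.array([fitness(x) for x in union])
    return union[np.argsort(fit)[:n]]                    # n best individuals
```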
During population updating, a similar method is used to generate and evaluate the opposite population, which keeps the current population closer to the globally optimal position. The probability of generating an opposite population is controlled by the jump rate r. The detailed steps are as follows (a sketch follows the list):
1. After each update of the population, a random number is generated and compared with the jump rate r. If the random number is less than r, the opposite population of the current population is generated as follows:
$x_{i,j}^{op} = max_j + min_j - x_{i,j}, \quad i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, D$
where $max_j$ represents the maximum value of dimension j in the current population and $min_j$ represents the minimum value.
2. The current and opposite populations are combined, the fitness of each individual is evaluated and the fittest individuals are selected from {$x_i$, $x_i^{op}$}.
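A matching Python sketch of this generation-jumping step controlled by the jump rate r; the default value r = 0.3 used below is an assumption, not a parameter reported in this section.

```python
import numpy as np

def obl_jump(X, fitness, r=0.3):
    """Generation-jumping OBL sketch (minimisation assumed).

    With probability r, the opposite of every individual is built from the
    current population's per-dimension min/max, and the fittest individuals
    of the combined set are kept.
    """
    if np.random.rand() >= r:
        return X
    X_op = X.min(axis=0) + X.max(axis=0) - X             # opposite w.r.t. current bounds
    union = np.vstack([X, X_op])
    fit = np.array([fitness(x) for x in union])
    return union[np.argsort(fit)[:X.shape[0]]]           # keep the original population size
```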
Several approaches are thus integrated with GSK, and Figure 1 illustrates the process. In POGSK, the initial population is optimized with OBL and then divided into four subpopulations, which independently run GSK and communicate in each generation according to the Taguchi method. After each update of the current population, the OBL operation is performed. The pseudocode of POGSK is shown in Algorithm 4 and, to illustrate the process more visually, Figure 2 shows its main steps.
Algorithm 4: POGSK pseudo-code
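The pseudocode figure itself is not reproduced here. As a stand-in, the following Python-style outline restates the POGSK loop described in the text, building on the helper sketches given earlier in this section; `gsk_update` must supply the original GSK junior/senior update and is assumed rather than shown, and evaluation counting is simplified. This is a sketch of the overall flow under our stated assumptions, not a reproduction of the authors' Algorithm 4.

```python
import numpy as np

def pogsk(fitness, lower, upper, gsk_update, n=100, max_nfes=300_000, R=0.5, r=0.3):
    """High-level POGSK outline (sketch only).

    gsk_update(subpop, fitness) is a caller-supplied stand-in for the GSK phases;
    obl_initialize, merge_stage, merge_adjacent, communicate, taguchi_combine and
    obl_jump are the illustrative helpers sketched earlier. The L8-based
    taguchi_combine assumes 7-dimensional solutions; larger problems would need
    a larger orthogonal array.
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    def random_mutation(x):  # assumed mutation operator: bounded Gaussian perturbation
        return np.clip(x + 0.1 * (upper - lower) * np.random.randn(len(x)), lower, upper)

    pop = obl_initialize(n, lower, upper, fitness)      # step 1: OBL-optimised initial population
    groups = list(np.array_split(pop, 4))               # step 2: four subpopulations
    nfes = 2 * n
    while nfes < max_nfes:
        if merge_stage(nfes, max_nfes) < len(groups):                # population-merger strategy
            groups = merge_adjacent(groups)
        groups = [gsk_update(g, fitness) for g in groups]            # independent GSK search
        groups = communicate(groups, R, fitness, taguchi_combine, random_mutation)
        groups = [obl_jump(g, fitness, r) for g in groups]           # generation-jumping OBL
        nfes += sum(len(g) for g in groups)                          # rough evaluation count
    allx = np.vstack(groups)
    return allx[np.argmin([fitness(x) for x in allx])]               # best solution found
```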
Figure 1. The approaches of POGSK.
Figure 2. The flowchart of POGSK.

3.3. Apply the POGSK to Solve the Resource-Scheduling Problem in IoV

In order to reasonably allocate the resources of RSUs in the IoV and enhance QoS, this paper proposes the following mathematical model. Assume that there are multiple vehicles on the road and that a total of n tasks are submitted simultaneously, each with four attributes: (1) the size of the task; (2) the deadline of the task; (3) the type and quantity of resources required; (4) the transfer time of the submitted task. The n tasks are represented as follows [36]:
$T = \{T_i\}, \quad i = 1, 2, \ldots, n$
Suppose that there are m processing nodes in the current scenario, each with two attributes: (1) the type and quantity of resources owned by the processing node and (2) the processing capacity of the node. The m processing nodes are represented as follows [36]:
$P = \{P_j\}, \quad j = 1, 2, \ldots, m$
  • Service delay
    In order to provide users with faster services, service latency should be as short as possible. A processing node can handle multiple tasks simultaneously, with different processing capabilities for each node. Then, the processing time of a task on the processing node is [36]
    $DOP_{ij} = \dfrac{S_i}{H_j}$
    where $S_i$ represents the size of task $T_i$ and $H_j$ represents the processing capability of processing node $P_j$. The time required for a processing node to complete all the tasks assigned to it is
    $PT_j = \sum_{i=1}^{n} DOP_{ij} \cdot C(i,j)$
    where C(i,j) is a binary value indicating whether task $T_i$ is assigned to node $P_j$:
    $C(i,j) = \begin{cases} 1, & T_i \text{ is assigned to } P_j \\ 0, & \text{otherwise} \end{cases}$
    The sum of the processing delays of all nodes is
    $FT = \sum_{j=1}^{m} PT_j$
  • Resource utilization
According to research, the energy consumption of a server in the idle state can exceed 60% of its full-load consumption [42], so a large amount of energy is wasted on idle servers, and we therefore want the roadside units to use their resources as efficiently as possible. A service request in the IoV requires the support of four kinds of computing resources, namely CPU, memory, disk and bandwidth, and all four resources must be considered. The total resource utilization is
    $FU = \dfrac{\sum_{i=1}^{n}\sum_{k=1}^{4} CR(i,k)}{\sum_{j=1}^{m}\sum_{k=1}^{4} \left(PN_j \cdot PR(j,k)\right)}$
    where CR(i,k) represents the amount of resource k required by $T_i$, PR(j,k) represents the amount of resource k owned by $P_j$ and $PN_j$ is a binary value indicating whether $P_j$ is turned on (k = 1, 2, 3, 4 indexes the four resources):
    $PN_j = \begin{cases} 1, & P_j \text{ is running} \\ 0, & \text{otherwise} \end{cases}$
  • Load balancing
A service request in the IoV places different demands on different resources, which easily leads to an unbalanced load across resource types. A good load-balancing technique in cloud computing can enhance the accuracy and efficiency of cloud computing performance [43], so we want each processing unit to be as load-balanced as possible. The utilization of resource k in $P_j$ is
    $u_{jk} = \dfrac{\sum_{i=1}^{n} \left(CR(i,k) \cdot C(i,j)\right)}{PR(j,k)}$
    where C(i,j) is calculated using Equation (16). The mean resource utilization of $P_j$ is
    $MU_j = \dfrac{\sum_{k=1}^{4} u_{jk}}{4}$
    The variance of the resource utilization of $P_j$ is
    $VU_j = \dfrac{\sum_{k=1}^{4} \left(u_{jk} - MU_j\right)^2}{4}$
    The average resource-utilization variance over all processing units is
    $FN = \dfrac{\sum_{j=1}^{m} \left(VU_j \cdot PN_j\right)}{z}$
    where $PN_j$ is calculated by Equation (19) and z is the number of processing units that are turned on.
  • Security
In the IoV, service requests are issued by vehicles moving at high speed, so tasks must be completed on schedule to ensure safety. In real scenarios, both the network latency and the security of a task are important [6,44]. Task deadlines are submitted together with the tasks, and we want as many tasks as possible to be completed on time. The actual time required to complete a task is
    $ps_i = DOP_{ij} + TL(i,j)$
    where $DOP_{ij}$ is calculated with Equation (14) and TL(i,j) represents the transmission buffer time from $T_i$ to $P_j$. Whether the task is completed on time is then expressed as a binary value:
    $S_i = \begin{cases} 1, & cs_i \ge ps_i \\ 0, & cs_i < ps_i \end{cases}$
    where $cs_i$ indicates the deadline of $T_i$, which is uploaded when the task is submitted. The degree of security is then expressed as the successful execution rate of the tasks:
    $FS = \dfrac{\sum_{i=1}^{n} S_i}{n}$
Considering the above four objectives, we propose the following fitness function:
$fitness = a \cdot FT + b \cdot FU + c \cdot FN + d \cdot FS$
For each processing unit $P_j$, the total amount of each resource required by the tasks running on it must not exceed the amount of that resource owned by the unit. The workflow is shown in Figure 3. The constraint condition is:
$\sum_{i=1}^{n} \left(C(i,j) \cdot CR(i,k)\right) < PR(j,k), \quad j = 1, 2, \ldots, m;\ k = 1, 2, 3, 4$
Figure 3. Scheduling model.
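To tie the four objectives together, the following Python sketch evaluates FT, FU, FN and FS and the weighted fitness for a given binary assignment matrix, together with the resource constraint. The array names and the vectorized formulation are our own, and the weights (a, b, c, d) are left as parameters rather than the values used in the paper's experiments.

```python
import numpy as np

def iov_fitness(C, S, H, CR, PR, TL, cs, weights=(1.0, 1.0, 1.0, 1.0)):
    """Evaluate the four scheduling objectives and the weighted fitness (sketch).

    C  : (n, m) binary assignment matrix, C[i, j] = 1 if task i runs on node j
    S  : (n,)   task sizes;        H : (m,) node processing capabilities
    CR : (n, 4) resources required per task; PR : (m, 4) resources owned per node
    TL : (n, m) transmission buffer times;   cs : (n,) task deadlines
    weights: (a, b, c, d); placeholder values, not those used in the paper.
    """
    DOP = S[:, None] / H[None, :]                      # processing time of task i on node j
    PT = (DOP * C).sum(axis=0)                         # completion time per node
    FT = PT.sum()                                      # total service delay

    PN = (C.sum(axis=0) > 0).astype(float)             # node switched on if any task assigned
    FU = CR.sum() / (PN[:, None] * PR).sum()           # overall resource utilisation

    u = (C.T @ CR) / PR                                # utilisation of resource k on node j
    MU = u.mean(axis=1)                                # mean utilisation per node
    VU = ((u - MU[:, None]) ** 2).mean(axis=1)         # variance per node
    z = max(int(PN.sum()), 1)
    FN = (VU * PN).sum() / z                           # average load imbalance

    ps = (DOP * C).sum(axis=1) + (TL * C).sum(axis=1)  # actual completion time of each task
    FS = (cs >= ps).mean()                             # on-time completion rate

    a, b, c, d = weights
    return a * FT + b * FU + c * FN + d * FS

def feasible(C, CR, PR):
    """Resource constraint: per node and resource, demand stays strictly below supply."""
    demand = C.T @ CR                                  # (m, 4) resources requested per node
    return bool(np.all(demand < PR))
```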

4. Results

4.1. Simulation Results on CEC2017

Single-objective optimization algorithms are the basis of more complex optimization algorithms, and testing on classical mathematical functions is considered an effective way to evaluate them. CEC2017 contains 30 benchmark functions for testing the optimization ability of an algorithm; the F2 function was excluded because of a dimension-setting problem. F1 and F3 are unimodal functions, F4–F10 are simple multimodal functions, F11–F20 are hybrid functions and F21–F30 are composition functions. The objective function is the error $error_i = f_i - f_i^*$, where $f_i$ is the value obtained on the ith test function and $f_i^*$ is the minimum value of the ith test function; the optimization goal is to make this error as small as possible. Errors and standard deviations smaller than $10^{-8}$ are treated as zero [35].
In this paper, POGSK is compared with the original algorithm GSK, PSO, DE and GWO. The Taguchi strategy in POGSK incurs additional fitness-function calls in every generation, so, for fairness, the termination condition of each algorithm is the maximum number of function evaluations (NFES), set to 10,000 × problem_size; for example, in a test with 30 variables the budget is 300,000 evaluations. The population size is set to 100 and the range of all test-function solutions is set to [−100, 100]. Each experiment is repeated in 31 independent runs to reduce the influence of chance. The best results are marked in bold for all problems. The basic parameter settings of each algorithm are shown in Table 3.
Table 3. Parameter settings of each algorithm.
Table 4 shows the experimental results of POGSK, GSK and DE over 31 independent runs on the 29 test functions with 10 variables under CEC2017, and Table 5 shows the results of POGSK, GWO and PSO. Compared with the original GSK, POGSK obtains better results on 26 test functions, on five of which it reaches the minimum value of the test function. POGSK obtains better results than DE on 23 functions, better results than GWO on 25 functions and better results than PSO on 25 functions. It is also worth noting that POGSK obtains better results on 9 of functions 21–30, which shows that it has excellent search ability on composition functions.
Table 4. Experimental results of POGSK, GSK and DE over 31 independent runs on 29 test functions of 10 variables under CEC2017.
Table 5. Experimental results of POGSK, PSO and GWO over 31 independent runs on 29 test functions of 10 variables under CEC2017.
Table 6 shows the experimental results of POGSK, GSK and DE over 31 independent runs on the 29 test functions with 30 variables under CEC2017, and Table 7 shows the results of POGSK, GWO and PSO. Compared with the original GSK, POGSK obtains better results on 20 test functions, on five of which it reaches the minimum value of the test function. POGSK obtains better results than DE on 26 functions, better results than GWO on 28 functions and better results than PSO on 25 functions. Furthermore, POGSK obtains better results on 7 of functions 21–30, which again validates its excellent search ability on composition functions.
Table 6. Experimental results of POGSK, GSK and DE over 31 independent runs on 29 test functions of 30 variables under CEC2017.
Table 7. Experimental results of POGSK, GWO and PSO over 31 independent runs on 29 test functions of 30 variables under CEC2017.
To examine the performance of each algorithm under different evaluation budgets on the CEC2017 test problems, we conducted further experiments with the termination condition set to 0.1, 0.3 and 0.5 times the maximum NFES, keeping the basic parameters of each algorithm in Table 3 unchanged. Table 8 shows the results of POGSK, GSK and PSO over 31 independent runs on 10 variables for these budgets, and Table 9 shows the results of POGSK, GWO and DE. For readability, only the mean fitness values over the 31 independent runs are reported in Table 8 and Table 9. Compared with the full-budget (1 × max NFES) setting, POGSK still shows strong optimization performance when NFES is reduced, although its advantage over the PSO algorithm shrinks slightly; in particular, when NFES = 0.1 × max NFES, POGSK outperforms PSO on only 18 functions. We consider this reasonable, because the optimization capability of POGSK is not fully exploited when the evaluation budget is reduced.
Table 8. Experimental results of POGSK, GSK and PSO over 31 independent runs on 10 variables for multiple numbers of objective function evaluation.
Table 9. Experimental results of POGSK, GWO and DE over 31 independent runs on 10 variables for multiple numbers of objective function evaluation.
To better visualize the performance of POGSK, the convergence curves of nine benchmark functions with 10 variables are shown in Figure 4 and the convergence curves of nine benchmark functions with 30 variables are shown in Figure 5. Few curves are shown for functions 1–10 with 10 variables because, in low dimensions, unimodal and simple multimodal functions are too easy to distinguish the search capabilities of the algorithms. In the middle and late stages of the search, POGSK shows a strong ability to escape local optima, which reflects the fact that adding the OBL strategy and the parallel strategy significantly enhances the search capability of the original algorithm. From the above comparisons, POGSK performs better on the CEC2017 test suite than GSK, PSO, DE and GWO.
Figure 4. Convergence curves of 9 functions on 10 variables.
Figure 5. Convergence curves of 9 functions on 30 variables.

4.2. Simulation Results on Resource-Scheduling Problems

In this paper, we use the fitness function proposed in Section 3.3 to test the optimization performance of POGSK in a realistic scenario. We consider an edge processing system consisting of ten processing units. The sizes of the tasks to be processed are randomly distributed in the interval (0, 5] × 10^6 instructions. The maximum number of evaluations is set to 300,000 and the population size to 100. To avoid violating the constraint, a constraint test is carried out when a solution enters the fitness function. The constraint-testing steps are as follows:
1. Each individual is represented as $X_i$ = {$x_{i,1}$, $x_{i,2}$, …, $x_{i,m}$}, and the assignment list {$k_1$, $k_2$, …, $k_m$} is obtained by rounding each dimension of the individual; $k_i = b$ indicates that task i is assigned to node b. All nodes are traversed to find the idle nodes and the nodes whose resource utilization is below 50%, and the idle queue F and the low-utilization queue L are established, respectively. The nodes are then traversed again to find each over-allocated node, and a queue $E_j$ is established for the tasks on that node.
2. The tasks in queue $E_j$ are redistributed to the nodes in queue L, or to F if L is empty. After each redistribution, the current node is checked to see whether it is still over-allocated; if not, the current operation stops and the procedure moves on until all nodes have been traversed. Based on the result, the individual is adjusted accordingly. For example, if $x_{i,3} = 2.1$ then $k_3 = 2$; if the constraint test adjusts $k_3$ to 6, then $x_{i,3}$ is updated randomly in the range [5.6, 6.4]. A sketch of this repair procedure is given below.
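As a concrete illustration of the two repair steps above, here is a hedged Python sketch. The 50% utilization threshold and the ±0.4 re-randomisation interval follow the description and the worked example, while the 0-based node numbering, the traversal order and the way the target queues are approximated are our own simplifications.

```python
import numpy as np

def repair_assignment(x, CR, PR, low_util=0.5):
    """Constraint-repair sketch for one individual x (one dimension per task).

    Each dimension is rounded to a node index; tasks on over-allocated nodes are
    moved to nodes whose post-move utilisation stays below `low_util` (or, failing
    that, to any node that can still host the task), and the corresponding
    dimension of x is re-randomised inside the new node's interval.
    """
    x = np.asarray(x, dtype=float).copy()
    n, m = CR.shape[0], PR.shape[0]
    k = np.clip(np.rint(x).astype(int), 0, m - 1)            # assignment list

    def demand(j):                                           # resources requested on node j
        return CR[k == j].sum(axis=0)

    def overloaded(j):
        return bool(np.any(demand(j) >= PR[j]))

    for j in range(m):                                       # traverse all nodes
        if not overloaded(j):
            continue
        for i in np.where(k == j)[0]:                        # tasks queued on node j
            # preferred targets: nodes staying below the utilisation threshold after the move
            targets = [t for t in range(m) if t != j and
                       np.all((demand(t) + CR[i]) / PR[t] <= low_util)]
            if not targets:                                  # fallback: any node with spare capacity
                targets = [t for t in range(m) if t != j and
                           np.all(demand(t) + CR[i] < PR[t])]
            if targets:
                b = targets[0]
                k[i] = b
                x[i] = np.random.uniform(b - 0.4, b + 0.4)   # e.g. k = 6 -> x in [5.6, 6.4]
            if not overloaded(j):
                break                                        # node j no longer over-allocated
    return x, k
```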
In this paper, we randomly generated 11 independent scenarios, in each of which 30 tasks are submitted to the processing units simultaneously. The best results are marked in bold for all scenarios. Table 10 lists the remaining experimental parameters, which control the scene settings, and each experiment was run independently 20 times. Table 11 compares the performance of POGSK with GSK, GWO and PSO. Out of the 11 scenarios, POGSK wins nine times: it achieves better results than GSK in 9 scenarios, better results than PSO in 9 scenarios and better results than GWO in all 11 scenarios. This is due to the parallel strategy and the OBL strategy: the Taguchi communication strategy allows the original GSK to effectively avoid falling into local optima, the population-merger communication strategy prevents the performance loss caused by the reduced number of individuals in each subpopulation and the OBL strategy corrects the convergence direction and increases the convergence speed.
Table 10. Experimental parameter settings.
Table 11. A total of 30 tasks were assigned to 10 processing unit experiments.
Figure 6 shows how the fitness value changes with the number of fitness-function calls. POGSK again performs well in avoiding local optima, showing that it is also effective on constrained, realistic optimization problems.
Figure 6. Convergence curves of four situations.

5. Conclusions

In this paper, the POGSK algorithm is proposed to solve the resource-scheduling problem in the IoV. Based on the original algorithm, POGSK adds OBL and a parallel strategy, and the information exchange between subpopulations uses the Taguchi strategy and the population-merger strategy. Tests against the original algorithm and several classical algorithms on CEC2017 show that the new algorithm has stronger search ability. We then applied POGSK to the resource-scheduling problem and carried out simulation tests, which also showed better results.
In the future, we plan to further improve the inter-group communication strategy and enhance the search capability of the algorithm. We will also study the application of POGSK to multi-objective, engineering and binary optimization problems, where we believe the algorithm can also achieve good results.

Author Contributions

Conceptualization, J.-S.P.; methodology, J.-S.P. and L.-F.L.; software, J.-S.P. and L.-F.L.; validation, L.-F.L. and S.-C.C.; investigation, L.-F.L.; resources, J.-S.P. and P.-C.S.; data curation, G.-G.L. and P.-C.S.; writing—original draft preparation, L.-F.L.; writing—review and editing, J.-S.P. and P.-C.S.; supervision, G.-G.L.; project administration, S.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, H.; Wang, W.; Cui, Z.; Zhou, X.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106. [Google Scholar] [CrossRef]
  2. Song, B.; Wang, Z.; Zou, L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960. [Google Scholar] [CrossRef]
  3. Chai, Q.w.; Chu, S.C.; Pan, J.S.; Hu, P.; Zheng, W.M. A parallel WOA with two communication strategies applied in DV-Hop localization method. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 50. [Google Scholar] [CrossRef]
  4. Wu, J.; Xu, M.; Liu, F.F.; Huang, M.; Ma, L.; Lu, Z.M. Solar Wireless Sensor Network Routing Algorithm Based on Multi-Objective Particle Swarm Optimization. J. Inf. Hiding Multim. Signal Process. 2021, 12, 1–11. [Google Scholar]
  5. Deng, W.; Xu, J.; Song, Y.; Zhao, H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl. Soft Comput. 2021, 100, 106724. [Google Scholar] [CrossRef]
  6. Farid, M.; Latip, R.; Hussin, M.; Hamid, N.A.W.A. Scheduling scientific workflow using multi-objective algorithm with fuzzy resource utilization in multi-cloud environment. IEEE Access 2020, 8, 24309–24322. [Google Scholar] [CrossRef]
  7. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining–sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  8. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95, Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  9. Song, P.C.; Chu, S.C.; Pan, J.S.; Yang, H. Simplified Phasmatodea population evolution algorithm for optimization. Complex Intell. Syst. 2022, 8, 2749–2767. [Google Scholar] [CrossRef]
  10. Pan, J.S.; Zhang, L.G.; Wang, R.B.; Snášel, V.; Chu, S.C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  12. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat swarm optimization. In Proceedings of the PRICAI 2006: Trends in Artificial Intelligence: 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; Proceedings 9. Springer: Berlin/Heidelberg, Germany; 2006, pp. 854–858. [Google Scholar]
  13. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  14. Storn, R.; Price, K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341. [Google Scholar] [CrossRef]
  15. Han, K.H.; Kim, J.H. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593. [Google Scholar] [CrossRef]
  16. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  17. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  18. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  19. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  20. Mohamed, A.W.; Abutarboush, H.F.; Hadi, A.A.; Mohamed, A.K. Gaining–sharing knowledge based algorithm with adaptive parameters for engineering optimization. IEEE Access 2021, 9, 65934–65946. [Google Scholar] [CrossRef]
  21. Chu, S.C.; Roddick, J.F.; Pan, J.S. A parallel particle swarm optimization algorithm with communication strategies. J. Inf. Sci. Eng 2005, 21, 809–818. [Google Scholar]
  22. Harada, T.; Alba, E. Parallel genetic algorithms: A useful survey. ACM Comput. Surv. CSUR 2020, 53, 1–39. [Google Scholar] [CrossRef]
  23. Cai, D.; Lei, X. A New Evolutionary Algorithm Based on Uniform and Contraction for Many-objective Optimization. J. Netw. Intell. 2017, 2, 171–185. [Google Scholar]
  24. Tsai, P.W.; Pan, J.S.; Chen, S.M.; Liao, B.Y. Enhanced parallel cat swarm optimization based on the Taguchi method. Expert Syst. Appl. 2012, 39, 6309–6319. [Google Scholar] [CrossRef]
  25. Tsai, J.T.; Liu, T.K.; Chou, J.H. Hybrid Taguchi-genetic algorithm for global numerical optimization. IEEE Trans. Evol. Comput. 2004, 8, 365–377. [Google Scholar] [CrossRef]
  26. Jiang, S.J.; Chu, S.C.; Zou, F.M.; Shan, J.; Zheng, S.G.; Pan, J.S. A parallel Archimedes optimization algorithm based on Taguchi method for application in the control of variable pitch wind turbine. Math. Comput. Simul. 2023, 203, 306–327. [Google Scholar] [CrossRef]
  27. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [Google Scholar] [CrossRef]
  28. Yu, X.; Xu, W.; Li, C. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139. [Google Scholar] [CrossRef]
  29. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  30. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79. [Google Scholar] [CrossRef]
  31. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  32. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714. [Google Scholar] [CrossRef]
  33. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  34. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the performance of adaptive gainingsharing knowledge based algorithm on CEC 2020 benchmark problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  35. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for The CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  36. Cao, B.; Zhang, J.; Liu, X.; Sun, Z.; Cao, W.; Nowak, R.M.; Lv, Z. Edge–Cloud Resource Scheduling in Space–Air–Ground-Integrated Networks for Internet of Vehicles. IEEE Internet Things J. 2022, 9, 5765–5772. [Google Scholar] [CrossRef]
  37. Ðurasević, M.; Jakobović, D. Heuristic and metaheuristic methods for the parallel unrelated machines scheduling problem: A survey. Artif. Intell. Rev. 2022, 56, 3181–3289. [Google Scholar] [CrossRef]
  38. Singh, S.; Chana, I. QoS-aware autonomic resource management in cloud computing: A systematic review. ACM Comput. Surv. CSUR 2015, 48, 1–46. [Google Scholar] [CrossRef]
  39. Yao, J. Research on Optimization Algorithm for Resource Allocation of Heterogeneous Car Networking Engineering Cloud System Based on Big Data. Math. Probl. Eng. 2022, 2022, 1079750. [Google Scholar] [CrossRef]
  40. Wang, Q.; Guo, S.; Liu, J.; Yang, Y. Energy-efficient computation offloading and resource allocation for delay-sensitive mobile edge computing. Sustain. Comput. Inform. Syst. 2019, 21, 154–164. [Google Scholar] [CrossRef]
  41. Filip, I.D.; Pop, F.; Serbanescu, C.; Choi, C. Microservices scheduling model over heterogeneous cloud-edge environments as support for IoT applications. IEEE Internet Things J. 2018, 5, 2672–2681. [Google Scholar] [CrossRef]
  42. Guo, M.; Li, L.; Guan, Q. Energy-efficient and delay-guaranteed workload allocation in IoT-edge-cloud computing systems. IEEE Access 2019, 7, 78685–78697. [Google Scholar] [CrossRef]
  43. Ullah, A.; Nawi, N.M.; Ouhame, S. Recent advancement in VM task allocation system for cloud computing: Review from 2015 to 2021. Artif. Intell. Rev. 2022, 55, 2529–2573. [Google Scholar] [CrossRef]
  44. Cao, B.; Sun, Z.; Zhang, J.; Gu, Y. Resource allocation in 5G IoV architecture based on SDN and fog-cloud computing. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3832–3840. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
