Article

Creating Effective Self-Adaptive Differential Evolution Algorithms to Solve the Discount-Guaranteed Ridesharing Problem Based on a Saying

Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Appl. Sci. 2025, 15(6), 3144; https://doi.org/10.3390/app15063144
Submission received: 21 January 2025 / Revised: 9 March 2025 / Accepted: 10 March 2025 / Published: 13 March 2025
(This article belongs to the Special Issue Smart City and Informatization, 2nd Edition)

Abstract

Sustainable transport is an important trend in smart cities for achieving sustainable development goals. It refers to the use of transport modes with low emissions, low energy consumption and little negative impact on the environment. Ridesharing is one important sustainable transport mode for attaining the goal of net-zero greenhouse gas emissions. The discount-guaranteed ridesharing problem (DGRP) aims to incentivize drivers and riders and promote ridesharing through the guarantee of a discount. However, the computational complexity of the DGRP poses a challenge in the development of effective solvers. In this study, we examine the effectiveness of creating new self-adaptive differential evolution (DE) algorithms based on an old saying to solve the DGRP. Many old sayings still have far-reaching implications today; some influence the organization of management teams for companies and decisions to improve performance and efficiency. Whether an old saying that works effectively for human beings in solving problems can also work for developers in creating effective optimization problem solvers in the realm of artificial intelligence is an interesting research question. In our previous study, one self-adaptive algorithm was proposed to solve the DGRP. This study demonstrates how to create a series of self-adaptive algorithms based on the old saying "Two heads are better than one" and validates the effectiveness of this approach through experiments and comparison with the previously proposed algorithms. The new finding of this study is that the old saying not only works effectively for human beings in solving problems but also works effectively in the creation of new scalable and robust self-adaptive algorithms to solve the DGRP. In other words, the old saying provides a simple and systematic approach to the development of effective optimization problem solvers in artificial intelligence.

1. Introduction

As a transport mode that reduces the number of vehicles in use, energy consumption and greenhouse gas emissions in cities, shared mobility has drawn the attention of researchers and practitioners around the world. Many transport service providers such as Uber [1] and Lyft [2] have implemented shared mobility systems. Shared mobility has also motivated a large body of studies in the ridesharing literature [3,4,5]. Over the past years, various ridesharing problems with different models have been studied. Recently, ridesharing with multiple platforms was considered in [6] to match supply and demand across multiple platforms and improve the overall revenue of platforms through coordination. The study [7] aims to determine efficient routes and schedules for drivers and passengers with flexible pick-up and delivery locations. Motivated by the need to reduce CO2 emissions, a graph-theoretical algorithm-based approach was proposed in [8] to achieve higher levels of reduction in CO2 emissions. Reviews of the challenges and opportunities of shared mobility are available in [9,10,11].
Financial benefits provide one strong incentive for the prevailing ridesharing model. Cost savings due to ridesharing make it possible for drivers and riders to reduce their travel costs and enjoy the monetary incentives in shared mobility systems. In the literature, different mechanisms have been developed to incentivize ridesharing drivers and riders, including cost sharing [12], discounting [13] and cost allocation [14,15]. In [12], a general cost sharing framework for creating cost sharing mechanisms with desirable properties and compensation for the inconvenience costs due to detours was proposed and validated on a real case study. In [13], a non-linear mixed integer optimization problem to maximize profit and offer shared trips to riders with varying price discounts was proposed. The problem was decomposed into two sequential sub-problems to achieve computational efficiency. Experimental results confirm that the dynamic discount strategy produces substantial economic benefits.
As cost savings are a benefit of shared mobility that provides a financial incentive to riders, many studies focus on maximizing cost savings in shared mobility systems. However, if the amount of cost savings is insufficient to attract riders, riders will not accept the ridesharing mode [14]; proper allocation of cost savings can help improve the acceptability of ridesharing [15]. For this reason, the study [16] formulated the discount-guaranteed ridesharing problem (DGRP) to ensure a minimal discount for drivers and riders in ridesharing systems. Due to the complexity of the DGRP, metaheuristic algorithms were developed in [16] to find solutions.
Metaheuristic algorithms are algorithms that imitate natural phenomena or human intelligence and are able to tackle the complexity issue in solving optimization problems. Natural phenomena or human intelligence can be applied to develop effective strategies that guide the search process efficiently. In the metaheuristics literature, many algorithms have been proposed by drawing inspiration from natural phenomena, biological sciences, laws of physics, rules of games and human activities. In the study [17], metaheuristic algorithms are classified into five categories: physics-based, swarm-based, evolutionary-based, game-based and human-based methods. In the remainder of this section, we first give a brief review of these categories. Then we focus on the issue of creating metaheuristic algorithms based on the saying "Two heads are better than one".

1.1. Literature Review of Categories of Metaheuristic Algorithms

Simulated annealing and the gravitational search algorithm (GSA) are two examples of physics-based metaheuristic algorithms. Simulated annealing is based on the physical process of annealing, in which materials are heated and then cooled slowly to decrease defects and minimize the system energy [18]. The GSA [19] is inspired by Newton's laws of gravity and motion. In the GSA, agents are modeled as objects whose performance is measured by their masses. Objects attract each other due to the gravity force, which makes all objects move toward heavier objects; an object with heavier mass corresponds to a better solution.
Swarm-based metaheuristic algorithms are developed based on the concept of swarm intelligence, which refers to the collective learning and decision-making behavior of self-organized entities in natural or artificial systems. Well-known examples include particle swarm optimization (PSO) [20], the firefly algorithm (FA) [21], artificial bee colony (ABC) [22] and ant colony optimization (ACO) [23]. ACO is inspired by the foraging behavior of ant colonies: ants mark favorable paths to food by depositing pheromones on the ground, and these paths are followed by other ants of the colony, which facilitates selection of the shortest route. PSO is inspired by the social behavior and movement of organisms in a bird flock. The standard PSO algorithm is initiated by creating a population of solutions called particles. Each particle adjusts its position and velocity according to its personal best solution and the global best solution. If a particle finds an improved solution that outperforms its current personal best solution, the improved solution replaces the personal best solution; similarly, if the improved solution outperforms the current global best solution, it replaces the global best solution. The PSO algorithm repeats this process until the termination condition is satisfied. The flashing characteristics of fireflies inspired the creation of the FA: one firefly attracts another according to an attractiveness proportional to its brightness, which decreases as their distance increases, where the brightness of a firefly depends on the objective function. A firefly moves randomly in case it cannot find a brighter one.
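Since PSO also serves as a comparison baseline later in this paper, a minimal runnable sketch of the standard PSO update loop described above may be helpful. The inertia and acceleration coefficients (w, c1, c2) and the sphere test function are common textbook choices, not values taken from this paper.

import random

def pso(f, N=10, NP=30, G=500, w=0.7, c1=1.5, c2=1.5):
    # Initialize particle positions and velocities randomly.
    x = [[random.uniform(-5, 5) for _ in range(N)] for _ in range(NP)]
    v = [[0.0] * N for _ in range(NP)]
    pbest = [p[:] for p in x]               # personal best positions
    gbest = min(pbest, key=f)               # global best position
    for _ in range(G):
        for i in range(NP):
            for n in range(N):
                # Velocity update: inertia plus pulls toward the personal
                # best and the global best.
                v[i][n] = (w * v[i][n]
                           + c1 * random.random() * (pbest[i][n] - x[i][n])
                           + c2 * random.random() * (gbest[n] - x[i][n]))
                x[i][n] += v[i][n]
            if f(x[i]) < f(pbest[i]):       # improved personal best
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):      # improved global best
                    gbest = x[i][:]
    return gbest

print(pso(lambda p: sum(c * c for c in p)))  # approaches the zero vector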
Evolutionary-based metaheuristic algorithms mimic the phenomena of biological evolution, following the natural selection theory introduced by Darwin. They borrow the reproduction, mutation, recombination and selection mechanisms from biological evolution to evolve potential solutions. The genetic algorithm (GA) [24] and differential evolution (DE) [25] are two widely used evolutionary algorithms. The GA starts with a population of candidate solutions (called individuals). Evolution is an iterative process, and each iteration is called a generation. The fitness of every individual is evaluated based on the fitness function of the optimization problem. The population in a generation evolves into a new generation through a combination of genetic operations, including crossover and mutation, in an attempt to create better solutions. The GA stochastically selects the most fit individuals from the current population to play the role of parents. An individual in the next generation may be created by applying crossover operations to combine two parents or by applying mutation rules to individual parents. The GA terminates either after a maximum number of generations has been executed or when the solution quality has reached a satisfactory fitness level. The DE algorithm includes the operations of initialization, mutation, crossover and selection to iteratively improve solution quality. In the initialization step, a population of individuals is generated. In the mutation step, a mutation strategy is applied to generate a mutant vector for each individual in the population. The crossover step generates a trial vector from the mutant vector and the original individual. In the selection step, the trial vector replaces the original individual if it is better.
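Because DE is central to this paper, the following minimal sketch of the canonical DE/rand/1/bin variant illustrates the four operations just described on a simple sphere function. The parameter values F = 0.5 and CR = 0.9 are common defaults, not this paper's settings.

import random

def de(f, N=10, NP=40, F=0.5, CR=0.9, G=500):
    # Initialization: a random population of NP individuals.
    pop = [[random.uniform(-5, 5) for _ in range(N)] for _ in range(NP)]
    for _ in range(G):
        for i in range(NP):
            # Mutation: combine three distinct individuals into a mutant vector.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[n] + F * (b[n] - c[n]) for n in range(N)]
            # Crossover: mix mutant and target components into a trial vector;
            # index jr guarantees at least one mutant component survives.
            jr = random.randrange(N)
            trial = [mutant[n] if random.random() < CR or n == jr else pop[i][n]
                     for n in range(N)]
            # Selection: keep the trial vector only if it is at least as good.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

print(de(lambda x: sum(c * c for c in x)))  # approaches the zero vector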
Game-based metaheuristic algorithms simulate the behavior of different entities, such as players, coaches and other individuals in games, where the behavior of the entities is governed by the underlying rules of the games. Football game-based optimization (FGBO) [26] and the volleyball premier league (VPL) algorithm [27] are two well-known examples. In FGBO, a population of clubs is generated initially, where each club consists of a number of players. The algorithm calculates the power of each club and determines the best club. It then iteratively follows a procedure with four phases: (a) league holding, (b) player transfer, (c) practice and (d) promotion and relegation, to update the status of the clubs and improve the solution quality until the termination condition is satisfied. The VPL algorithm is inspired by the coaching process during a volleyball match. In a volleyball game, a team consists of players, substitutes and the coach. The formation of a team includes several active players in the active part and standby players in the passive part, and the coach decides whether to replace an active player with a standby player. The algorithm determines the winning team and the losing team based on a power index and applies different strategies to update the best team. The algorithm removes the k worst teams, adds new teams to the league, applies the transfer process and updates the best team iteratively until the termination condition is satisfied.
Human-based metaheuristic algorithms are developed based on inspiration from human activities. Examples include teaching–learning-based optimization (TLBO) [28], poor and rich optimization (PRO) [29], human mental search (HMS) [30] and driving training-based optimization (DTBO) [17]. TLBO is a population-based algorithm consisting of two phases: the Teacher Phase and the Learner Phase. The algorithm learns from the teacher in the Teacher Phase, whereas it learns through the interaction between learners in the Learner Phase, to improve the quality of solutions. PRO is a multi-population-based algorithm inspired by the mechanisms used by the poor and the rich to improve their economic situations. Initially, there are two populations: a rich population and a poor population. Each individual in the poor population tries to improve his/her status and reduce the gap between the two populations by learning from an individual in the rich population. Each individual in the rich population tries to increase the gap by observing the poor and acquiring wealth. The rich population and poor population are updated after all the individuals have taken actions to try to improve their status. HMS is inspired by strategies for exploring the bid space in online auctions. It simulates human behavior in online auctions and consists of three steps: (1) a mental search that explores around each solution, (2) grouping the bids to determine a promising region and (3) moving the bids toward the best strategy. DTBO is inspired by the learning process in a driving school and is divided into three phases: (1) training by the driving instructor, (2) patterning of students from instructor skills and (3) practice. The best individuals selected from the population play the role of driving instructors and the rest play the role of learner drivers. In the first phase, a learner driver chooses a driving instructor and learns from that instructor. In the second phase, each learner driver imitates the instructor to attempt to improve his/her driving skills. In the third phase, a learner driver tries to achieve his/her best skills through practice.

1.2. Creating Metaheuristic Algorithms Based on a Saying

Besides human activities, sayings or proverbs created by human beings still have far-reaching implications today. An example is the saying "Two heads are better than one". Although it is believed that the origin of this saying could be traced back to 1390, it was first recorded in John Heywood's Book of Proverbs in 1546. This saying can be applied to create new ideas and solve problems. An example is "brainstorming", a group problem-solving method in which each member of the group suggests as many ideas as possible based on his/her expertise; it can be regarded as one way to apply this saying to create new ideas or solution methods. Extended results of this saying can be found in the study [31], which demonstrates that a multi-agent system has the potential to improve the generation of scientific ideas. This saying also influences the organization of management teams for companies, as well as the way to make decisions effectively to improve performance and efficiency. An example is multi-agent reinforcement learning [32], where complex tasks that are difficult for a single agent to handle can be completed cooperatively by a set of agents.
The saying "Two heads are better than one" can be interpreted as "two people working together are better than one working alone" [33]. In [33], it is shown that pair programming improves software quality. This saying can also describe the situation in which performing tasks using two systems is better than using only one [34]. It may also be applied to confirm the correctness of a conclusion by assigning two teams to perform experiments: in [35], two studies with different data collection processes, databases and data analysis methods were performed to confirm that the detected trends are in general agreement. Finally, this saying may be interpreted to mean that solving a complex problem in two stages, by dividing it into multiple sub-problems, may be better than solving it in a single stage. For example, a divide-and-conquer strategy was used in [36] to divide the original complex problem into multiple sub-problems to reduce complexity.
Whether the above old saying, which works effectively for human beings in solving problems, can also work for developers in creating effective optimization problem solvers in the realm of artificial intelligence is an interesting research question. One way to apply this saying to create an effective optimization problem solver for the ridesharing problem is to combine two existing metaheuristic approaches. In the study [37], the author showed how to apply the above saying to create a more efficient metaheuristic algorithm for solving the ridesharing problem based on the hybridization of FA with PSO. However, no self-adaptation mechanism is applied in the approach of [37]. In the study [38], the author demonstrated how to apply the above saying to create new self-adaptive metaheuristic algorithms for solving the ridesharing problem with trust requirements, based on two mutation strategies arbitrarily selected from a set of existing standard DE algorithms. The study [38] first describes how this saying can be applied by humans to solve a problem. The process consists of two phases: a performance assessment phase and an optimization phase. The performance assessment phase assesses the performance of the two heads in solving the problem, and the optimization phase solves the optimization problem based on the performance of the two heads. By mimicking this process, the developed self-adaptive metaheuristic algorithms also consist of these two phases. The study [38] validates the effectiveness of applying this saying to develop two-head self-adaptive metaheuristic algorithms: the results indicate that each two-head self-adaptive metaheuristic algorithm outperforms the two corresponding "single-head" metaheuristic algorithms. This confirms that the saying provides a simple yet effective approach to the development of effective self-adaptive algorithms for ridesharing systems with trust requirements. However, the effectiveness of the self-adaptive metaheuristic algorithms developed based on this saying might depend on the problem. Although this approach has been validated for the ridesharing problem with trust requirements, whether it can work effectively for other problems is an interesting research question.
In this study, we loosely define a two-head self-adaptive metaheuristic algorithm with two mutation strategies, developed by applying "Two heads are better than one", as effective if it outperforms the two corresponding single-head metaheuristic algorithms. Based on this definition, we validate whether the approach based on this saying is effective for another problem, the DGRP. To achieve this goal, several two-head self-adaptive metaheuristic algorithms with two mutation strategies have been developed based on this saying. Each such algorithm takes advantage of its two mutation strategies according to their success rates in improving the solutions: a mutation strategy (head) with a higher success rate in improving the solutions has a higher probability of being selected to solve the given problem, whereas a mutation strategy (head) with a lower success rate has a lower probability of being selected. The two-head self-adaptive metaheuristic algorithm works by applying the two mutation strategies according to their historical success rates to improve the solutions, and it iterates in the optimization phase until the termination criteria are satisfied. An important issue is the design of the fitness function for the two-head self-adaptive metaheuristic algorithms. We adopt the approach of [39], which takes the objective function and the constraints into account in the fitness function, to move effectively toward the feasible region by biasing feasible solutions against infeasible ones and penalizing the violation of constraints.
To fully validate the effectiveness of the two-head self-adaptive metaheuristic algorithms developed for the DGRP, we considered a set of four mutation strategies and arbitrarily selected any combination of two mutation strategies from the set to develop the two-head self-adaptive metaheuristic algorithms. This results in six two-head self-adaptive metaheuristic algorithms. To validate the effectiveness of each of the six two-head self-adaptive metaheuristic algorithms, we conducted experiments to compare the performance of each two-head self-adaptive metaheuristic algorithm with the corresponding two single-head metaheuristic algorithms. We analyzed the results to illustrate the effectiveness of applying the above saying to develop two-head self-adaptive metaheuristic algorithms.
The contributions of this study include: (1) the development of two-head self-adaptive metaheuristic algorithms based on a saying, (2) the design and execution of experiments to validate whether each two-head self-adaptive metaheuristic algorithm outperforms the two corresponding single-head metaheuristic algorithms and (3) the analysis of the experimental results to uncover the advantages of two-head self-adaptive metaheuristic algorithms. The innovation of this paper is a simple yet effective approach to creating self-adaptive algorithms based on the saying "Two heads are better than one". The approach first arbitrarily selects two strategies and calculates their success rates in improving performance to determine the strategy to be applied in the next iteration. The success rates are regularly updated to reflect the performance of the two selected strategies. Preliminary results indicate that two-head DE algorithms created based on the proposed approach outperform the two corresponding single-head DE algorithms and the PSO algorithm.
This paper focuses on the development of solution algorithms for the class of the DGRP. The application of ridesharing algorithms to implement ridesharing decision support systems has been studied in detail in our earlier paper [5], where the technical details such as the procedures to generate bids for drivers and passengers are presented. Although the DGRP addressed in this study is different from the one in [5], the way to implement a discount-guaranteed ridesharing system is similar to the one adopted in [5]. Please refer to [5] for further information regarding implementation of a ridesharing system.
The rest of this paper is structured as follows. A brief review of the DGRP is given in Section 2. The design of the fitness function is described in Section 3. The development of self-adaptive algorithms based on "Two heads are better than one" is presented in Section 4. The experiments used to validate the effectiveness of the developed self-adaptive algorithms are described in detail, and the results are analyzed and reported, in Section 5. In Section 6, the results of the experiments are discussed. The conclusions are drawn in Section 7.

2. A Review of the Decision Model for the DGRP

As this study aims to validate the effectiveness of using a saying to develop solvers for the DGRP, a brief review of the decision model for the problem is given in this section. Table 1 summarizes the notations used in this paper.
The DGRP follows the double auction paradigm in which drivers and passengers submit bids based on their transport requirements subject to several constraints. The problem is to determine the winning bids of drivers and passengers to optimize the overall cost savings while meeting the transport requirements of drivers and passengers and satisfying the constraints.
In a DGRP, passengers play the role of buyers and drivers play the role of sellers. Consider a DGRP with $P$ passengers and $D$ drivers. Let $p \in \{1, 2, 3, \ldots, P\}$ be the index of a passenger and let $d \in \{1, 2, 3, \ldots, D\}$ be the index of a driver. The bid of passenger $p$ is denoted by $PB_p = (s_{p1}^1, s_{p2}^1, s_{p3}^1, \ldots, s_{pP}^1, s_{p1}^2, s_{p2}^2, s_{p3}^2, \ldots, s_{pP}^2, f_p)$, where $s_{pk}^1$ is the number of seats requested by passenger $p$ at pick-up location $k$, $s_{pk}^2$ is the number of seats released at drop-off location $k$ of passenger $p$ and $f_p$ is passenger $p$'s original cost when he/she travels alone. A driver may submit multiple bids. The $j$-th bid of driver $d \in \{1, 2, \ldots, D\}$ is represented by $DB_{dj} = (q_{dj1}^1, q_{dj2}^1, q_{dj3}^1, \ldots, q_{djP}^1, q_{dj1}^2, q_{dj2}^2, q_{dj3}^2, \ldots, q_{djP}^2, o_{dj}, c_{dj})$, where $q_{djk}^1$ is the number of seats allocated to pick-up location $k$, $q_{djk}^2$ is the number of seats released at drop-off location $k$, $o_{dj}$ is the original cost of the driver when he/she travels alone and $c_{dj}$ is the travel cost of the bid.
To formulate the DGRP as an optimization problem, decision variables and an objective function must be defined. In our formulation, the drivers' decision variables are denoted by $x_{dj}$, where $d \in \{1, 2, \ldots, D\}$ and $j \in \{1, 2, \ldots, J_d\}$; $x_{dj}$ is binary, and $DB_{dj}$ is a winning bid if and only if $x_{dj}$ is equal to one. The passengers' decision variables are denoted by $y_p$, where $p \in \{1, 2, 3, \ldots, P\}$; $y_p$ is binary, and $PB_p$ is a winning bid if and only if $y_p$ is equal to one. The objective function is a function of $x_{dj}$ and $y_p$, where $d \in \{1, 2, \ldots, D\}$, $j \in \{1, 2, \ldots, J_d\}$ and $p \in \{1, 2, 3, \ldots, P\}$, used to characterize the goal to be achieved. Cost savings have been widely used as a performance index for ridesharing systems. For the DGRP, the cost savings objective function is described by $F(x, y) = \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$, where the first term $\sum_{p=1}^{P} y_p f_p$ represents the original travel cost of all winning passengers, the second term $\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj}$ represents the original travel cost of all winning drivers, and $\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$ represents the total travel cost of all winning passengers and winning drivers with ridesharing. The problem formulation aims to maximize the cost savings objective function $F(x, y)$ to realize the benefits of ridesharing.
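To make the cost savings objective concrete, the following sketch computes $F(x, y)$ directly from its three terms. The toy bid values in the usage line are illustrative and not drawn from the paper's test cases.

def cost_savings(x, y, f, o, c):
    """x[d][j]: winning driver bids; y[p]: winning passenger bids;
    f[p]: passenger p's solo travel cost; o[d][j] and c[d][j]: the solo
    and ridesharing travel costs of driver d's j-th bid."""
    solo_passengers = sum(y[p] * f[p] for p in range(len(y)))
    solo_drivers = sum(x[d][j] * o[d][j]
                       for d in range(len(x)) for j in range(len(x[d])))
    shared = sum(x[d][j] * c[d][j]
                 for d in range(len(x)) for j in range(len(x[d])))
    return solo_passengers + solo_drivers - shared

# One winning driver bid carrying one passenger: savings = 10 + 20 - 24 = 6.
print(cost_savings([[1]], [1], f=[10.0], o=[[20.0]], c=[[24.0]]))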
In the DGRP, the demand and supply of seats must be balanced and this is described by Constraints (2) and (3). For the ridesharing participants to benefit from ridesharing, the cost savings must be non-negative and this is described by Constraint (4). Constraint (5) states that only one bid can be a winning bid for each driver even though a driver may submit multiple bids.
In the DGRP, the discounts for drivers and passengers must be satisfied to provide an incentive for drivers and passengers to accept ridesharing. This is described by Constraints (6) and (7). The decision variables of all drivers, $x_{dj} \in \{0, 1\}\ \forall d \in \{1, \ldots, D\},\ j \in \{1, \ldots, J_d\}$, and the decision variables of all passengers, $y_p \in \{0, 1\}\ \forall p \in \{1, \ldots, P\}$, are binary. This is described by (8). The complete model is as follows:
$\max_{x, y} F(x, y)$ (1)
$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} = y_p s_{pk}^{1} \quad \forall p \in \{1, 2, \ldots, P\},\ k \in \{1, 2, \ldots, P\}$ (2)
$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} = y_p s_{pk}^{2} \quad \forall p \in \{1, 2, \ldots, P\},\ k \in \{1, 2, \ldots, P\}$ (3)
$\sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj} \ge 0$ (4)
$\sum_{j=1}^{J_d} x_{dj} \le 1 \quad \forall d \in \{1, \ldots, D\}$ (5)
$x_{dj} \left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_D \right) \ge 0 \quad \forall d \in \{1, \ldots, D\},\ j \in \{1, \ldots, J_d\}$ (6)
$y_p \left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_P \right) \ge 0 \quad \forall p \in \{1, \ldots, P\}$ (7)
$x_{dj} \in \{0, 1\}\ \forall d \in \{1, \ldots, D\},\ j \in \{1, \ldots, J_d\}$ and $y_p \in \{0, 1\}\ \forall p \in \{1, 2, \ldots, P\}$ (8)

3. Design of the Fitness Function

The metaheuristic algorithms to be developed based on "Two heads are better than one" follow the paradigm of evolutionary algorithms. Therefore, a fitness function that serves as an indicator of solution quality in the search process is required. The fitness function must take into account the objective function as well as the constraints to characterize the quality of a solution quantitatively. In the literature, many approaches based on penalty methods have been proposed to improve solution quality and reduce the violation of constraints. The fitness function used in this study is biased against infeasible solutions [39].
The fitness function plays a pivotal role in guiding the evolutionary process by considering the factors that improve the value of the objective function and the feasibility of the solutions found. Our design of the fitness function takes these factors into account by adopting the proven technique from [39], which works for a wide variety of optimization problems: the search moves toward the feasible region by biasing feasible solutions against infeasible ones, penalizing the violation of constraints and replacing the current solution with a better one. The effectiveness of this design approach has been verified in our previous works, e.g., [5,10,31,32].
To describe the fitness function, let $S_f$ be the set of feasible solutions in the current population. The objective function value of the worst feasible solution in $S_f$ is then $S_f^{\min} = \min_{(x, y) \in S_f} F(x, y)$.
To penalize violation of Constraints (2) and (3), the terms $\left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} - y_p s_{pk}^{1} \right|$ and $\left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} - y_p s_{pk}^{2} \right|$ are introduced, respectively.
To penalize violation of Constraint (4), the term $\min\left( \sum_{p=1}^{P} y_p f_p - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (c_{dj} - o_{dj}),\ 0.0 \right)$ is introduced.
To penalize violation of Constraint (5), the term $\sum_{d=1}^{D} \min\left( 1 - \sum_{j=1}^{J_d} x_{dj},\ 0.0 \right)$ is introduced.
To penalize violation of Constraints (6) and (7), the terms $\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} \min\left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_D,\ 0.0 \right)$ and $\sum_{p=1}^{P} y_p \min\left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_P,\ 0.0 \right)$ are introduced, respectively.
The fitness function $F_1(x, y)$ used in this paper is defined in (9):
$F_1(x, y) = \begin{cases} F(x, y) & \text{if } (x, y) \text{ is feasible} \\ U(x, y) & \text{otherwise,} \end{cases}$ (9)
where $U(x, y)$ is defined in (10):
$U(x, y) = S_f^{\min} - \sum_{p=1}^{P} \sum_{k=1}^{K} \left( \left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} - y_p s_{pk}^{1} \right| + \left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} - y_p s_{pk}^{2} \right| \right) + \min\left( \sum_{p=1}^{P} y_p f_p - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (c_{dj} - o_{dj}),\ 0.0 \right) + \sum_{d=1}^{D} \min\left( 1 - \sum_{j=1}^{J_d} x_{dj},\ 0.0 \right) + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} \min\left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_D,\ 0.0 \right) + \sum_{p=1}^{P} y_p \min\left( \frac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p cf_{pdj} + x_{dj} c_{dj}} - r_P,\ 0.0 \right)$ (10)
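The following is a structural sketch of how (9) and (10) score a solution: feasible solutions are scored by the objective $F(x, y)$, while infeasible ones are scored strictly below the worst feasible solution by subtracting their constraint violation magnitudes. The `violations` callback is a condensed placeholder for the individual penalty terms listed above, not the paper's exact implementation.

def fitness_f1(solution, objective, is_feasible, violations, sf_min):
    """objective(s): value of F for solution s; violations(s): list of
    non-negative violation magnitudes for Constraints (2)-(7); sf_min:
    objective value of the worst feasible solution in the population."""
    if is_feasible(solution):
        return objective(solution)
    # Every unit of constraint violation pushes the score further below
    # sf_min, biasing the evolutionary search toward the feasible region.
    return sf_min - sum(violations(solution))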
The original DE approach was proposed for problems with a continuous solution space. As the decision variables of the DGRP are discrete, the solution space is discrete. The approach adopted in this study is to use a function that systematically converts a solution in the continuous solution space to a discrete solution. This approach can be applied to tailor existing solvers for continuous optimization problems to discrete optimization problems. The function used to convert a continuous solution to a discrete one is defined in Table 2.
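Table 2 is not reproduced here; a common conversion rule of this kind maps each continuous component through a sigmoid and thresholds it, so the sketch below is an illustrative stand-in rather than the paper's exact definition.

import math

def convert_to_discrete(u, threshold=0.5):
    # Squash each component into (0, 1), then round to a binary decision.
    return [1 if 1.0 / (1.0 + math.exp(-x)) >= threshold else 0 for x in u]

print(convert_to_discrete([-2.0, 0.3, 1.5]))  # -> [0, 1, 1]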

4. Creation of Two-Head Self-Adaptive Metaheuristic Algorithms

To study whether the saying “Two heads are better than one” can be applied to create effective metaheuristic algorithms, we define a two-head metaheuristic algorithm as a metaheuristic algorithm that uses two different mutation strategies. We define a single-head metaheuristic algorithm as a metaheuristic algorithm that uses only one mutation strategy. In this paper, we consider a set of mutation strategies and arbitrarily select any combination of two mutation strategies from the set to develop two-head self-adaptive metaheuristic algorithms. To validate whether the above saying can be applied to create effective metaheuristic algorithms, we conducted experiments to compare the performance of each two-head self-adaptive metaheuristic algorithm with the corresponding two single-head metaheuristic algorithms. We analyzed the results to illustrate the effectiveness of applying the above saying to develop self-adaptive metaheuristic algorithms.
The two-head self-adaptive metaheuristic algorithms developed by applying the above saying are created by selecting two mutation strategies from a set of four mutation strategies. Table 3 lists the four DE mutation strategies defined by (11)–(14) used in this study to create self-adaptive metaheuristic algorithms. As a self-adaptive metaheuristic algorithm needs only two distinct DE mutation strategies, we arbitrarily select any two of the four DE mutation strategies in Table 3. Therefore, we can generate 4!/(2!2!)=6 two-head self-adaptive metaheuristic algorithms.
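As a quick check of this count, the six strategy pairs can be enumerated directly; they correspond to the six algorithms compared in Section 5.

import itertools

# All unordered pairs of distinct strategies from the set {1, 2, 3, 4}.
print(list(itertools.combinations([1, 2, 3, 4], 2)))
# -> [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]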
To describe the mutation function in DE, we introduce several notations. The problem dimension is denoted by $N$ in this paper. The population size, i.e., the number of individuals in the population, is denoted by $NP$. In the $t$-th generation, the $i$-th individual in the population is denoted by $z_{ti} = (z_{tin})$, where $i \in \{1, 2, \ldots, NP\}$ and $n \in \{1, 2, \ldots, N\}$. A DE algorithm typically initializes a population of individuals randomly and attempts to improve the quality of the individuals by iteratively executing the operations of mutation, crossover and selection. The mutation operation generates a mutant vector $v_{(t+1)i} = (v_{(t+1)in})$ for the $i$-th individual in the $t$-th generation based on a mutation strategy. For example, suppose DE mutation strategy $s$ in Table 3 is used. The mutation function $\mu_s$ is then used to calculate the mutant vector, where $r_1$, $r_2$, $r_3$, $r_4$ and $r_5$ are distinct random integers generated between 1 and $NP$.
The flowchart of a two-head self-adaptive metaheuristic algorithm with two DE mutation strategies $s_1$ and $s_2$ is shown in Figure 1. The parameters needed for the two-head self-adaptive metaheuristic algorithm include the mutation strategy parameters $s_1$ and $s_2$; the probability $fp$, which is related to generating a scale factor and selecting a mutation strategy; the crossover probability $cr$; the learning period parameter $LP$; the population size parameter $NP$; and the number of generations $G$ of the evolution process. The parameters $s_1$, $s_2$, $LP$, $fp$ and $cr$ are specific to the two-head self-adaptive metaheuristic algorithm, whereas $NP$ and $G$ are commonly used algorithmic parameters in evolutionary algorithms. The mutation strategy parameters $s_1$ and $s_2$ specify the two candidate mutation strategies that may be used to create a mutant vector, which in turn is used to create a trial vector. The probability $fp$ determines the probability distribution used to generate the scale factor and also the mutation strategy used to create a mutant vector. The meaning of the crossover probability $cr$ is the same as in the standard DE approach; the difference is that $cr$ in the self-adaptive metaheuristic algorithm is updated regularly to improve performance. The learning period parameter $LP$ specifies the number of generations between updates of $fp$ and $cr$.
The overall operation of the proposed algorithm is divided into three steps: Step 1, initialization of the parameters; Step 2, initialization of the population; and Step 3, evolution of the population. In Step 1, the following parameters are initialized: the mutation strategy parameters $s_1$ and $s_2$, the probability $fp$, the crossover probability $cr$, the learning period parameter $LP$, the population size parameter $NP$ and the number of generations $G$. In Step 2, the initial population is created randomly. In Step 3, the individuals in the population evolve through the generation of scale factors, the calculation of mutant vectors, the calculation of trial vectors, the selection of trial vectors and the updating of the success counters, the failure counters, the success rates and related parameters.
After setting the above parameters, the algorithm proceeds to Step 2 to initialize the population and compute the fitness function value of each individual in the population. Then the algorithm executes the main nested loops in Step 3 to evolve the solutions. In Step 3-1, the algorithm generates a random number to determine the probability distribution used to generate the scale factor: if the random number is less than $fp$, the scale factor is generated from a Gaussian distribution; otherwise, it is generated from a uniform distribution. In Step 3-2, the algorithm calculates the mutant vector based on a random number generated from a uniform distribution to determine the mutation strategy to be used: if the random number is less than $fp$, mutation strategy $s_1$ is used to compute the mutant vector; otherwise, mutation strategy $s_2$ is used. In Step 3-3, the algorithm calculates the trial vector $u_{gi}$ based on a random number generated from a uniform distribution and the crossover probability: if the random number is less than $cr_i$, the algorithm sets the element $u_{gil}$ of the trial vector to the corresponding element $v_{gil}$ of the mutant vector; otherwise, it sets $u_{gil}$ to the corresponding element $z_{gil}$ of the individual. In Step 3-4, the algorithm determines whether to select the trial vector, and updates the success counter and the failure counter as needed. In Step 3-5, the algorithm updates the success rates, $fp$ and $cr$ at the end of each learning period.
The algorithm updates the crossover probability $cr$ with the average of the historical crossover probability values $cr_i$ that have successfully improved performance; these values are stored in a list $L$ in Step 3-4. The crossover probability is therefore updated as $cr = \frac{1}{|L|} \sum_{k=1}^{|L|} L(k)$.
The algorithm will exit if the termination condition is satisfied. The pseudo code of a two-head self-adaptive metaheuristic algorithm with two DE mutation strategies is shown in Algorithm 1.
Algorithm 1: Discrete Two-Head Self-Adaptive Metaheuristic Algorithm
Step 1: Set parameters
  Step 1-1: Set two strategies, $s_1$ and $s_2$
  Step 1-2: Set the learning period $LP$, the probability $fp = 0.5$ and the crossover probability $cr = 0.5$
  Step 1-3: Set the population size $NP$ and the number of generations $G$
Step 2: Generate a population randomly and evaluate the fitness function values of the individuals in the population
  Step 2-1: Generate $NP$ individuals randomly in the population
  Step 2-2: Compute each individual's fitness function value
Step 3: Improve solutions through evolution
  For $g = 1$ to $G$
    For $i = 1$ to $NP$
      Step 3-1: Create the scale factor $F_i$
        Create a random number $r$ from $U(0, 1)$
        $F_i = \begin{cases} r_1, \text{ where } r_1 \text{ is a Gaussian random number with } N(\mu, \sigma_1^2), & \text{if } r < fp \\ r_2, \text{ where } r_2 \text{ is a uniform random number sampled from } U(0, 1), & \text{otherwise} \end{cases}$
      Step 3-2: Compute the mutant vector $v_{gi}$
        Create a random number $r$ from $U(0, 1)$
        $s = \begin{cases} s_1 & \text{if } r < fp \\ s_2 & \text{otherwise} \end{cases}$
        For $n \in \{1, 2, \ldots, N\}$
          $v_{(g+1)in} = \mu_s$, where $\mu_s$ is defined in Table 3
        End For
      Step 3-3: Create a trial vector $u_{gi}$
        Create a Gaussian random number $cr_i$ with $N(cr, \sigma_2^2)$
        For $l \in \{1, 2, \ldots, N\}$
          Create a uniform random number $r$ from $U(0, 1)$
          $u_{gil} = \begin{cases} v_{gil} & \text{if } r < cr_i \\ z_{gil} & \text{otherwise} \end{cases}$
          $\bar{u}_{gil} \leftarrow Convert2Binary(u_{gil})$
        End For
      Step 3-4: Update the success counter or failure counter
        If $F_1(\bar{u}_{gi}) \ge F_1(z_{gi})$
          $z_{(g+1)i} = u_{gi}$
          $L \leftarrow L \cup \{cr_i\}$
          $S_s = S_s + 1$
        Else
          $U_s = U_s + 1$
        End If
    End For
    Step 3-5: Update the probability $fp$ and the crossover probability $cr$
      If $g \bmod LP = 0$
        $w_1 = S_{s_1} / (S_{s_1} + U_{s_1})$
        $w_2 = S_{s_2} / (S_{s_2} + U_{s_2})$
        $fp = w_1 / (w_1 + w_2)$
        $cr = \frac{1}{|L|} \sum_{k=1}^{|L|} L(k)$
      End If
  End For
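For readers who want to experiment with Algorithm 1, the following is a minimal, runnable Python sketch of the two-head self-adaptive DE loop. The Convert2Binary rule, the strategy pool (assumed here to be the classic DE/rand/1, DE/best/1, DE/rand/2 and DE/current-to-best/1, since Table 3 is not reproduced) and the toy fitness function are illustrative placeholders, not the paper's exact definitions.

import random

def convert2binary(u):
    # Hypothetical threshold rule standing in for Table 2 (an assumption).
    return [1 if x >= 0.5 else 0 for x in u]

def mutate(pop, best, i, F, strategy):
    # Assumed strategy pool: DE/rand/1, DE/best/1, DE/rand/2, DE/current-to-best/1.
    NP, N = len(pop), len(pop[0])
    r1, r2, r3, r4, r5 = random.sample([j for j in range(NP) if j != i], 5)
    a, b, c, d, e = pop[r1], pop[r2], pop[r3], pop[r4], pop[r5]
    if strategy == 1:
        return [a[n] + F * (b[n] - c[n]) for n in range(N)]
    if strategy == 2:
        return [best[n] + F * (a[n] - b[n]) for n in range(N)]
    if strategy == 3:
        return [a[n] + F * (b[n] - c[n]) + F * (d[n] - e[n]) for n in range(N)]
    return [pop[i][n] + F * (best[n] - pop[i][n]) + F * (a[n] - b[n]) for n in range(N)]

def sansde(fitness, N, s1, s2, NP=30, G=200, LP=20):
    fp, cr = 0.5, 0.5                        # Step 1: initial fp and cr
    S = {s1: 0, s2: 0}                       # success counters per strategy
    U = {s1: 0, s2: 0}                       # failure counters per strategy
    L = []                                   # crossover rates that led to success
    pop = [[random.random() for _ in range(N)] for _ in range(NP)]   # Step 2
    fit = [fitness(convert2binary(z)) for z in pop]
    for g in range(1, G + 1):                # Step 3
        best = pop[max(range(NP), key=lambda k: fit[k])]
        for i in range(NP):
            # Step 3-1: scale factor from a Gaussian or a uniform distribution.
            F = random.gauss(0.5, 0.3) if random.random() < fp else random.random()
            # Step 3-2: select one of the two heads with probability fp.
            s = s1 if random.random() < fp else s2
            v = mutate(pop, best, i, F, s)
            # Step 3-3: binomial crossover with a perturbed crossover rate.
            cri = min(max(random.gauss(cr, 0.1), 0.0), 1.0)
            u = [v[n] if random.random() < cri else pop[i][n] for n in range(N)]
            fu = fitness(convert2binary(u))
            # Step 3-4: selection; record success or failure of the chosen head.
            if fu >= fit[i]:
                pop[i], fit[i] = u, fu
                L.append(cri)
                S[s] += 1
            else:
                U[s] += 1
        # Step 3-5: update fp and cr at the end of each learning period.
        if g % LP == 0 and min(S[s1] + U[s1], S[s2] + U[s2]) > 0:
            w1 = S[s1] / (S[s1] + U[s1])
            w2 = S[s2] / (S[s2] + U[s2])
            if w1 + w2 > 0:
                fp = w1 / (w1 + w2)
            if L:
                cr = sum(L) / len(L)
    return max(fit)

# Toy usage: maximize the number of ones in a 20-bit string.
print(sansde(lambda bits: sum(bits), N=20, s1=1, s2=2))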

5. Results

The goal of this study is to validate whether the old saying "Two heads are better than one" holds for the two-head self-adaptive differential evolution algorithms created to solve the DGRP; that is, whether each two-head self-adaptive differential evolution algorithm is better than the corresponding two single-head differential evolution algorithms for solving the DGRP. In this study, it is assumed that the two distinct strategies used to create a two-head self-adaptive differential evolution algorithm are arbitrarily selected from the four DE mutation strategies defined in Table 3. Therefore, we can generate $6\ (= 4!/(2!\,2!))$ two-head self-adaptive metaheuristic algorithms. A series of experiments was designed to validate the statement that the above saying holds for the two-head self-adaptive differential evolution algorithms created to solve the DGRP.
To describe the experiments conducted in this study, let $S = \{1, 2, 3, 4\}$ denote the set of four mutation strategies defined in Table 3. We refer to the two-head self-adaptive metaheuristic algorithm created based on strategies $s_1$ and $s_2$ as $SaNSDE(s_1, s_2)$, where $s_1 \in S$, $s_2 \in S$ and $s_1 \ne s_2$. To validate that the above saying holds, we must show that the performance of $SaNSDE(s_1, s_2)$ is better than those of $DE(s_1)$ and $DE(s_2)$. We conducted the experiments by applying $SaNSDE(s_1, s_2)$, $DE(s_1)$ and $DE(s_2)$ to the same set of test cases. Table 4 lists all the experiments conducted in this study. In the first experiment, we compared $SaNSDE(1, 2)$ with $DE(1)$ and $DE(2)$. In the second experiment, we compared $SaNSDE(1, 3)$ with $DE(1)$ and $DE(3)$. In the third experiment, we compared $SaNSDE(1, 4)$ with $DE(1)$ and $DE(4)$. Similarly, we compared $SaNSDE(2, 3)$ with $DE(2)$ and $DE(3)$ in the fourth experiment and $SaNSDE(2, 4)$ with $DE(2)$ and $DE(4)$ in the fifth experiment. Finally, we compared $SaNSDE(3, 4)$ with $DE(3)$ and $DE(4)$ in the sixth experiment.
We now describe the preparation of the data and the parameters used by the algorithms in the experiments. The data of the test cases used in the experiments were generated randomly in a region in Taiwan. The method used to generate the test case data is general and can be applied to generate test cases for other regions of the world. The locations corresponding to the randomly generated data are restricted to the specified region: a large region represents a large geographic area, whereas a small region represents a small geographic area. Data generated with a large region are suitable for simulating long-distance ridesharing, whereas data generated with a small region are suitable for short-distance ridesharing. The input data of each test case include the itineraries of drivers and passengers. A passenger's itinerary includes an identification number, the pick-up location, the drop-off location, the original transport cost, the number of seats requested, the earliest departure time and the latest arrival time. A driver's itinerary includes an identification number, the start location, the destination location, the earliest departure time and the latest arrival time. The bids of drivers and passengers were generated by applying the Bid Generation Algorithm for Drivers and the Bid Generation Procedure for Passengers in [5].
By applying the above procedures, we generated several test cases and used them in each series of experiments. Case 1 through Case 10 are small test cases, whereas Case 11 through Case 14 are larger test cases. These test cases are available for download via the links in [40], and the format of each file is defined in a text file accessible via the link. The parameters used in $SaNSDE(s_1, s_2)$, $DE(s_1)$ and $DE(s_2)$ for Case 1 through Case 10 are different from those for Case 11 through Case 14.
The design of the test cases generated in [40] is based on the approach of incremental tests described in the report "Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization" [41], which states that four instances (problems with dimensions 10, 30, 50 and 100) of each constrained optimization problem must be solved by an algorithm to test its performance. Although the DGRP is different from the 28 constrained optimization problems in the CEC 2017 Competition and its decision variables are discrete values instead of real values, the way to test the performance of an algorithm for solving the DGRP can also be developed based on the incremental tests used in the CEC 2017 Competition. In this study, the problem dimensions of the 14 test cases for the DGRP increase from 14 to 100; the dimensions are 14, 16, 17, 18, 20, 22, 24, 26, 28, 30, 40, 60, 80 and 100. Therefore, the method used to evaluate the algorithms in this study is similar to the one adopted in the CEC 2017 Competition and covers more instances of the DGRP with various dimensions than the four instances specified in the report.
For the parameters used by the algorithms in the experiments, each algorithm needs a number of algorithmic parameters to work properly. Some of the algorithmic parameters are common to all the algorithms related to this study whereas other parameters are specific to a particular algorithm. The values of the algorithmic parameters used by each algorithm in the experiments are listed in Table 5.
Based on the parameters described above, we ran each algorithm ten times. The results of Experiment 1, obtained by applying the two-head self-adaptive algorithm $SaNSDE(1, 2)$, $DE(1)$ and $DE(2)$ to the test cases, are summarized in Table 6. Figure 2 shows the results in a bar chart. The performance of $SaNSDE(1, 2)$ is either better than or the same as that of $DE(1)$ and $DE(2)$; in particular, it is either better than or the same as that of $DE(2)$ for all test cases. Moreover, $SaNSDE(1, 2)$ significantly outperforms $DE(1)$ and $DE(2)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(1, 2)$.
The results of Experiment 2, obtained by applying $SaNSDE(1, 3)$, $DE(1)$ and $DE(3)$ to the test cases, are summarized in Table 7. Figure 3 shows the results in a bar chart. The performance of $SaNSDE(1, 3)$ is either better than or the same as that of $DE(1)$ and $DE(3)$. In particular, $SaNSDE(1, 3)$ significantly outperforms $DE(1)$ and $DE(3)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(1, 3)$.
The results of Experiment 3, obtained by applying $SaNSDE(1, 4)$, $DE(1)$ and $DE(4)$ to the test cases, are summarized in Table 8. Figure 4 shows the results in a bar chart. The performance of $SaNSDE(1, 4)$ is either better than or the same as that of $DE(1)$ and $DE(4)$. In particular, $SaNSDE(1, 4)$ significantly outperforms $DE(1)$ and $DE(4)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(1, 4)$.
The results of Experiment 4, obtained by applying $SaNSDE(2, 3)$, $DE(2)$ and $DE(3)$ to the test cases, are summarized in Table 9. Figure 5 shows the results in a bar chart. The performance of $SaNSDE(2, 3)$ is either better than or the same as that of $DE(2)$ and $DE(3)$. In particular, $SaNSDE(2, 3)$ significantly outperforms $DE(2)$ and $DE(3)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(2, 3)$.
The results of Experiment 5, obtained by applying $SaNSDE(2, 4)$, $DE(2)$ and $DE(4)$ to the test cases, are summarized in Table 10. Figure 6 shows the results in a bar chart. The performance of $SaNSDE(2, 4)$ is either better than or the same as that of $DE(2)$ and $DE(4)$. In particular, $SaNSDE(2, 4)$ significantly outperforms $DE(2)$ and $DE(4)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(2, 4)$.
The results of Experiment 6, obtained by applying $SaNSDE(3, 4)$, $DE(3)$ and $DE(4)$ to the test cases, are summarized in Table 11. Figure 7 shows the results in a bar chart. The performance of $SaNSDE(3, 4)$ is either better than or the same as that of $DE(3)$ and $DE(4)$. In particular, $SaNSDE(3, 4)$ significantly outperforms $DE(3)$ and $DE(4)$ for the larger cases, Case 12, Case 13 and Case 14. The results show that the saying is valid for $SaNSDE(3, 4)$.
Table 12 shows the average fitness values and average numbers of generations for $SaNSDE(1, 2)$, $SaNSDE(1, 3)$, $SaNSDE(1, 4)$, $SaNSDE(2, 3)$, $SaNSDE(2, 4)$ and $SaNSDE(3, 4)$. Figure 8 shows the results in a bar chart. Note that these two-head algorithms generate the same average fitness values for all test cases with the exception of Case 13 and Case 14.
In addition to comparing the two-head self-adaptive algorithms with the single-head DE algorithms, we also compared them with the PSO algorithm. The results obtained by PSO are shown in Table 12 for comparison with the six two-head algorithms. The results show that all six two-head algorithms outperform PSO in terms of average fitness values for Case 3 through Case 14, and the average fitness values for Case 1 and Case 2 are the same for all of these algorithms. For convergence speed, the six two-head algorithms outperform PSO in terms of the average number of generations for Case 1 through Case 12 and Case 14. The only exception is Case 13: for this case, $SaNSDE(1, 2)$, $SaNSDE(1, 4)$, $SaNSDE(2, 4)$ and $SaNSDE(3, 4)$ outperform PSO in terms of the average number of generations, whereas the average numbers of generations of $SaNSDE(1, 3)$ and $SaNSDE(2, 3)$ are greater than that of PSO. In spite of these exceptions, the two-head algorithms outperform PSO in terms of the average number of generations for most test cases.
In this paper, we use the standard deviation to study the robustness of the two-head self-adaptive algorithms. The standard deviation measures the amount of variation in the average fitness values; it is the same as the one defined in statistics and can be calculated using Excel. Suppose there are $N$ observations or outputs $z_1, z_2, z_3, \ldots, z_N$ obtained by experiments. The standard deviation is calculated by the formula $\sqrt{\frac{\sum_{i=1}^{N} (z_i - \bar{z})^2}{N - 1}}$, where $\bar{z}$ is the mean of $z_1, z_2, z_3, \ldots, z_N$. In this study, the outputs $z_1, z_2, z_3, \ldots, z_N$ in the above formula are the average fitness values of the experiments for all test cases.
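As a small illustration of this robustness metric, the sketch below computes the sample standard deviation defined above together with a normalized variant. Normalizing by the absolute mean (i.e., the coefficient of variation) is an assumption about how the normalized values in Table 13 are formed, since the normalization is not defined here.

import math

def std_dev(z):
    # Sample standard deviation with the N - 1 denominator, as defined above.
    zbar = sum(z) / len(z)
    return math.sqrt(sum((v - zbar) ** 2 for v in z) / (len(z) - 1))

def normalized_std_dev(z):
    # Assumed normalization: divide by the absolute mean (coefficient of variation).
    return std_dev(z) / abs(sum(z) / len(z))

runs = [102.0, 101.5, 102.5, 102.0]  # illustrative average fitness values
print(std_dev(runs), normalized_std_dev(runs))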
To study the robustness of the two-head self-adaptive algorithms, we calculated the average fitness values, the standard deviations of the average fitness values and the normalized standard deviations of the average fitness values of $SaNSDE(1, 2)$, $SaNSDE(1, 3)$, $SaNSDE(1, 4)$, $SaNSDE(2, 3)$, $SaNSDE(2, 4)$ and $SaNSDE(3, 4)$. The results are shown in the three columns on the left in Table 13. They indicate that the normalized standard deviation of the average fitness values of all of these two-head self-adaptive algorithms is zero for Case 1 through Case 12 and very small for Case 13 and Case 14.
To compare the robustness of the two-head self-adaptive algorithms with that of $DE(1)$, $DE(2)$, $DE(3)$ and $DE(4)$, we also calculated the average fitness values, standard deviations and normalized standard deviations of the average fitness values of these single-head DE algorithms. The results are shown in the three columns on the right in Table 13. They indicate that the normalized standard deviations of the average fitness values of all of these single-head DE algorithms are significantly larger than those of the two-head algorithms for all cases. In summary, the two-head self-adaptive algorithms are more robust than the single-head DE algorithms.
We also compared the robustness of the two-head self-adaptive algorithms with that of the PSO algorithm. The results are shown in Table 14. They indicate that the normalized standard deviation of the PSO algorithm is significantly larger than those of the two-head algorithms for all cases. In summary, the two-head self-adaptive algorithms are more robust than the PSO algorithm.

6. Discussion

With its long history, the saying "Two heads are better than one" has been widely used in daily life to encourage two people to work together to solve a problem and obtain a better solution than when working alone. However, the application of this saying to creating new optimization algorithms is still limited. Although the old saying works effectively for human beings in solving problems, whether it can also work for developers in creating effective optimization problem solvers in the realm of artificial intelligence is an interesting research question. In this study, we have validated the effectiveness of creating new self-adaptive algorithms based on this old saying to solve the DGRP. In our previous study, one self-adaptive algorithm was proposed to solve the DGRP. In this study, we have demonstrated how to create a series of self-adaptive algorithms based on the old saying and validated the effectiveness of this approach through experiments and comparison with the previously proposed algorithms. We created two-head self-adaptive algorithms, each with two mutation strategies arbitrarily selected from a set of four mutation strategies, resulting in six self-adaptive algorithms. To validate whether the old saying holds for the six two-head self-adaptive algorithms, we applied each of them, as well as the corresponding two single-head algorithms, to solve several test cases and then compared the experimental results.
Due to the additional constraints in the DGRP, the problem addressed in this paper is NP-hard, and no efficient method is known for finding optimal solutions to NP-hard problems. Evolutionary algorithms are widely used to find approximate solutions for such problems. Scalability is an important property that determines whether an algorithm remains effective as the scale of the problem grows; a scalable algorithm works effectively as the problem dimension (the number of decision variables) increases significantly. According to the results in Table 12, the performance of the PSO algorithm deteriorates seriously for Case 8 through Case 14, whereas the two-head algorithms SaNSDE(1,2), SaNSDE(1,3), SaNSDE(1,4), SaNSDE(2,3), SaNSDE(2,4) and SaNSDE(3,4) work effectively for all test cases. This shows that the two-head algorithms are more scalable than PSO.
For smaller cases, the performance of each two-head SaNSDE algorithm is the same as or better than that of the corresponding two single-head DE algorithms; for larger cases, each two-head SaNSDE algorithm clearly outperforms them. For example, SaNSDE(1,2) significantly outperforms DE(1) and DE(2) for the larger cases, Case 12, Case 13 and Case 14. This shows that a two-head SaNSDE algorithm is more scalable than the corresponding two single-head DE algorithms; the two-head SaNSDE algorithms created with the proposed method thus improve scalability. By calculating the normalized standard deviation of the average fitness values of all two-head self-adaptive algorithms and all single-head algorithms on the test cases, we found that the normalized standard deviations of the single-head DE algorithms are significantly larger than those of the two-head algorithms. Based on these results, a new finding is that the two-head self-adaptive algorithms are more robust than the single-head DE algorithms. Another finding of this study is that the old saying not only works effectively for human beings but also works effectively in the creation of new self-adaptive algorithms to solve the DGRP. In other words, the effectiveness of the saying transfers from human problem solving to optimization problem solving in the realm of artificial intelligence; the saying provides a simple and systematic approach to the development of effective optimization problem solvers.
Although the results presented in this study show that the two-head self-adaptive algorithms are more scalable than the corresponding standard DE algorithms and the standard PSO algorithm for the DGRP with dimensions no greater than 100, finding the largest DGRP instance that can be solved effectively by the two-head self-adaptive algorithms requires further study and is a non-trivial, interesting future research direction. To the best of our knowledge, there is no "one size fits all" evolutionary algorithm able to solve a problem of arbitrary size effectively. When an evolutionary algorithm cannot work effectively for a large problem, mechanisms such as problem decomposition and sub-populations [41] can be applied to improve its scalability. Combining such mechanisms with the two-head self-adaptive algorithms developed in this paper is another non-trivial and interesting future research direction for further increasing their scalability.

7. Conclusions

Ridesharing is a popular transport mode for enhancing the sustainability of cities. To incentivize drivers and riders to accept ridesharing through the guarantee of a discount, the DGRP has been formulated. Because of its computational complexity, metaheuristic algorithms are a practical way to solve the DGRP. Metaheuristic algorithms have been widely used in fields such as management, engineering, operations research, transportation and design due to their ability to solve complex optimization problems that cannot be handled by classical optimization methods; they can be applied even when the gradient of the objective function is unavailable, and they are largely problem-independent, making them applicable across a wide variety of domains. Metaheuristic algorithms are stochastic optimization algorithms that iteratively create candidate trial solutions from previously found candidate solutions to improve solution quality. Although there is no guarantee that a metaheuristic algorithm will find the global optimum, it usually produces quality solutions that are useful in practice. Because of these advantages, many metaheuristic algorithms have been proposed for complex optimization problems in different problem domains. In the literature, metaheuristic algorithms have been created in different ways: based on ideas from physics, the concept of swarms in natural or artificial systems, the phenomena of biological evolution, the behavior of entities in games, and inspiration from human activities or human intelligence such as sayings or proverbs. In this paper, we focused on validating the effectiveness of creating new self-adaptive differential evolution algorithms based on the old saying "Two heads are better than one" to solve the DGRP.
Each new self-adaptive differential evolution algorithm developed in this paper is a two-head self-adaptive algorithm created by selecting two different mutation strategies from a set of four; a total of six two-head self-adaptive algorithms were created. To validate whether the old saying holds for them, we compared the results obtained by each of the six two-head self-adaptive algorithms with those of the corresponding two single-head algorithms and the PSO algorithm on several test cases. The results indicate that each two-head self-adaptive algorithm outperforms the corresponding two single-head algorithms and the PSO algorithm on most test cases. One new finding is that the two-head self-adaptive algorithms created based on the saying are more scalable and robust than the corresponding single-head DE algorithms and the PSO algorithm for problem instances with up to 100 decision variables, the problem size used in the CEC 2017 Competition for Large-Scale Global Optimization (LSGO). Another finding is that the old saying that works effectively for human beings also works effectively in the creation of new self-adaptive algorithms to solve the DGRP. In one of our previous papers, we demonstrated the validity of applying the saying "Two heads are better than one" to develop two-head self-adaptive algorithms for another ridesharing problem, arising in trust-based shared mobility systems; the results of that paper and the current one are encouraging, as together they show that the approach is effective for both problems. This study has limitations. The strategies used here are mutation strategies in DE; whether the results still hold for strategies of other metaheuristic algorithms requires further study. Although the two-head self-adaptive algorithms created in this study work effectively for problem instances with dimensions no greater than 100, finding the largest DGRP instance they can solve effectively, and developing methods to increase their scalability for larger instances of the DGRP, are challenging future research directions. Another future research direction is to study whether the way two-head self-adaptive algorithms are developed in this study is effective for other problems. Although the two-head algorithms are proposed for single-objective optimization problems, they can also be applied to multi-objective optimization problems through the weighted sum method (sketched below). Studying the effectiveness of applying the two-head algorithms to ridesharing problems with multiple objective functions is another interesting future research direction.
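For reference, the weighted sum method mentioned above scalarizes $m$ objectives $F_1, \ldots, F_m$ into a single objective. A standard textbook formulation (not taken from this paper) is:

```latex
\max_{x} \; \sum_{i=1}^{m} w_i \, F_i(x), \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1,
```

so that any single-objective solver, including the two-head algorithms, can be applied once the weights $w_i$ are fixed.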

Funding

This research was supported in part by the National Science and Technology Council, Taiwan, under Grant NSTC 111-2410-H-324-003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in this study are available at https://drive.google.com/drive/folders/1COGSGVgo9bjjqpUdY29bxptyO6LlzXKd?usp=sharing.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Uber. Available online: https://www.uber.com (accessed on 29 November 2023).
  2. Lyft. Available online: https://www.lyft.com (accessed on 29 November 2023).
  3. Agatz, N.; Erera, A.; Savelsbergh, M.; Wang, X. Optimization for dynamic ride-sharing: A review. Eur. J. Oper. Res. 2012, 223, 295–303.
  4. Furuhata, M.; Dessouky, M.; Ordóñez, F.; Brunet, M.; Wang, X.; Koenig, S. Ridesharing: The state-of-the-art and future directions. Transp. Res. Part B Methodol. 2013, 57, 28–46.
  5. Hsieh, F.-S.; Zhan, F.; Guo, Y. A solution methodology for carpooling systems based on double auctions and cooperative coevolutionary particle swarms. Appl. Intell. 2019, 49, 741–763.
  6. Jin, Q.; Li, B.; Cheng, Y.; Zhao, X. Real-time multi-platform route planning in ridesharing. Expert Syst. Appl. 2024, 255, 124819.
  7. Nitter, J.; Yang, S.; Fagerholt, K.; Ormevik, A.B. The static ridesharing routing problem with flexible locations: A Norwegian case study. Comput. Oper. Res. 2024, 167, 106669.
  8. Li, W.; Yu, T.; Zhang, Y.; Chen, X. A shared ride matching approach to low-carbon and electrified ridesplitting. J. Clean. Prod. 2024, 467, 143031.
  9. Mourad, A.; Puchinger, J.; Chu, C. A survey of models and algorithms for optimizing shared mobility. Transp. Res. Part B Methodol. 2019, 123, 323–346.
  10. Martins, L.C.; Torre, R.; Corlu, C.G.; Juan, A.A.; Masmoudi, M.A. Optimizing ride-sharing operations in smart sustainable cities: Challenges and the need for agile algorithms. Comput. Ind. Eng. 2021, 153, 107080.
  11. Ting, K.H.; Lee, L.S.; Pickl, S.; Seow, H.-V. Shared Mobility Problems: A Systematic Review on Types, Variants, Characteristics, and Solution Approaches. Appl. Sci. 2021, 11, 7996.
  12. Hu, S.; Dessouky, M.M.; Uhan, N.A.; Vayanos, P. Cost-sharing mechanism design for ride-sharing. Transp. Res. Part B Methodol. 2021, 150, 410–434.
  13. Jiao, G.; Ramezani, M. Incentivizing shared rides in e-hailing markets: Dynamic discounting. Transp. Res. Part C Emerg. Technol. 2022, 144, 103879.
  14. Hsieh, F.-S. A Comparison of Three Ridesharing Cost Savings Allocation Schemes Based on the Number of Acceptable Shared Rides. Energies 2021, 14, 6931.
  15. Hsieh, F.-S. Improving Acceptability of Cost Savings Allocation in Ridesharing Systems Based on Analysis of Proportional Methods. Systems 2023, 11, 187.
  16. Hsieh, F.-S. A Self-Adaptive Meta-Heuristic Algorithm Based on Success Rate and Differential Evolution for Improving the Performance of Ridesharing Systems with a Discount Guarantee. Algorithms 2024, 17, 9.
  17. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924.
  18. Guilmeau, T.; Chouzenoux, E.; Elvira, V. Simulated Annealing: A Review and a New Scheme. In Proceedings of the 2021 IEEE Statistical Signal Processing Workshop (SSP), Rio de Janeiro, Brazil, 11–14 July 2021; pp. 101–105.
  19. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
  20. Eberhart, R.C.; Shi, Y. Comparison between genetic algorithms and particle swarm optimization. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; Volume 1447, pp. 611–616.
  21. Yang, X.S. Firefly algorithms for multimodal optimization. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, pp. 169–178.
  22. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In Lecture Notes in Computer Science; Melin, P., Castillo, O., Aguilar, L.T., Kacprzyk, J., Pedrycz, W., Eds.; Foundations of Fuzzy Logic and Soft Computing (IFSA): Cancun, Mexico, 2007; Volume 4529, pp. 789–798.
  23. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  24. Holland, J.H. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 1992.
  25. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  26. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523.
  27. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185.
  28. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 469–492.
  29. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181.
  30. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Human mental search: A new population-based metaheuristic optimization algorithm. Appl. Intell. 2017, 47, 850–887.
  31. Su, H.; Chen, R.; Tang, S.; Zheng, X.; Li, J.; Yin, Z.; Ouyang, W.; Dong, N. Two Heads Are Better Than One: A Multi-Agent System Has the Potential to Improve Scientific Idea Generation. arXiv 2024, arXiv:2410.09403.
  32. Yuan, L.; Zhang, Z.; Li, L.; Guan, C.; Yu, Y. A Survey of Progress on Cooperative Multi-Agent Reinforcement Learning in Open Environment. arXiv 2023, arXiv:2312.01058.
  33. Dyba, T.; Arisholm, E.; Sjoberg, D.; Hannay, J.; Shull, F. Are Two Heads Better than One? On the Effectiveness of Pair Programming. IEEE Softw. 2007, 24, 12–15.
  34. Li, Z.; Xie, L.; Song, H. Two heads are better than one: Dual systems obtain better performance in facial comparison. Forensic Sci. Int. 2023, 353, 111879.
  35. Jindal, S.; Sparks, A. Synergy at work: Two heads are better than one. J. Assist. Reprod. Genet. 2018, 35, 1227–1228.
  36. Li, A.; Liu, W.; Zheng, C.; Fan, C.; Li, X. Two Heads are Better Than One: A Two-Stage Complex Spectral Mapping Approach for Monaural Speech Enhancement. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1829–1843.
  37. Hsieh, F.-S. Comparison of a Hybrid Firefly–Particle Swarm Optimization Algorithm with Six Hybrid Firefly–Differential Evolution Algorithms and an Effective Cost-Saving Allocation Method for Ridesharing Recommendation Systems. Electronics 2024, 13, 324.
  38. Hsieh, F.-S. Applying "Two Heads Are Better Than One" Human Intelligence to Develop Self-Adaptive Algorithms for Ridesharing Recommendation Systems. Electronics 2024, 13, 2241.
  39. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338.
  40. Data of Test Cases 1–14. Available online: https://drive.google.com/drive/folders/1COGSGVgo9bjjqpUdY29bxptyO6LlzXKd?usp=sharing (accessed on 26 February 2025).
  41. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report, 2016. Available online: https://www.researchgate.net/publication/317228117_Problem_Definitions_and_Evaluation_Criteria_for_the_CEC_2017_Competition_and_Special_Session_on_Constrained_Single_Objective_Real-Parameter_Optimization#fullTextFileContent (accessed on 7 March 2025).
Figure 1. A flowchart of a two-head self-adaptive algorithm with two strategies.
Figure 2. Comparison of SaNSDE(1,2) with DE(1) and DE(2).
Figure 3. Comparison of SaNSDE(1,3) with DE(1) and DE(3).
Figure 4. Comparison of SaNSDE(1,4) with DE(1) and DE(4).
Figure 5. Comparison of SaNSDE(2,3) with DE(2) and DE(3).
Figure 6. Comparison of SaNSDE(2,4) with DE(2) and DE(4).
Figure 7. Comparison of SaNSDE(3,4) with DE(3) and DE(4).
Figure 8. Comparison of SaNSDE(1,2), SaNSDE(1,3), SaNSDE(1,4), SaNSDE(2,3), SaNSDE(2,4), SaNSDE(3,4) and PSO.
Table 1. Notation of parameters, variables and symbols.

| Variable | Meaning |
|---|---|
| $P$ | The number of passengers. |
| $D$ | The number of drivers. |
| $p$ | An integer index for a passenger, where $p \in \{1, 2, 3, \ldots, P\}$. |
| $d$ | An integer index for a driver, where $d \in \{1, 2, 3, \ldots, D\}$. |
| $K$ | The number of locations of passengers, where $K$ is equal to $P$. |
| $k$ | An integer index for a location, where $k \in \{1, 2, \ldots, K\}$. |
| $J_d$ | The number of bids submitted by driver $d$, where $d \in \{1, 2, 3, \ldots, D\}$. |
| $j$ | The index of the $j$-th bid submitted by driver $d$, where $d \in \{1, 2, 3, \ldots, D\}$ and $j \in \{1, 2, \ldots, J_d\}$. |
| $DB_{dj}$ | The $j$-th bid submitted by driver $d$, with $DB_{dj} = (q_{dj1}^1, q_{dj2}^1, q_{dj3}^1, \ldots, q_{djP}^1, q_{dj1}^2, q_{dj2}^2, q_{dj3}^2, \ldots, q_{djP}^2, o_{dj}, c_{dj})$, where $q_{djk}^1$ is the number of seats allocated to the pick-up location $k$ in the $j$-th bid of driver $d$, $q_{djk}^2$ is the number of seats released at the drop-off location $k$ in the $j$-th bid of driver $d$, $o_{dj}$ is the original cost of driver $d \in \{1, 2, \ldots, D\}$ when the driver travels alone, and $c_{dj}$ is the transport cost of the bid. |
| $PB_p$ | The bid of passenger $p$, with $PB_p = (s_{p1}^1, s_{p2}^1, s_{p3}^1, \ldots, s_{pP}^1, s_{p1}^2, s_{p2}^2, s_{p3}^2, \ldots, s_{pP}^2, f_p)$, where $p \in \{1, 2, 3, \ldots, P\}$, $s_{pk}$ is the number of seats requested by $p$ at location $k$ and $f_p$ is passenger $p$'s original transport cost when he/she travels alone. |
| $s_{pk}^1$ | No. of seats requested at the pick-up location $k$; $s_{pp}^1 > 0$ and $s_{pk}^1 = 0 \; \forall k \neq p$. |
| $s_{pk}^2$ | No. of seats released at the drop-off location $k$; $s_{pp}^2 > 0$ and $s_{pk}^2 = 0 \; \forall k \neq p$. |
| $x_{dj}$ | Driver $d$'s decision variable, where $d \in \{1, 2, \ldots, D\}$ and $j \in \{1, 2, \ldots, J_d\}$: $x_{dj} = 1$ if $DB_{dj}$ is a winning bid and $x_{dj} = 0$ otherwise. |
| $y_p$ | Passenger $p$'s decision variable, where $p \in \{1, 2, 3, \ldots, P\}$: $y_p = 1$ if $PB_p$ is a winning bid and $y_p = 0$ otherwise. |
| $r_D$ | Drivers' minimal expected discount. |
| $r_P$ | Passengers' minimal expected discount. |
| $F(x, y)$ | The objective function, $F(x, y) = \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$. |
| $\Gamma_{dj}$ | The set of passengers on the ride of the $j$-th bid $DB_{dj}$ of driver $d$, where $d \in \{1, 2, \ldots, D\}$. |
| $F_{dj}(x, y)$ | Cost savings of the $j$-th bid $DB_{dj}$ of driver $d$, where $d \in \{1, 2, \ldots, D\}$: $F_{dj}(x, y) = \sum_{p \in \Gamma_{dj}} y_p f_p + x_{dj} o_{dj} - x_{dj} c_{dj}$. |
| $cf_p^{dj}$ | Transport cost for passenger $p \in \Gamma_{dj}$ on the ride of the $j$-th bid $DB_{dj}$, where $d \in \{1, 2, \ldots, D\}$. |
| $U(0, 1)$ | Uniform distribution. |
| $N(\mu, \sigma^2)$ | Gaussian distribution. |
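As an illustration of how the objective function $F(x, y)$ in Table 1 combines winning bids, the following minimal Python sketch evaluates it directly from the definition; the flat list/dict data layout and the function name are assumptions made for the example, not the paper's data structures.

```python
def objective(y, f, x, o, c):
    """F(x, y) = sum_p y_p * f_p + sum_{d,j} x_{dj} * (o_{dj} - c_{dj}).

    y, f: per-passenger winning indicators and stand-alone transport costs.
    x, o, c: per-bid winning indicators, drivers' stand-alone costs and bid
    transport costs, all keyed by the (driver, bid) pair (d, j).
    """
    passenger_part = sum(y[p] * f[p] for p in range(len(y)))
    driver_part = sum(x[dj] * (o[dj] - c[dj]) for dj in x)
    return passenger_part + driver_part

# Hypothetical toy instance: two passengers and one driver with one winning bid.
print(objective(y=[1, 1], f=[10.0, 12.0],
                x={(1, 1): 1}, o={(1, 1): 20.0}, c={(1, 1): 25.0}))  # 17.0
```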
Table 2. Definition of function BinaryTransform.

Function BinaryTransform
Input: $u$
Output: $\bar{u}$
Step 1: Calculate $s(v) = \frac{1}{1 + \exp(-v)}$, where $v = \begin{cases} V_{\max} & \text{if } u > V_{\max} \\ u & \text{if } -V_{\max} \le u \le V_{\max} \\ -V_{\max} & \text{if } u < -V_{\max} \end{cases}$
Step 2: Calculate $\bar{u} = \begin{cases} 1 & \text{if } rs_{id} < s(v) \\ 0 & \text{otherwise} \end{cases}$, where $rs_{id}$ is a random variable with uniform distribution $U(0, 1)$.
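A minimal Python sketch of the transform in Table 2 follows; the clipping bound value $V_{\max} = 4$ is an illustrative assumption, as the paper's setting is not restated here.

```python
import math
import random

def binary_transform(u, v_max=4.0):
    """Sigmoid-based binarization of a continuous component u (Table 2)."""
    v = max(-v_max, min(v_max, u))           # Step 1: clip u to [-V_max, V_max]
    s = 1.0 / (1.0 + math.exp(-v))           # Step 1: sigmoid s(v)
    return 1 if random.random() < s else 0   # Step 2: stochastic rounding

# A large positive u is mapped to 1 with high probability, a large negative u to 0.
print([binary_transform(3.5) for _ in range(5)])
```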
Table 3. Four DE mutation strategies.

| DE Mutation Strategy | Definition |
|---|---|
| $s = 1$ | $\mu_1:\; v_{(t+1)i}^n = z_{tr_1}^n + F_i (z_{tr_2}^n - z_{tr_3}^n)$ (11) |
| $s = 2$ | $\mu_2:\; v_{(t+1)i}^n = z_{tb}^n + F_i (z_{tr_2}^n - z_{tr_3}^n)$ (12) |
| $s = 3$ | $\mu_3:\; v_{(t+1)i}^n = z_{tr_1}^n + F_i (z_{tr_2}^n - z_{tr_3}^n) + F_i (z_{tr_4}^n - z_{tr_5}^n)$ (13) |
| $s = 4$ | $\mu_4:\; v_{(t+1)i}^n = z_{tb}^n + F_i (z_{tr_1}^n - z_{tr_2}^n) + F_i (z_{tr_3}^n - z_{tr_4}^n)$ (14) |
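Read as recurrences over the population $\{z_t\}$, Equations (11)-(14) correspond to the standard DE variants rand/1, best/1, rand/2 and best/2. The following minimal Python sketch is one way to implement them; it is an illustration under that reading, not the paper's implementation, and it assumes the indices $r_1, \ldots, r_5$ denote mutually distinct random population members and $z_{tb}$ the current best vector.

```python
import random

def mutate(s, pop, best, F_i):
    """The four DE mutation strategies of Table 3 (Equations (11)-(14)).

    pop: list of real-valued vectors (at least five, so r1..r5 are distinct);
    best: the best vector of the current generation (z_tb in Table 3);
    F_i: the scale factor of individual i.
    """
    r1, r2, r3, r4, r5 = (pop[i] for i in random.sample(range(len(pop)), 5))
    n = len(best)
    if s == 1:   # DE/rand/1, Equation (11)
        return [r1[k] + F_i * (r2[k] - r3[k]) for k in range(n)]
    if s == 2:   # DE/best/1, Equation (12)
        return [best[k] + F_i * (r2[k] - r3[k]) for k in range(n)]
    if s == 3:   # DE/rand/2, Equation (13)
        return [r1[k] + F_i * (r2[k] - r3[k]) + F_i * (r4[k] - r5[k]) for k in range(n)]
    if s == 4:   # DE/best/2, Equation (14)
        return [best[k] + F_i * (r1[k] - r2[k]) + F_i * (r3[k] - r4[k]) for k in range(n)]
    raise ValueError("s must be in {1, 2, 3, 4}")
```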
Table 4. Experiments to validate "Two Heads Are Better Than One".

| Experiment | Two-Head Self-Adaptive Algorithm | Single-Head Algorithm 1 | Single-Head Algorithm 2 |
|---|---|---|---|
| 1 | SaNSDE(1,2) | DE(1) | DE(2) |
| 2 | SaNSDE(1,3) | DE(1) | DE(3) |
| 3 | SaNSDE(1,4) | DE(1) | DE(4) |
| 4 | SaNSDE(2,3) | DE(2) | DE(3) |
| 5 | SaNSDE(2,4) | DE(2) | DE(4) |
| 6 | SaNSDE(3,4) | DE(3) | DE(4) |
Table 5. Parameters used in the algorithms SaNSDE($s_1$,$s_2$), DE($s_1$), DE($s_2$) and PSO.

| Algorithm | Parameters for Case 1 Through Case 10 | Parameters for Case 11 Through Case 14 |
|---|---|---|
| SaNSDE($s_1$,$s_2$) | POP = 30, Gen = 1000, LP = 1000 | POP = 30, Gen = 50,000, LP = 1000 |
| DE($s_1$) | POP = 30, Gen = 1000, $CR$ = 0.5, $F$: a value arbitrarily selected from uniform (0, 2) | POP = 30, Gen = 50,000, $CR$ = 0.5, $F$: a value arbitrarily selected from uniform (0, 2) |
| DE($s_2$) | POP = 30, Gen = 1000, $CR$ = 0.5, $F$: a value arbitrarily selected from uniform (0, 2) | POP = 30, Gen = 50,000, $CR$ = 0.5, $F$: a value arbitrarily selected from uniform (0, 2) |
| PSO | POP = 30, Gen = 1000, $c_1$ = 0.4, $c_2$ = 0.6, $\omega$ = 0.4 | POP = 30, Gen = 50,000, $c_1$ = 0.4, $c_2$ = 0.6, $\omega$ = 0.4 |
Table 6. The results (average fitness value/average number of generations) obtained by applying SaNSDE(1,2), DE(1) and DE(2) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(1,2) | DE(1) | DE(2) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/8.8 | 32.998/16.6 | 32.998/16.7 |
| 2 | 5/11 | 63.615/16.6 | 63.615/32.2 | 50.8922/108.5 |
| 3 | 5/12 | 41.715/16.7 | 41.715/39.7 | 41.715/26 |
| 4 | 6/12 | 51.11/27.3333 | 51.11/43.8 | 50.9085/28.3 |
| 5 | 7/13 | 30.063/16.5 | 30.063/31.8 | 26.1379/48.5 |
| 6 | 8/14 | 73.328/37.7 | 72.328/48.3 | 68.6614/274 |
| 7 | 9/15 | 89.03/34.3 | 89.03/101.4 | 86.9898/82.9 |
| 8 | 10/16 | 54.02/31.1 | 54.02/61.3 | 53.7046/77.2 |
| 9 | 11/17 | 74.05/31.8 | 74.05/59.7 | 65.5666/103.3 |
| 10 | 12/18 | 50.9/94.4 | 50.0623/136.8 | 47.1623/121.1 |
| 11 | 20/20 | 112.906/80.4 | 112.906/436.5 | 104.6357/14,180.2 |
| 12 | 30/30 | 202.15/506 | 196.9089/5691.5 | 172.4707/9605.8 |
| 13 | 40/40 | 204.767/977.8888 | 190.1664/15,185.8 | 161.4642/7370.7 |
| 14 | 50/50 | 212.8168/9680.2 | 137.1625/17,131.7 | 135.1454/19,995.4 |
Table 7. The results (average fitness value/average number of generations) obtained by applying SaNSDE(1,3), DE(1) and DE(3) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(1,3) | DE(1) | DE(3) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/10 | 32.998/16.6 | 32.998/19.8 |
| 2 | 5/11 | 63.615/20.2 | 63.615/32.2 | 63.615/36.9 |
| 3 | 5/12 | 41.715/22.7 | 41.715/39.7 | 41.715/47.6 |
| 4 | 6/12 | 51.11/21 | 51.11/43.8 | 51.11/50.3 |
| 5 | 7/13 | 30.063/17.5 | 30.063/31.8 | 30.063/44.1 |
| 6 | 8/14 | 73.328/27.4 | 72.328/48.3 | 72.328/67.8 |
| 7 | 9/15 | 89.03/58.6 | 89.03/101.4 | 89.03/135 |
| 8 | 10/16 | 54.02/41.2 | 54.02/61.3 | 54.02/78.6 |
| 9 | 11/17 | 74.05/41.8 | 74.05/59.7 | 74.05/65 |
| 10 | 12/18 | 50.9/98.3 | 50.0623/136.8 | 50.9/146.3 |
| 11 | 20/20 | 112.906/180 | 112.906/436.5 | 112.906/817.1 |
| 12 | 30/30 | 202.15/1312.5 | 196.9089/5691.5 | 200.1078/16,065.1 |
| 13 | 40/40 | 204.5972/25,142.5555 | 190.1664/15,185.8 | 179.4224/27,574.4 |
| 14 | 50/50 | 199.3987/19,383.1 | 137.1625/17,131.7 | 171.1756/22,745.3 |
Table 8. The results (average fitness value/average number of generations) obtained by applying SaNSDE(1,4), DE(1) and DE(4) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(1,4) | DE(1) | DE(4) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/6.9 | 32.998/16.6 | 32.4747/11.9 |
| 2 | 5/11 | 63.615/13.8 | 63.615/32.2 | 63.615/91.3 |
| 3 | 5/12 | 41.715/16.3 | 41.715/39.7 | 41.715/33.9 |
| 4 | 6/12 | 51.11/12.4444 | 51.11/43.8 | 51.11/34.3 |
| 5 | 7/13 | 30.063/11.7 | 30.063/31.8 | 30.063/32.7 |
| 6 | 8/14 | 73.328/22.1 | 72.328/48.3 | 72.328/40.3 |
| 7 | 9/15 | 89.03/48.1 | 89.03/101.4 | 85.323/144.8 |
| 8 | 10/16 | 54.02/29.5 | 54.02/61.3 | 53.2235/101.1 |
| 9 | 11/17 | 74.05/31.7 | 74.05/59.7 | 73.1033/80.1 |
| 10 | 12/18 | 50.9/159 | 50.0623/136.8 | 40.9119/229.7 |
| 11 | 20/20 | 112.906/65.5 | 112.906/436.5 | 112.1708/7149.9 |
| 12 | 30/30 | 202.15/673.8 | 196.9089/5691.5 | 188.1118/7445.2 |
| 13 | 40/40 | 204.767/14,112.8888 | 190.1664/15,185.8 | 194.7417/9897 |
| 14 | 50/50 | 205.7637/20,417.9 | 137.1625/17,131.7 | 161.954/14,146.2 |
Table 9. The results (average fitness value/average number of generations) obtained by applying SaNSDE(2,3), DE(2) and DE(3) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(2,3) | DE(2) | DE(3) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/6.8 | 32.998/16.7 | 32.998/19.8 |
| 2 | 5/11 | 63.615/17.3 | 50.8922/108.5 | 63.615/36.9 |
| 3 | 5/12 | 41.715/19.5 | 41.715/26 | 41.715/47.6 |
| 4 | 6/12 | 51.11/18.2222 | 50.9085/28.3 | 51.11/50.3 |
| 5 | 7/13 | 30.063/14.4 | 26.1379/48.5 | 30.063/44.1 |
| 6 | 8/14 | 73.328/17.1 | 68.6614/274 | 72.328/67.8 |
| 7 | 9/15 | 89.03/33 | 86.9898/82.9 | 89.03/135 |
| 8 | 10/16 | 54.02/35 | 53.7046/77.2 | 54.02/78.6 |
| 9 | 11/17 | 74.05/39.8 | 65.5666/103.3 | 74.05/65 |
| 10 | 12/18 | 50.9/143.4 | 47.1623/121.1 | 50.9/146.3 |
| 11 | 20/20 | 112.906/103.8 | 104.6357/14,180.2 | 112.906/817.1 |
| 12 | 30/30 | 202.15/1618.1 | 172.4707/9605.8 | 200.1078/16,065.1 |
| 13 | 40/40 | 202.1562/26,276 | 161.4642/7370.7 | 179.4224/27,574.4 |
| 14 | 50/50 | 195.4592/19,867.1 | 135.1454/19,995.4 | 171.1756/22,745.3 |
Table 10. The results (average fitness value/average number of generations) obtained by applying SaNSDE(2,4), DE(2) and DE(4) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(2,4) | DE(2) | DE(4) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/12.5 | 32.998/16.7 | 32.4747/11.9 |
| 2 | 5/11 | 63.615/36.8 | 50.8922/108.5 | 63.615/91.3 |
| 3 | 5/12 | 41.715/37.9 | 41.715/26 | 41.715/33.9 |
| 4 | 6/12 | 51.11/17.4444 | 50.9085/28.3 | 51.11/34.3 |
| 5 | 7/13 | 30.063/24.6 | 26.1379/48.5 | 30.063/32.7 |
| 6 | 8/14 | 73.328/29.9 | 68.6614/274 | 72.328/40.3 |
| 7 | 9/15 | 89.03/35.7 | 86.9898/82.9 | 85.323/144.8 |
| 8 | 10/16 | 54.02/40.3 | 53.7046/77.2 | 53.2235/101.1 |
| 9 | 11/17 | 74.05/34.8 | 65.5666/103.3 | 73.1033/80.1 |
| 10 | 12/18 | 50.9/70 | 47.1623/121.1 | 40.9119/229.7 |
| 11 | 20/20 | 112.906/105.9 | 104.6357/14,180.2 | 112.1708/7149.9 |
| 12 | 30/30 | 202.15/1702.5 | 172.4707/9605.8 | 188.1118/7445.2 |
| 13 | 40/40 | 203.7134/3753.8888 | 161.4642/7370.7 | 194.7417/9897 |
| 14 | 50/50 | 195.305/13,134.6 | 135.1454/19,995.4 | 161.954/14,146.2 |
Table 11. The results (average fitness value/average number of generations) obtained by applying SaNSDE(3,4), DE(3) and DE(4) to the test cases.

| Test Case | Participants (D/P) | SaNSDE(3,4) | DE(3) | DE(4) |
|---|---|---|---|---|
| 1 | 4/10 | 32.998/9.2 | 32.998/19.8 | 32.4747/11.9 |
| 2 | 5/11 | 63.615/14 | 63.615/36.9 | 63.615/91.3 |
| 3 | 5/12 | 41.715/18.1 | 41.715/47.6 | 41.715/33.9 |
| 4 | 6/12 | 51.11/14.7777 | 51.11/50.3 | 51.11/34.3 |
| 5 | 7/13 | 30.063/18 | 30.063/44.1 | 30.063/32.7 |
| 6 | 8/14 | 73.328/19.3 | 72.328/67.8 | 72.328/40.3 |
| 7 | 9/15 | 89.03/62.6 | 89.03/135 | 85.323/144.8 |
| 8 | 10/16 | 54.02/25.5 | 54.02/78.6 | 53.2235/101.1 |
| 9 | 11/17 | 74.05/45.3 | 74.05/65 | 73.1033/80.1 |
| 10 | 12/18 | 50.9/89.1 | 50.9/146.3 | 40.9119/229.7 |
| 11 | 20/20 | 112.906/59.3 | 112.906/817.1 | 112.1708/7149.9 |
| 12 | 30/30 | 202.15/1395.6 | 200.1078/16,065.1 | 188.1118/7445.2 |
| 13 | 40/40 | 204.2402/5388.3333 | 179.4224/27,574.4 | 194.7417/9897 |
| 14 | 50/50 | 201.6048/7703.2 | 171.1756/22,745.3 | 161.954/14,146.2 |
Table 12. Average fitness values and average number of generations for SaNSDE(1,2), SaNSDE(1,3), SaNSDE(1,4), SaNSDE(2,3), SaNSDE(2,4), SaNSDE(3,4) and PSO.

| Test Case | Participants (D/P) | SaNSDE(1,2) | SaNSDE(1,3) | SaNSDE(1,4) | SaNSDE(2,3) | SaNSDE(2,4) | SaNSDE(3,4) | PSO |
|---|---|---|---|---|---|---|---|---|
| 1 | 4/10 | 32.998/8.8 | 32.998/10 | 32.998/6.9 | 32.998/6.8 | 32.998/12.5 | 32.998/9.2 | 32.998/64.6 |
| 2 | 5/11 | 63.615/16.6 | 63.615/20.2 | 63.615/13.8 | 63.615/17.3 | 63.615/36.8 | 63.615/14 | 63.615/299.6 |
| 3 | 5/12 | 41.715/16.7 | 41.715/22.7 | 41.715/16.3 | 41.715/19.5 | 41.715/37.9 | 41.715/18.1 | 41.2892/394.5 |
| 4 | 6/12 | 51.11/27.3333 | 51.11/21 | 51.11/12.4444 | 51.11/18.2222 | 51.11/17.4444 | 51.11/14.7777 | 50.9085/320.9 |
| 5 | 7/13 | 30.063/16.5 | 30.063/17.5 | 30.063/11.7 | 30.063/14.4 | 30.063/24.6 | 30.063/18 | 28.4254/304.1 |
| 6 | 8/14 | 73.328/37.7 | 73.328/27.4 | 73.328/22.1 | 73.328/17.1 | 73.328/29.9 | 73.328/19.3 | 70.2629/375.6 |
| 7 | 9/15 | 89.03/34.3 | 89.03/58.6 | 89.03/48.1 | 89.03/33 | 89.03/35.7 | 89.03/62.6 | 80.810/553.6 |
| 8 | 10/16 | 54.02/31.1 | 54.02/41.2 | 54.02/29.5 | 54.02/35 | 54.02/40.3 | 54.02/25.5 | 44.0023/447.5 |
| 9 | 11/17 | 74.05/31.8 | 74.05/41.8 | 74.05/31.7 | 74.05/39.8 | 74.05/34.8 | 74.05/45.3 | 49.356/580.7 |
| 10 | 12/18 | 50.9/94.4 | 50.9/98.3 | 50.9/159 | 50.9/143.4 | 50.9/70 | 50.9/89.1 | 32.8349/489.3 |
| 11 | 20/20 | 112.906/80.4 | 112.906/180 | 112.906/65.5 | 112.906/103.8 | 112.906/105.9 | 112.906/59.3 | 97.7979/21,314.5 |
| 12 | 30/30 | 202.15/506 | 202.15/1312.5 | 202.15/673.8 | 202.15/1618.1 | 202.15/1702.5 | 202.15/1395.6 | 141.6005/21,742.3 |
| 13 | 40/40 | 204.767/977.8 | 204.5972/25,142.55 | 204.767/14,112.88 | 202.1562/26,276 | 203.7134/3753.8 | 204.2402/5388.33 | −1.5081/22,734.2 |
| 14 | 50/50 | 212.8168/9680.2 | 199.3987/19,383.1 | 205.7637/20,417.9 | 195.4592/19,867.1 | 195.305/13,134.6 | 201.6048/7703.2 | −3.9598/25,822.2 |
Table 13. Average fitness value, standard deviation and normalized standard deviation of the average fitness values of all SaNSDE($s_1$,$s_2$), where $s_1 \in S$, $s_2 \in S$, $s_1 \neq s_2$, and of all DE($s$), where $s \in S$.

| Test Case | Participants (D/P) | Avg. Fitness (SaNSDE) | Std. Dev. (SaNSDE) | Normalized Std. Dev. (SaNSDE) | Avg. Fitness (DE) | Std. Dev. (DE) | Normalized Std. Dev. (DE) |
|---|---|---|---|---|---|---|---|
| 1 | 4/10 | 32.998 | 0 | 0 | 32.86718 | 0.26165 | 0.007961 |
| 2 | 5/11 | 63.615 | 0 | 0 | 60.4343 | 6.3614 | 0.105261 |
| 3 | 5/12 | 41.715 | 0 | 0 | 41.715 | 0 | 0 |
| 4 | 6/12 | 51.11 | 0 | 0 | 51.05963 | 0.10075 | 0.001973 |
| 5 | 7/13 | 30.063 | 0 | 0 | 29.08173 | 1.96255 | 0.067484 |
| 6 | 8/14 | 73.328 | 0 | 0 | 71.41135 | 1.8333 | 0.025672 |
| 7 | 9/15 | 89.03 | 0 | 0 | 87.5932 | 1.793199 | 0.020472 |
| 8 | 10/16 | 54.02 | 0 | 0 | 53.74203 | 0.376302 | 0.007002 |
| 9 | 11/17 | 74.05 | 0 | 0 | 71.69248 | 4.108228 | 0.057303 |
| 10 | 12/18 | 50.9 | 0 | 0 | 47.25913 | 4.524393 | 0.095736 |
| 11 | 20/20 | 112.906 | 0 | 0 | 110.4281 | 3.88157 | 0.03515 |
| 12 | 30/30 | 202.15 | 0 | 0 | 189.3998 | 12.37337 | 0.065329 |
| 13 | 40/40 | 204.0402 | 1.006278 | 0.004932 | 181.4487 | 14.7895 | 0.081508 |
| 14 | 50/50 | 201.7247 | 6.711809 | 0.033272 | 151.3594 | 17.97566 | 0.118761 |
Table 14. Comparison of the normalized standard deviation of all SaNSDE($s_1$,$s_2$), where $s_1 \in S$, $s_2 \in S$, $s_1 \neq s_2$, and PSO.

| Test Case | Participants (D/P) | SaNSDE(1,2) | SaNSDE(1,3) | SaNSDE(1,4) | SaNSDE(2,3) | SaNSDE(2,4) | SaNSDE(3,4) | PSO |
|---|---|---|---|---|---|---|---|---|
| 1 | 4/10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 5/11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 5/12 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032609012 |
| 4 | 6/12 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01251461 |
| 5 | 7/13 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121451237 |
| 6 | 8/14 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046748711 |
| 7 | 9/15 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08085633 |
| 8 | 10/16 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156139565 |
| 9 | 11/17 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196896021 |
| 10 | 12/18 | 0 | 0 | 0 | 0 | 0 | 0 | 0.323686078 |
| 11 | 20/20 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039230904 |
| 12 | 30/30 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128791918 |
| 13 | 40/40 | 0 | 0 | 0 | 0 | 0.00553 | 0.003875 | 0.4111 |
| 14 | 50/50 | 0.02716 | 0.03413 | 0.0324 | 0.0394 | 0.04558 | 0.04704 | 0.146 |