Article

A Self-Adaptive Meta-Heuristic Algorithm Based on Success Rate and Differential Evolution for Improving the Performance of Ridesharing Systems with a Discount Guarantee

Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Algorithms 2024, 17(1), 9; https://doi.org/10.3390/a17010009
Submission received: 1 December 2023 / Revised: 22 December 2023 / Accepted: 22 December 2023 / Published: 25 December 2023
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)

Abstract

One of the most significant financial benefits of a shared mobility mode such as ridesharing is cost savings. For this reason, many studies focus on maximizing cost savings in shared mobility systems. Cost savings provide an incentive for riders to adopt ridesharing. However, if cost savings are not properly allocated to riders, or if the financial benefit of cost savings is not sufficient to attract riders, riders will not accept a ridesharing mode even if the overall cost savings are significant. In a recent study, the concept of discount-guaranteed ridesharing was proposed to provide an incentive for riders to accept ridesharing services by ensuring a minimal discount for drivers and passengers. In this study, an algorithm is proposed to improve the performance of discount-guaranteed ridesharing systems. Our approach combines a success rate-based self-adaptation scheme with an evolutionary computation approach. We propose a new self-adaptive metaheuristic algorithm based on success rate and differential evolution for the Discount-Guaranteed Ridesharing Problem (DGRP). We illustrate the effectiveness of the proposed algorithm by comparing its results with those of other competitive algorithms developed for this problem. Preliminary results indicate that the proposed algorithm outperforms the competing algorithms in terms of both solution quality and convergence rate. The results of this study are consistent with the empirical experience that two people working together are more likely to reach a correct decision than either would working alone.

1. Introduction

Shared mobility is a paradigm of transport modes that enables reductions in the number of vehicles, traffic congestion, energy consumption and greenhouse gas emissions in cities. Due to these potential benefits, different sharing models have emerged in the past years, including ridesharing, car sharing and bike sharing. As all of these transport modes contribute to sustainability, the associated issues and problems have attracted the attention of researchers and practitioners in academia and industry. In particular, ridesharing has been implemented on university campuses [1], by companies [2] and by transport service providers such as Uber [3], Lyft [4] and BlaBlaCar [5].
In the literature, early studies of ridesharing focused on meeting the transport requirements of drivers and passengers. Much of this early work can be found in [6,7]. The goal was to optimize total cost savings or total travel distance by matching drivers and passengers based on their itineraries, either by building a simulation environment to simulate the application scenarios or by formulating an optimization model to solve the ridesharing problem. Due to the wide variety of ridesharing problems, different models have been proposed and studied over the years, and optimization methods have been applied to formulate them. The challenges and opportunities of solving ridesharing problems with optimization models are discussed in [8,9]. A review of variants of shared mobility, problems and solution approaches is available in [10].
In addition to the issues of optimizing total cost savings or total travel distance [11], there are recent works on other issues in ridesharing systems. For example, to promote ridesharing, the optimization of monetary issues and non-monetary issues in ridesharing systems has been studied. As mentioned in [12,13,14], dealing with these issues often requires the modeling of more complex constraints that are highly nonlinear. These constraints may lead to a more complex solution space and make it difficult to find a solution for the problem. Therefore, these monetary issues and non-monetary issues in ridesharing systems pose new challenges in the development of effective methods to solve relevant ridesharing problems.
One prominent financial benefit of a shared mobility mode such as ridesharing is cost savings. For this reason, many studies focus on maximizing cost savings in shared mobility systems. Cost savings provide an incentive for riders to adopt ridesharing. However, if cost savings are not properly allocated to riders, or if the financial benefit of cost savings is not sufficient to attract riders to use a ridesharing mode, riders will not accept ridesharing even if the overall cost savings are significant [15,16]. In a recent study [13], the concept of discount-guaranteed ridesharing was proposed to provide an incentive for riders to accept ridesharing services by ensuring a minimal discount for drivers and passengers. In [13], several algorithms were applied to solve the Discount-Guaranteed Ridesharing Problem (DGRP). With the advances in computing technology, it is possible to develop a more effective algorithm for this problem. In this study, we propose an algorithm to improve the performance of discount-guaranteed ridesharing systems and the convergence rate of finding a solution for the DGRP. Neighborhood search has been recognized as an effective mechanism for improving solutions in an evolutionary computation approach. The concept of self-adaptation has been widely used in meta-heuristic algorithms to identify better search strategies through learning and to apply them in the solution-finding process to improve the convergence rate. In this paper, a success rate-based self-adaptation mechanism and neighborhood search are used jointly to develop an effective algorithm for the DGRP.
One of the challenges in solving the DGRP arises from the large number of constraints and discrete decision variables. To tackle the constraints effectively, we adopt a method that discriminates feasible regions from infeasible regions in the solution space [17] by designing a proper fitness function. To deal with the discrete decision variables, we use a transformation approach that maps the real values of decision variables to discrete values. The contributions of this paper include the development of a new self-adaptive neighborhood search algorithm for solving the DGRP and the assessment of its effectiveness by comparing it with existing methods.
The rest of this paper is organized as follows. In Section 2, we will provide a literature review of ridesharing problems and relevant solution methods. We will present the problem formulation and the model of the DGRP in Section 3. In Section 4, the details about the development of a solution algorithm based on self-adaptation and neighborhood search will be presented. In Section 5, the results obtained by applying the proposed algorithm and other competitive algorithms will be presented. We will discuss the results of experiments and conclude this study in Section 6.

2. Literature Review

In this section, we briefly review existing studies relevant to this paper. As we concentrate on the development of an effective algorithm for the DGRP based on success rate self-adaptation and neighborhood search, the papers reviewed fall into two categories: papers on ridesharing and papers on self-adaptation and neighborhood search in evolutionary computation. We introduce the ridesharing literature first and the self-adaptation and neighborhood search literature next.
One of the sustainable development goals is to promote emerging paradigms that mitigate global warming by reducing energy consumption, greenhouse gas emissions and negative impacts on the environment. In line with this global trend, several transport modes such as ridesharing, car-sharing and bike-sharing have appeared in the transportation sector in the past two decades under the sharing economy. As one of the most important transport modes for shared mobility, ridesharing makes it possible for passengers and drivers with similar itineraries to share rides and enjoy cost savings. For a comprehensive survey of the ridesharing literature, please refer to [6,7,18].
Although ridesharing is one of the transport modes with the most potential to realize the sharing economy, there are still barriers and challenges to its acceptance by the general public. For example, a lack of trust in ridesharing is one factor that hinders users' acceptance [14]. Several studies have examined the barriers to the acceptance of ridesharing. Acceptance is influenced by both monetary and non-monetary factors. Monetary factors are directly related to the financial benefits of cost savings [12]. Non-monetary factors relate to the safety and comfort of ridesharing, such as trust, enjoyment and social awareness. For example, the trust issue in ridesharing has been studied in [14]. In particular, providing a monetary incentive is essential for the acceptance of ridesharing. In this study, we propose a scheme to provide a monetary incentive for ridesharing participants.
As cost savings are recognized as one of the most prominent benefits of ridesharing, the objective considered in most studies is to maximize overall cost savings or minimize overall travel costs while meeting the transportation requirements of riders and drivers [11]. However, individual ridesharing participants may not enjoy the benefits of cost savings even when the overall cost savings have been maximized or the overall travel costs minimized. For individual participants to benefit, the overall cost savings must be allocated to them properly such that the benefit of cost savings is sufficient for them to accept ridesharing.
In [12], a problem formulation was proposed to maximize the overall rewarding ratio. However, a minimal rewarding rate cannot be guaranteed even when the overall cost savings are maximized [13]. In [13], a problem formulation and associated solution methods were proposed to guarantee the rewarding rate; however, the scalability of the algorithms was not studied. In this study, we propose a new algorithm for the DGRP formulated in [13] to improve the performance and convergence rate while guaranteeing satisfaction of the rewarding rate for ridesharing participants.
As the DGRP is a typical integer programming problem in which the decision variables are binary, its complexity grows exponentially with problem size. Exact optimization approaches are computationally feasible only for small instances due to the exponential growth of the solution space. Therefore, approximate optimization approaches are adopted to solve the decision problem. In the past decades, many evolutionary algorithms have been proposed to find solutions for complex optimization problems, including the Genetic Algorithm [19], the Particle Swarm Optimization algorithm [20], the Firefly algorithm [21] and metaheuristic algorithms such as the Differential Evolution algorithm [22]. A wide variety of variants of the Genetic Algorithm, Particle Swarm Optimization algorithm, Firefly algorithm and Differential Evolution algorithm can be found in [23,24,25,26], respectively. Although these approaches may be applied to find solutions for optimization problems, their performances vary. The studies of [27,28] show several advantages of the PSO approach over the Genetic Algorithm, and the study of [29] indicates that the Differential Evolution approach performs better than the Genetic Algorithm. Evolutionary computation approaches such as PSO and DE are well-known metaheuristic algorithms. A metaheuristic algorithm refers to a higher-level procedure that generates or selects a heuristic to find a good solution to an optimization problem. In [30,31,32,33,34,35,36], several adaptive Differential Evolution algorithms have been proposed to solve optimization problems.
The goal of this study is to propose a more effective algorithm for solving the DGRP. We combine a Differential Evolution approach with a success rate self-adaptation mechanism to develop a solution algorithm for the DGRP. The characteristics of the DGRP differ from the problems addressed in [30,31,32,33,34,35,36]: the decision variables of the DGRP are discrete, whereas the decision variables of the problems studied in [30,31,32,33,34,35,36] are continuous real values. In this paper, the self-adaptation mechanism of [37] and the neighborhood search concept of [38] are applied jointly to develop an effective problem solver. Note that both were originally proposed for a continuous solution space. This study verifies the effectiveness of combining the self-adaptation mechanism and the neighborhood search mechanism for problems with a large number of constraints and discrete decision variables.
The problem addressed in this paper is the DGRP, which was formulated in [13]. This paper differs from the previous work [13] in that the proposed success rate-based self-adaptive metaheuristic algorithm is distinct from the ten algorithms proposed there. The contribution of this paper is a novel self-adaptive algorithm that improves the performance and convergence rate of discount-guaranteed ridesharing systems. We verified the effectiveness of the self-adaptive algorithm through experiments; the results indicate that the proposed method improves both solution quality and the convergence rate of finding the solution. Although the algorithm proposed in this paper is designed for the DGRP, it can be applied to other optimization problems. For example, the work reported in [14] applied a similar approach to another instance of a trust-based ridesharing problem.

3. The Formulation of the DGRP

In this section, we will present the formulation of the DGRP based on a combinatorial double auction mechanism [39]. The variables, parameters and symbols used in this paper are listed in Table 1. We first briefly introduce the combinatorial double auction model and then formulate the DGRP based on the combinatorial double auction model.

3.1. An Auction Model for Ridesharing Systems

The functions and operations of a ridesharing system are similar to those of a traditional marketplace in which buyers and sellers trade goods. In a traditional marketplace, buyers purchase goods according to their needs and sellers offer goods based on the items available in stock. In a ridesharing system, individual passengers with transportation requirements are on the demand side, while individual drivers, who have their own transportation requirements and constraints, are on the supply side. The roles of passengers and drivers in a ridesharing system are thus similar to those of buyers and sellers in a traditional marketplace. Therefore, a ridesharing system can be modeled as a virtual "marketplace" in which potential passengers and drivers seek opportunities for ridesharing. Auctions are a suitable business model for trading goods in a marketplace where the price of goods is not fixed but determined by buyers and sellers; they can likewise be applied to determine the passengers and drivers for ridesharing in ridesharing systems.
In the literature, a variety of auction models have been proposed and applied in different application scenarios. Depending on the number of buyers and sellers in an auction, auctions can be classified into two categories: single-side auctions and double auctions. There are two types of single-side auctions: (1) single seller and multiple buyers and (2) single buyer and multiple sellers. In a double auction, there are multiple buyers and multiple sellers. If there are multiple types of goods for trading in a double auction, buyers and sellers can purchase or sell a combination of goods in the auction. This type of double auction is called a combinatorial double auction.
For an auction scenario in which multiple buyers and multiple sellers trade multiple types of goods, one can apply either multiple single-side auctions or one combinatorial double auction; the combinatorial double auction is the more efficient choice. Therefore, we apply the combinatorial double auction model to determine the passengers and drivers for ridesharing. There are three types of roles in a typical combinatorial double auction for trading goods: buyers, sellers and the auctioneer. In a ridesharing system modeled as a combinatorial double auction, the three roles are passengers, drivers and the ridesharing information provider. The ridesharing information provider acts as the auctioneer and provides a ridesharing system to process the requests from passengers and drivers.

3.2. A Formulation of the DGRP Based on Combinatorial Double Auctions

A passenger expresses his/her transportation requirements by sending a request to the ridesharing system provided by the ridesharing information provider. A driver likewise expresses his/her transportation requirements and constraints by sending a request to the ridesharing system. The ridesharing system must determine the passengers and drivers for ridesharing. In a combinatorial double auction model, buyers and sellers who place the winning bids are called winners; in a ridesharing system, each passenger and each driver on a shared ride determined by the system is called a winner.
The request submitted by a passenger takes the form $R_p = (L_{o_p}, L_{e_p}, \omega_p^e, \omega_p^l, n_p)$, which includes the passenger $p$'s start location $L_{o_p}$, end location $L_{e_p}$, earliest departure time $\omega_p^e$, latest arrival time $\omega_p^l$ and number of requested seats $n_p$. The request submitted by a driver takes the form $R_d = (L_{o_d}, L_{e_d}, \omega_d^e, \omega_d^l, a_d, \bar{\tau}_d, \Gamma_d)$, which includes the driver's start location $L_{o_d}$, end location $L_{e_d}$, earliest departure time $\omega_d^e$, latest arrival time $\omega_d^l$, available seats $a_d$ and maximum detour ratio $\bar{\tau}_d$. The earliest departure time and the latest arrival time, specified by the ridesharing participant sending the request, are used in the decision models of most papers on ridesharing. The ridesharing system extracts the information from $R_p$ of a passenger to form a bid $PB_p = (s_{p1}^1, s_{p2}^1, s_{p3}^1, \ldots, s_{pP}^1, s_{p1}^2, s_{p2}^2, s_{p3}^2, \ldots, s_{pP}^2, f_p)$, where $s_{pk}^1$ is the number of seats requested at pick-up location $k$ of passenger $p$, $s_{pk}^2$ is the number of seats released at drop-off location $k$ of passenger $p$ and $f_p$ is passenger $p$'s original cost without ridesharing. The ridesharing system extracts the information from $R_d$ of a driver to form a bid $DB_{dj} = (q_{dj1}^1, q_{dj2}^1, q_{dj3}^1, \ldots, q_{djP}^1, q_{dj1}^2, q_{dj2}^2, q_{dj3}^2, \ldots, q_{djP}^2, o_{dj}, c_{dj})$, where $q_{djk}^1$ is the number of seats allocated at pick-up location $k$, $q_{djk}^2$ is the number of seats released at drop-off location $k$, $o_{dj}$ is the original cost of the driver when he/she travels alone and $c_{dj}$ is the travel cost of the bid.
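To make the structure of these requests and bids concrete, the following sketch shows one way they might be represented in code. It is purely illustrative: the class and field names are our own assumptions, not identifiers from the paper's implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PassengerRequest:          # R_p = (L_op, L_ep, w_p^e, w_p^l, n_p)
    origin: Tuple[float, float]  # start location L_op (e.g., lat/lon)
    destination: Tuple[float, float]  # end location L_ep
    earliest_departure: float    # w_p^e
    latest_arrival: float        # w_p^l
    seats: int                   # n_p, number of requested seats

@dataclass
class PassengerBid:              # PB_p, extracted from R_p by the system
    seats_requested: List[int]   # s_pk^1 for each pick-up location k
    seats_released: List[int]    # s_pk^2 for each drop-off location k
    solo_cost: float             # f_p, cost of travelling without ridesharing

@dataclass
class DriverBid:                 # DB_dj, one of driver d's J_d bids
    seats_allocated: List[int]   # q_djk^1 for each pick-up location k
    seats_released: List[int]    # q_djk^2 for each drop-off location k
    solo_cost: float             # o_dj, driver's cost when travelling alone
    bid_cost: float              # c_dj, travel cost of the bid
```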
The DGRP to be formulated takes into account several factors: the balance between demand and supply, the non-negativity of surplus, a maximum of one winning bid per driver, a minimal rewarding rate for drivers and a minimal rewarding rate for passengers, based on the bids $PB_p$, $p \in \{1, 2, 3, \ldots, P\}$, submitted by passengers and the bids $DB_{dj}$, $d \in \{1, 2, \ldots, D\}$, $j \in \{1, 2, \ldots, J_d\}$, submitted by drivers.
The surplus, or total cost savings, is $F(x, y) = \left( \sum_{p=1}^{P} y_p f_p \right) - \left( \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (c_{dj} - o_{dj}) \right)$. The objective function is described in (1). Constraints (2) and (3) describe the balance between demand and supply of seats in ridesharing vehicles. To benefit from ridesharing, the non-negativity of the surplus (cost savings), described by Constraint (4), must be satisfied. A driver may submit multiple bids, but at most one of them can be a winning bid; this is described by Constraint (5). To attract individual drivers to take part in ridesharing, Constraint (6) enforces the satisfaction of the minimal rewarding rate for drivers. To provide incentives for individual passengers to accept ridesharing, Constraint (7) enforces the satisfaction of the minimal rewarding rate for passengers. Constraint (8) requires all decision variables to be binary.
Based on the objective function (1) and Constraints (2) through (8), the DGRP is formulated as the following integer programming problem.
Problem Formulation of the DGRP
$\max_{x, y} F(x, y)$ (1)
$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} = y_p s_{pk}^{1} \quad \forall p \in \{1, 2, \ldots, P\},\ \forall k \in \{1, 2, \ldots, P\}$ (2)
$\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} = y_p s_{pk}^{2} \quad \forall p \in \{1, 2, \ldots, P\},\ \forall k \in \{1, 2, \ldots, P\}$ (3)
$\sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} \ge \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$ (4)
$\sum_{j=1}^{J_d} x_{dj} \le 1 \quad \forall d \in \{1, \ldots, D\}$ (5)
$x_{dj} \left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_D \right) \ge 0$ (6)
$y_p \left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_P \right) \ge 0$ (7)
$x_{dj} \in \{0, 1\}\ \forall d \in \{1, \ldots, D\},\ \forall j \in \{1, \ldots, J_d\}$ and $y_p \in \{0, 1\}\ \forall p \in \{1, 2, \ldots, P\}$ (8)
The determination of ridesharing decisions is not based solely on price and locations; the model also enforces the constraint that the minimal rewarding rates for drivers and passengers must be satisfied. As we focus on a comparison with [13], we use the same model as the one used in [14]. Factors other than price and locations that are not considered in this paper's model can be taken into account in the future.

4. A Self-Adaptive Meta-Heuristic Algorithm Based on Success Rate and Differential Evolution

The complexity of the DGRP is due to two characteristics: (1) discrete decision variables and (2) a large number of constraints. For these reasons, the development of an effective solution algorithm for the DGRP relies on a method to ensure that the values of the decision variables are discrete and a method to guide the candidate solutions in the population toward the feasible region of the solution space. For the former, we use a function that systematically maps the continuous values of decision variables to discrete values during the evolution process. For the latter, we use a fitness function that provides a direction for improving solution quality by reducing constraint violations during the solution-finding process. In this section, we first describe the conversion method and the fitness function, and then present the proposed algorithm.

4.1. The Conversion of Decision Variables and Fitness Function

We define a conversion function to ensure that the values of the decision variables are discrete. The function Convert2Binary, defined in (9) through (15), is used in our solution algorithm to map the continuous values of decision variables to discrete values during the evolution process. This makes it possible to adapt existing evolutionary algorithms, originally proposed for problems with a continuous solution space, to problems with a discrete solution space.
Function Convert2Binary
Input: $u$
Output: $\bar{u}$
Step 1: $v = \begin{cases} V_{\max} & \text{if } u > V_{\max} \\ u & \text{if } -V_{\max} \le u \le V_{\max} \\ -V_{\max} & \text{if } u < -V_{\max} \end{cases}$
Step 2: $s(v) = \dfrac{1}{1 + \exp(-v)}$
Step 3: Generate a random variable $r_{sid}$ with uniform distribution $U(0, 1)$; $\bar{u} = \begin{cases} 1 & \text{if } r_{sid} < s(v) \\ 0 & \text{otherwise} \end{cases}$
Step 4: return $\bar{u}$
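The following is a minimal Python sketch of the Convert2Binary mapping just described: clamp, squash through a sigmoid, then threshold against a uniform random draw. The clamping bound V_MAX is an assumed placeholder; the actual parameter values used in the experiments are those listed in Table 2.

```python
import math
import random

V_MAX = 4.0  # assumed clamping bound; see Table 2 for the values actually used

def convert_to_binary(u: float) -> int:
    """Sigmoid-based mapping of one continuous component to {0, 1}."""
    # Step 1: clamp u to [-V_MAX, V_MAX]
    v = max(-V_MAX, min(V_MAX, u))
    # Step 2: squash through the sigmoid s(v) = 1 / (1 + exp(-v))
    s = 1.0 / (1.0 + math.exp(-v))
    # Step 3: sample r ~ U(0, 1) and threshold
    return 1 if random.random() < s else 0
```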
To provide a direction for an evolutionary algorithm to improve solution quality by reducing constraint violations during the solution-finding process, we define the set of feasible solutions in the current population as $S_f$ and use $S_f^{\min} = \min_{(x, y) \in S_f} F(x, y)$ to denote the objective function value of the worst feasible solution in $S_f$. We introduce the following fitness function.
The fitness function F 1 ( x , y ) for the penalty method is defined in (16):
$F_1(x, y) = \begin{cases} F(x, y) & \text{if } (x, y) \text{ is feasible} \\ U(x, y) & \text{otherwise,} \end{cases}$ (16)
where U ( x , y ) is defined in (17).
$U(x, y) = S_f^{\min} - \left( \sum_{p=1}^{P} \sum_{k=1}^{K} \left( \left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} - y_p s_{pk}^{1} \right| + \left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} - y_p s_{pk}^{2} \right| \right) \right) + \min\left( \sum_{p=1}^{P} y_p f_p - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (c_{dj} - o_{dj}),\ 0 \right) + \sum_{d=1}^{D} \sum_{j=1}^{J_d} \min\left( 1 - \sum_{j=1}^{J_d} x_{dj},\ 0 \right) + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} \min\left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_D,\ 0 \right) + \sum_{p=1}^{P} y_p \min\left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_P,\ 0 \right)$ (17)
In (17), the penalty function $U(x, y)$ penalizes constraint violations. The terms $\left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{1} - y_p s_{pk}^{1} \right|$ and $\left| \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} q_{djk}^{2} - y_p s_{pk}^{2} \right|$ are the penalties for violating Constraints (2) and (3), respectively. The term $\min\left( \sum_{p=1}^{P} y_p f_p - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} (c_{dj} - o_{dj}),\ 0 \right)$ is the penalty for violating Constraint (4). The term $\sum_{d=1}^{D} \sum_{j=1}^{J_d} \min\left( 1 - \sum_{j=1}^{J_d} x_{dj},\ 0 \right)$ is the penalty for violating Constraint (5). The terms $\sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} \min\left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_D,\ 0 \right)$ and $\sum_{p=1}^{P} y_p \min\left( \dfrac{F_{dj}(x, y)}{\sum_{p \in \Gamma_{dj}} y_p\, cf_p^{dj} + x_{dj} c_{dj}} - r_P,\ 0 \right)$ are the penalties for violating Constraints (6) and (7), respectively.
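To show the structure of this penalty-based fitness, the following schematic Python sketch captures the shape of (16) and (17) under a simplifying assumption: the violation magnitudes of Constraints (2) through (7) are assumed to have been computed elsewhere and are aggregated into a single list, rather than spelling out each term of (17).

```python
def fitness_f1(objective_value: float,
               violations: list,
               s_f_min: float) -> float:
    """Schematic penalty fitness F_1 in the spirit of (16)-(17).

    objective_value: F(x, y) for the candidate solution.
    violations: non-negative magnitudes of the violations of
        Constraints (2)-(7); all zeros when the candidate is feasible.
    s_f_min: objective value of the worst feasible solution in the
        current population (S_f^min).
    """
    if all(v == 0 for v in violations):
        return objective_value  # feasible: use the objective directly
    # Infeasible: rank strictly below the worst feasible solution, with
    # smaller total violation scoring higher, so the search is pulled
    # toward the feasible region.
    return s_f_min - sum(violations)
```

Under this scheme, a candidate violating any constraint can never outrank a feasible one, which reproduces the feasibility-first ranking of [17].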

4.2. The Proposed Success Rate-Based Self-Adaptive Metaheuristic Algorithm

Based on the conversion function and the fitness function defined above, we now introduce the proposed algorithm. Instead of using a single mutation strategy, we use two different mutation strategies and adopt a self-adaptation mechanism to select the better strategy for improving performance. The two mutation strategies are DE-1 and DE-6, two well-known strategies. The self-adaptive metaheuristic algorithm is therefore referred to as SaNSDE(DE1, DE6), or SaNSDE-1-6 for short. The self-adaptation mechanism used by SaNSDE-1-6 keeps track of the number of times each mutation strategy successfully improves performance and calculates each strategy's success rate. A strategy selection index for a mutation strategy is calculated by dividing its success rate by the sum of the success rates of all mutation strategies; this index is used to select the mutation strategy applied in the solution-finding process.
Let $N$ be the problem dimension. To describe a mutation strategy, we use $z_{gb}^n$ to denote the value of the $n$-th dimension of the best individual in the population of the $g$-th generation. We use $z_{gr_1}^n$, $z_{gr_2}^n$, $z_{gr_3}^n$ and $z_{gr_4}^n$ to denote four individuals randomly selected from the current population. We use the two strategies defined in (18) and (19) to design the proposed success rate-based self-adaptive metaheuristic algorithm. The $n$-th dimension of the mutant vector $v_{(g+1)i}^n$ of the $i$-th individual in the population of the $(g+1)$-th generation is calculated either by (18) or by (19), depending on the success rates of the two strategies. The flow chart of the success rate-based self-adaptive metaheuristic algorithm is shown in Figure 1.
$v_{(g+1)i}^n = z_{gr_1}^n + F_i \left( z_{gr_2}^n - z_{gr_3}^n \right)$ (18)
$v_{(g+1)i}^n = z_{gi}^n + F_i \left( z_{gb}^n - z_{gi}^n \right) + F_i \left( z_{gr_1}^n - z_{gr_2}^n \right) + F_i \left( z_{gr_3}^n - z_{gr_4}^n \right)$ (19)
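A minimal Python sketch of the two mutation operators (18) and (19) follows. The population is assumed to be a list of continuous vectors, and the function names are illustrative; the random indices are drawn distinct from the target index $i$.

```python
import random

def mutate_de1(pop, i, F_i):
    """DE-1 mutation, Equation (18): v = z_r1 + F_i * (z_r2 - z_r3)."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][n] + F_i * (pop[r2][n] - pop[r3][n])
            for n in range(len(pop[i]))]

def mutate_de6(pop, best, i, F_i):
    """DE-6 (neighborhood-search) mutation, Equation (19):
    v = z_i + F_i*(z_best - z_i) + F_i*(z_r1 - z_r2) + F_i*(z_r3 - z_r4)."""
    r1, r2, r3, r4 = random.sample([k for k in range(len(pop)) if k != i], 4)
    return [pop[i][n]
            + F_i * (best[n] - pop[i][n])
            + F_i * (pop[r1][n] - pop[r2][n])
            + F_i * (pop[r3][n] - pop[r4][n])
            for n in range(len(pop[i]))]
```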
As we use two mutation strategies, a mutation strategy is referred to by its index $s$, where $s \in \{1, 2\}$. In the proposed algorithm, the number of times that mutation strategy $s$ successfully improves performance is stored in the variable $S_s$, and the number of times it fails to improve performance is stored in the variable $U_s$. The success rate of strategy $s$ is $w_s = \frac{S_s}{S_s + U_s}$, where $s \in \{1, 2\}$. The parameter $f_p$, used to select both the probability distribution that generates the scale factor and the mutation strategy, is calculated as $f_p = \frac{w_1}{w_1 + w_2}$. A list $L$ stores each crossover probability $cr_i$ that successfully improves performance, via the statement $L \leftarrow L \cup \{cr_i\}$. The list $L$ is used to update the parameter $cr$ by $cr = \frac{\sum_{k=1}^{|L|} L(k)}{|L|}$, which is then used to generate the crossover probabilities $cr_i$ in the next generation.
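This bookkeeping can be captured compactly. The sketch below is our own illustrative structuring, not the paper's code: it records success and failure counts per strategy (indexed 0 and 1 for strategies 1 and 2) and recomputes $f_p$ and $cr$; the division guards are assumptions added to keep the sketch robust before any counts exist.

```python
class SuccessRateAdapter:
    """Tracks per-strategy success/failure counts and derives f_p and cr."""

    def __init__(self):
        self.S = [0, 0]    # success counts S_1, S_2
        self.U = [0, 0]    # failure counts U_1, U_2
        self.cr_list = []  # crossover rates cr_i that led to an improvement
        self.f_p = 0.5     # probability of picking strategy 1
        self.cr = 0.5      # mean of the Gaussian used to draw cr_i

    def record(self, strategy: int, improved: bool, cr_i: float):
        # One observation per trial vector: which strategy was used and
        # whether it improved the individual.
        if improved:
            self.S[strategy] += 1
            self.cr_list.append(cr_i)
        else:
            self.U[strategy] += 1

    def update(self):
        # w_s = S_s / (S_s + U_s), guarded against division by zero
        w = [self.S[s] / max(self.S[s] + self.U[s], 1) for s in (0, 1)]
        if w[0] + w[1] > 0:
            self.f_p = w[0] / (w[0] + w[1])  # f_p = w_1 / (w_1 + w_2)
        if self.cr_list:
            self.cr = sum(self.cr_list) / len(self.cr_list)
```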
The discrete self-adaptive metaheuristic algorithm based on success rate and differential evolution is listed in Algorithm 1.
Algorithm 1: Discrete Self-Adaptive Metaheuristic Algorithm based on Success Rate and Differential Evolution
Step 1: Initialize the parameters and population of individuals
Step 1-1: Initialize the parameters
     $cr = 0.5$
     $f_p = 0.5$
Step 1-2: Randomly generate a population with $NP$ individuals
Step 2: Evolve solutions
    For $g = 1$ to $G$
     For $i = 1$ to $NP$
Step 2-1: Generate a uniform random number $r$ from the uniform distribution $U(0, 1)$
      $F_i = \begin{cases} r_1, \text{ where } r_1 \text{ is a Gaussian random number with distribution } N(\mu, \sigma_1^2) & \text{if } r < f_p \\ r_2, \text{ where } r_2 \text{ is a uniform random number sampled from } U(0, 1) & \text{otherwise} \end{cases}$
Step 2-2: Generate a uniform random number $r$ from the uniform distribution $U(0, 1)$
     Calculate the mutant vector $v_{gi}$ as follows.
     For $n \leftarrow 1, 2, \ldots, N$
       $v_{(g+1)i}^n = \begin{cases} z_{gr_1}^n + F_i (z_{gr_2}^n - z_{gr_3}^n) & \text{if } r < f_p \\ z_{gi}^n + F_i (z_{gb}^n - z_{gi}^n) + F_i (z_{gr_1}^n - z_{gr_2}^n) + F_i (z_{gr_3}^n - z_{gr_4}^n) & \text{otherwise} \end{cases}$
       $s = \begin{cases} 1 & \text{if } r < f_p \\ 2 & \text{otherwise} \end{cases}$
     End For
Step 2-3: Generate a trial vector $u_{gi}$
     Generate a Gaussian random number $cr_i$ with distribution $N(cr, \sigma_2^2)$
     For $l \leftarrow 1, 2, \ldots, N$
      Generate a uniform random number $r$ from the uniform distribution $U(0, 1)$
       $u_{gi}^l = \begin{cases} v_{gi}^l & \text{if } r < cr_i \\ z_{gi}^l & \text{otherwise} \end{cases}$
       $\bar{u}_{gi}^l \leftarrow \mathrm{Convert2Binary}(u_{gi}^l)$
     End For
Step 2-4: Update the individual and success/failure counters
     If $F_1(\bar{u}_{gi}) \ge F_1(z_{gi})$
       $z_{(g+1)i} = u_{gi}$
       $L \leftarrow L \cup \{cr_i\}$
       $S_s = S_s + 1$
     Else
       $U_s = U_s + 1$
     End If
    End For
Step 2-5: Update the parameters as needed
     If $g > LP$
       $w_1 = S_1 / (S_1 + U_1)$
       $w_2 = S_2 / (S_2 + U_2)$
       $f_p = w_1 / (w_1 + w_2)$
       $cr = \left( \sum_{k=1}^{|L|} L(k) \right) / |L|$
     End If
    End For
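Putting the pieces together, the following sketch outlines one generation of Algorithm 1 using the helpers sketched earlier (mutate_de1, mutate_de6, convert_to_binary, SuccessRateAdapter). It is a schematic under stated assumptions: the fitness cache `fit`, the parameter defaults mu = 0.5, sigma1 = 0.3 and sigma2 = 0.1, and all function names are our own illustrations, and for simplicity the parameters are refreshed every generation, whereas Algorithm 1 refreshes them only after the learning period LP.

```python
import random

def evolve_one_generation(pop, fit, best, adapter, evaluate,
                          mu=0.5, sigma1=0.3, sigma2=0.1):
    """One generation of the SaNSDE-1-6 loop (Steps 2-1 through 2-5).

    pop: list of continuous vectors; fit: cached F_1 values of their
    binarized forms; best: current best vector; evaluate: maps a binary
    vector to its F_1 value.
    """
    for i in range(len(pop)):
        # Step 2-1: scale factor F_i, Gaussian or uniform depending on f_p
        r = random.random()
        F_i = random.gauss(mu, sigma1) if r < adapter.f_p else random.random()
        # Step 2-2: choose a mutation strategy with probability f_p
        r = random.random()
        if r < adapter.f_p:
            s, v = 0, mutate_de1(pop, i, F_i)
        else:
            s, v = 1, mutate_de6(pop, best, i, F_i)
        # Step 2-3: binomial crossover, then binarization for evaluation
        cr_i = random.gauss(adapter.cr, sigma2)
        u = [v[l] if random.random() < cr_i else pop[i][l]
             for l in range(len(v))]
        f_u = evaluate([convert_to_binary(x) for x in u])
        # Step 2-4: greedy selection plus success/failure bookkeeping
        improved = f_u >= fit[i]
        adapter.record(s, improved, cr_i)
        if improved:
            pop[i], fit[i] = u, f_u
    # Step 2-5: refresh f_p and cr from the recorded statistics
    # (Algorithm 1 does this only once g exceeds the learning period LP)
    adapter.update()
```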

5. Results

As the goal of this paper is to improve the quality of solutions for the DGRP and the convergence rate (the number of generations needed to find the best solutions), experimental verification is needed to demonstrate the advantage of the proposed algorithm. In this section, we analyze the results of experiments obtained by applying the algorithm developed in this paper. Our analysis focuses on two algorithmic properties: performance and convergence rate.
The evaluation process can be divided into five steps: select the performance metrics for comparing the algorithms, create instances of the DGRP, set the parameters of the algorithms, apply each algorithm to each instance and, finally, calculate the performance metrics from the experimental results and compare all algorithms. For the first step, the performance metrics include the average fitness function value, the average number of generations to find the best solutions and the average computation time to find the best solutions. For the second step, the locations of drivers and passengers are randomly generated within a selected geographical area of Taichung City, located in the central part of Taiwan; the numbers of drivers and passengers are increased gradually to generate DGRP instances of different sizes. For the third step, the parameters for PSO, NSDE, DE-1 and DE-3 are the same as those used in [13], and the parameters for SaNSDE-1-6 are specified later in this section. For the fourth step, we apply SaNSDE-1-6 ten times to each instance of the DGRP; as the results of applying PSO, NSDE, DE-1 and DE-3 to Case 1 through Case 10 are available in [13], we apply PSO, NSDE, DE-1 and DE-3 ten times to Case 11 through Case 14. For the fifth step, we first calculate the average fitness function values, the average number of generations and the average computation time based on the results obtained, and then compare all algorithms on these metrics.
In [13], ten algorithms were developed to solve the DGRP. The study of [13] indicates that NSDE, DE-1, DE-3 and PSO are the top four solvers among the ten in terms of performance and convergence rate (the number of generations to find the best solutions).
To illustrate the effectiveness of the proposed algorithm, the experiments include Test Cases 1–10 (available at [40]) used in [13] and Test Cases 11–14 (available at [41]) for comparison with the existing algorithms for the DGRP. To illustrate the scalability of the proposed algorithm with respect to problem size, we generated several test cases of increasing size and applied the algorithm proposed in this paper together with the best four algorithms reported in [13]. We then compared the results obtained by all of these algorithms to study their performance and convergence rate as the problems grow.
As the effectiveness of evolutionary algorithms depends on the population size, we conducted two series of experiments, with population sizes of 30 and 50, respectively. The values of the algorithmic parameters used by each algorithm are listed in Table 2. The number of generations is set to 1000 for Test Case 1 through Test Case 10 and to 50,000 for Test Case 11 through Test Case 14.
Experiments based on the parameters in Table 2 were performed for $NP$ = 30. The results are summarized in Table 3 and Table 4. Table 3 shows the average fitness function values and Table 4 shows the average number of iterations to find the best solutions.
The results in Table 3 show that the top four algorithms are SaNSDE-1-6, NSDE, DE-1 and DE-3. For the small test cases, Case 1 through Case 11, the fitness function values obtained using SaNSDE-1-6, NSDE, DE-1 and DE-3 are the same. However, as the problem size grows, the average fitness function values obtained using SaNSDE-1-6 become significantly better than those obtained using NSDE, DE-1 and DE-3. For Case 12, the average fitness function value obtained using SaNSDE-1-6 is better by about 1% to 2%. For Case 13, the differences are about 5% to 10%. For Case 14, the differences are about 10% to 28%. In short, SaNSDE-1-6 outperforms NSDE, DE-1 and DE-3 in terms of scalability. For a visual comparison, the bar chart in Figure 2 shows the average fitness function values for Case 1 through Case 14.
In terms of convergence rate (the number of generations to find the best solutions), the results in Table 4 indicate that the average number of iterations for SaNSDE-1-6 to find the best solutions is significantly smaller than those of NSDE, DE-1 and DE-3 for most test cases (with some exceptions). This indicates that SaNSDE-1-6 outperforms NSDE, DE-1 and DE-3 in terms of convergence rate. For a visual comparison, the bar chart in Figure 3 shows the average number of generations for Case 1 through Case 10 and the bar chart in Figure 4 shows the average number of generations for Case 11 through Case 14.
To verify the convergence rate for POP = 30, we show the results of several runs of Case 5, Case 11, Case 12, Case 13 and Case 14 in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, respectively.
The results presented above are based on a comparison of the average number of generations. For the comparison of computation time, the results in Table 5 indicate that the average computation time for SaNSDE-1-6 to find the best solutions is significantly less than that of PSO for Case 1 through Case 9 and greater than those of NSDE, DE-1 and DE-3 for Case 1 through Case 10. That is, SaNSDE-1-6 outperforms PSO in terms of computation time for Case 1 through Case 9, while NSDE, DE-1 and DE-3 outperform SaNSDE-1-6 for Case 1 through Case 10. For Case 11, SaNSDE-1-6 outperforms PSO, NSDE, DE-1 and DE-3 in terms of computation time. For the bigger cases, Case 12 through Case 14, PSO, NSDE, DE-1 and DE-3 outperform SaNSDE-1-6. As the experiments were run on the same platform as in [13], an older laptop delivered in 2019 with an Intel(R) Core(TM) i7 CPU (2.6 GHz base clock) and 16 GB of memory, the computation times of SaNSDE-1-6 are much longer for Case 12, Case 13 and Case 14. A more powerful computer or a server-class machine is required to apply the SaNSDE-1-6 algorithm to instances of this size.
Experiments based on the parameters in Table 2 were performed for $NP$ = 50. The results are summarized in Table 6 and Table 7. Table 6 shows the average fitness function values and Table 7 shows the average number of iterations to find the best solutions.
The results in Table 6 show that the top four algorithms are SaNSDE-1-6, NSDE, DE-1 and DE-3. For the small test cases, Case 1 through Case 11, the fitness function values obtained using SaNSDE-1-6, NSDE, DE-1 and DE-3 are the same. However, as the problem size grows, the average fitness function values obtained via SaNSDE-1-6 become significantly better than those obtained via NSDE, DE-1 and DE-3. For Case 12, the differences are about 0.1674% to 0.73959%. For Case 13, the differences are about 4.00326% to 7.9796%. For Case 14, the differences are about 3.1171% to 46.403%. In short, SaNSDE-1-6 outperforms NSDE, DE-1 and DE-3 in terms of scalability. For a visual comparison, the bar chart in Figure 10 shows the average fitness function values for Case 1 through Case 14.
In terms of convergence rate, the results in Table 7 indicate that the average number of iterations for SaNSDE-1-6 to find the best solutions is significantly smaller than those of NSDE, DE-1 and DE-3 for most test cases (with some exceptions). This indicates that SaNSDE-1-6 outperforms NSDE, DE-1 and DE-3 in convergence rate. For a visual comparison, the bar chart in Figure 11 shows the average number of generations for Case 1 through Case 10 and the bar chart in Figure 12 shows the average number of generations for Case 11 through Case 14.
To verify the convergence rate for POP = 50, we show the results of several runs of Case 5, Case 11, Case 12, Case 13 and Case 14 in Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17, respectively.
The results presented above are based on a comparison of the average number of generations. The results in Table 8 indicate that the average computation time for SaNSDE-1-6 to find the best solutions is significantly less than that of PSO for Case 1 through Case 10 and greater than those of NSDE, DE-1 and DE-3 for Case 1 through Case 10. That is, SaNSDE-1-6 outperforms PSO in terms of computation time for Case 1 through Case 10, while NSDE, DE-1 and DE-3 outperform SaNSDE-1-6 for the same cases. For Case 11 and Case 12, SaNSDE-1-6 outperforms PSO, NSDE, DE-1 and DE-3 in terms of computation time. For Case 13 and Case 14, PSO, NSDE, DE-1 and DE-3 outperform SaNSDE-1-6. As the experiments were run on the same platform as in [13], an older laptop delivered in 2019 with an Intel(R) Core(TM) i7 CPU (2.6 GHz base clock) and 16 GB of memory, the computation times of SaNSDE-1-6 are much longer for Case 12, Case 13 and Case 14. A more powerful computer or a server-class machine is required to apply the SaNSDE-1-6 algorithm to instances of this size.

6. Discussion and Conclusions

In this paper, we applied the self-adaptation concept to develop an algorithm that improves the performance of finding solutions for the DGRP formulated in a previous study. The self-adaptation mechanism attempts to identify the better mutation strategy so that it can be selected with higher probability in subsequent generations. To do so, the algorithm records the number of "success events" and the number of "failure events" for each mutation strategy over a learning period and calculates each strategy's selection probability from these counts. A strategy with a higher selection probability is chosen more often; a strategy with a lower selection probability is chosen less often. In this way, the quality of the solution found can be improved more efficiently in terms of the average number of generations for most cases. However, due to the additional computation in each iteration, the computation time of SaNSDE-1-6 is much longer for big cases.
A higher selection probability for a mutation strategy indicates a higher ratio between its number of "success events" and its total number of "success events" and "failure events". Using the more effective mutation strategy with higher probability is expected to improve the quality of the solution found, and the results presented in the previous section confirm that it does so significantly. The degree of improvement is case dependent. With $NP$ = 30, the improvement achieved using SaNSDE-1-6 is about 1% to 2% for Case 12, about 5% to 10% for Case 13 and about 10% to 28% for Case 14. With $NP$ = 50, the improvement is about 0.1674% to 0.73959% for Case 12, about 4.00326% to 7.9796% for Case 13 and about 3.1171% to 46.403% for Case 14. In short, SaNSDE-1-6 outperforms NSDE, DE-1 and DE-3 in terms of scalability: the bigger the problem size, the more significant the improvement.
In the real world, when one person fails to solve a problem alone, it is often easier to solve it by asking another person for help and working together, since each can consult the other when taking actions or making decisions. This way of solving problems is common in daily life, and the results of the experiments presented in this paper are consistent with it. In our self-adaptation mechanism, two strategies are involved in the solution-finding process, and one of them is selected based on the success probability learned during the learning period. To verify the effectiveness of the self-adaptation mechanism, we carried out experiments with several standard algorithms and our proposed algorithm, using two different population sizes, and compared the effectiveness of the single-strategy algorithms with that of the self-adaptation-based algorithm. Our results indicate that the proposed algorithm improves the performance and the convergence rate, in terms of the average number of generations required to find a solution, for most cases. Although it outperforms the other four algorithms in terms of performance and convergence rate for most cases, its computation time is much longer for several big cases due to the additional computation in each iteration. The results of this study have two implications. First, solving the DGRP with two strategies and a self-adaptation mechanism performs better than solving it with one strategy. Second, although the solution quality is improved and the average number of generations is reduced, the computation time of the proposed algorithm is much longer than those of the other four algorithms for bigger instances; this implies that either a more powerful computer or a proper divide-and-conquer strategy, dividing a big DGRP instance into smaller ones, must be used before applying the proposed algorithm. The computational experience in this paper also raises an interesting research question: does the proposed self-adaptive algorithm outperform the other four algorithms in general, beyond the test cases considered here? Answering this question requires further comparative analysis of the proposed algorithm. A comparative analysis of the algorithms studied in this paper for specific performance indicators is a challenging future research direction, as are studies of other performance evaluation indicators for the proposed algorithm and the extension of the success rate-based self-adaptive scheme to other evolutionary approaches.

Funding

This research was supported in part by the National Science and Technology Council, Taiwan, under Grant NSTC 111-2410-H-324-003.

Data Availability Statement

Data available in a publicly accessible repository described in the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Bruglieri, M.; Ciccarelli, D.; Colorni, A.; Luè, A. PoliUniPool: A carpooling system for universities. Procedia-Soc. Behav. Sci. 2011, 20, 558–567. [Google Scholar] [CrossRef]
  2. Hwang, K.; Giuliano, G. The Determinants of Ridesharing: Literature Review. Working Paper UCTC No. 38, The University of California Transportation Center. 1990. Available online: https://escholarship.org/uc/item/3r91r3r4 (accessed on 29 November 2023).
  3. Uber. Available online: https://www.uber.com (accessed on 29 November 2023).
  4. Lyft. Available online: https://www.lyft.com (accessed on 29 November 2023).
  5. BlaBlaCar. Available online: https://www.blablacar.com (accessed on 29 November 2023).
  6. Agatz, N.; Erera, A.; Savelsbergh, M.; Wang, X. Optimization for dynamic ride-sharing: A review. Eur. J. Oper. Res. 2012, 223, 295–303. [Google Scholar] [CrossRef]
  7. Furuhata, M.; Dessouky, M.; Ordóñez, F.; Brunet, M.; Wang, X.; Koenig, S. Ridesharing: The state-of-the-art and future directions. Transp. Res. Part B Methodol. 2013, 57, 28–46. [Google Scholar] [CrossRef]
  8. Mourad, A.; Puchinger, J.; Chu, C. A survey of models and algorithms for optimizing shared mobility. Transp. Res. Part B Methodol. 2019, 123, 323–346. [Google Scholar] [CrossRef]
  9. Martins, L.C.; Torre, R.; Corlu, C.G.; Juan, A.A.; Masmoudi, M.A. Optimizing ride-sharing operations in smart sustainable cities: Challenges and the need for agile algorithms. Comput. Ind. Eng. 2021, 153, 107080. [Google Scholar] [CrossRef]
  10. Ting, K.H.; Lee, L.S.; Pickl, S.; Seow, H.-V. Shared Mobility Problems: A Systematic Review on Types, Variants, Characteristics, and Solution Approaches. Appl. Sci. 2021, 11, 7996. [Google Scholar] [CrossRef]
  11. Hsieh, F.S.; Zhan, F.; Guo, Y. A solution methodology for carpooling systems based on double auctions and cooperative coevolutionary particle swarms. Appl. Intell. 2019, 49, 741–763. [Google Scholar] [CrossRef]
  12. Hsieh, F.S. A Comparative Study of Several Metaheuristic Algorithms to Optimize Monetary Incentive in Ridesharing Systems. ISPRS Int. J. Geo-Inf. 2020, 9, 590. [Google Scholar] [CrossRef]
  13. Hsieh, F.-S. Development and Comparison of Ten Differential-Evolution and Particle Swarm-Optimization Based Algorithms for Discount-Guaranteed Ridesharing Systems. Appl. Sci. 2022, 12, 9544. [Google Scholar] [CrossRef]
  14. Hsieh, F.S. Trust-Based Recommendation for Shared Mobility Systems Based on a Discrete Self-Adaptive Neighborhood Search Differential Evolution Algorithm. Electronics 2022, 11, 776. [Google Scholar] [CrossRef]
  15. Hsieh, F.-S. A Comparison of Three Ridesharing Cost Savings Allocation Schemes Based on the Number of Acceptable Shared Rides. Energies 2021, 14, 6931. [Google Scholar] [CrossRef]
  16. Hsieh, F.-S. Improving Acceptability of Cost Savings Allocation in Ridesharing Systems Based on Analysis of Proportional Methods. Systems 2023, 11, 187. [Google Scholar] [CrossRef]
  17. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
18. Hyland, M.; Mahmassani, H.S. Operational benefits and challenges of shared-ride automated mobility-on-demand services. Transp. Res. Part A Policy Pract. 2020, 134, 251–270.
19. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: New York, NY, USA, 1992.
20. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
21. Yang, X.S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications. SAGA 2009; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, pp. 169–178.
22. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 1997, 11, 341–359.
23. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126.
24. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061.
25. Li, J.; Wei, X.; Li, B.; Zeng, Z. A survey on firefly algorithms. Neurocomputing 2022, 500, 662–678.
26. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alex. Eng. J. 2022, 61, 3831–3872.
27. Eberhart, R.C.; Shi, Y. Comparison between genetic algorithms and particle swarm optimization. In Evolutionary Programming VII, Proceedings of the 7th International Conference, EP98, San Diego, CA, USA, 25–27 March 1998; Porto, V.W., Saravanan, N., Waagen, D., Eiben, A.E., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; Volume 1447, pp. 611–616.
28. Hassan, R.; Cohanim, B.; Weck, O.D. A comparison of particle swarm optimization and the genetic algorithm. In Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18–21 April 2005.
29. Tušar, T.; Filipič, B. Differential evolution versus genetic algorithms in multiobjective optimization. In Evolutionary Multi-Criterion Optimization; Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4403, pp. 257–271.
30. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1784–1791.
31. Omran, M.G.H.; Salman, A.; Engelbrecht, A.P. Self-adaptive differential evolution. In Computational Intelligence and Security; Lecture Notes in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3801, pp. 192–199.
32. Huang, V.L.; Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for constrained real-parameter optimization. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 324–331.
33. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417.
34. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2011, 42, 482–500.
35. Kumar, J.; Singh, A.K. Workload prediction in cloud using artificial neural network and adaptive differential evolution. Future Gener. Comput. Syst. 2018, 81, 41–52.
36. Rosić, M.B.; Simić, M.I.; Pejović, P.V. An improved adaptive hybrid firefly differential evolution algorithm for passive target localization. Soft Comput. 2021, 25, 5559–5585.
37. Yang, Z.; Tang, K.; Yao, X. Self-adaptive differential evolution with neighborhood search. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, 1–6 June 2008; pp. 1110–1116.
38. Yang, Z.; He, J.; Yao, X. Making a difference to differential evolution. In Advances in Metaheuristics for Hard Optimization; Michalewicz, Z., Siarry, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 415–432.
39. Xia, M.; Stallaert, J.; Whinston, A.B. Solving the combinatorial double auction problem. Eur. J. Oper. Res. 2005, 164, 239–251.
40. Data of Test Cases 1–10. Available online: https://drive.google.com/drive/folders/19Zj69lRsQP8z0uuiJOqfkHBegCvZE2Pe?usp=sharing (accessed on 11 August 2022).
41. Data of Test Cases 11–14. Available online: https://drive.google.com/drive/folders/1FxECvDt_5ZuXCuL0zNQUXza2Bg82G2Ds?usp=sharing (accessed on 8 July 2023).
Figure 1. A flowchart of the proposed algorithm.
Figure 2. Average fitness function values for $r_D = r_P = 0.1$ with POP = 30.
Figure 3. Average number of generations for Case 1 through Case 10 with $r_D = r_P = r = 0.1$ and POP = 30.
Figure 4. Average number of generations for Case 11 through Case 14 with $r_D = r_P = r = 0.1$ and POP = 30.
Figure 5. Convergence curves for a run of Case 5 for $r_D = r_P = r = 0.1$ with POP = 30.
Figure 6. Convergence curves for a run of Case 11 for $r_D = r_P = r = 0.1$ with POP = 30.
Figure 7. Convergence curves for a run of Case 12 for $r_D = r_P = r = 0.1$ with POP = 30.
Figure 8. Convergence curves for a run of Case 13 for $r_D = r_P = r = 0.1$ with POP = 30.
Figure 9. Convergence curves for a run of Case 14 for $r_D = r_P = r = 0.1$ with POP = 30.
Figure 10. Average fitness function values for $r_D = r_P = r = 0.1$ with POP = 50.
Figure 11. Average number of generations for Case 1 through Case 10 with $r_D = r_P = r = 0.1$ and POP = 50.
Figure 12. Average number of generations for Case 11 through Case 14 with $r_D = r_P = r = 0.1$ and POP = 50.
Figure 13. Convergence curves for a run of Case 5 for $r_D = r_P = r = 0.1$ with POP = 50.
Figure 14. Convergence curves for a run of Case 11 for $r_D = r_P = r = 0.1$ with POP = 50.
Figure 15. Convergence curves for a run of Case 12 for $r_D = r_P = r = 0.1$ with POP = 50.
Figure 16. Convergence curves for a run of Case 13 for $r_D = r_P = r = 0.1$ with POP = 50.
Figure 17. Convergence curves for a run of Case 14 for $r_D = r_P = r = 0.1$ with POP = 50.
Table 1. Notation of symbols, variables, and parameters.

Variable | Meaning
$P$ | Total number of passengers.
$D$ | Total number of drivers.
$p$ | Passenger index, $p \in \{1, 2, 3, \ldots, P\}$.
$d$ | Driver index, $d \in \{1, 2, 3, \ldots, D\}$.
$K$ | The number of location indices for all passengers; $K$ is equal to $P$.
$k$ | Location index, $k \in \{1, 2, \ldots, K\}$.
$J_d$ | Total number of bids of driver $d \in \{1, 2, \ldots, D\}$.
$j$ | Bid index of driver $d \in \{1, 2, 3, \ldots, D\}$, with $j \in \{1, 2, \ldots, J_d\}$.
$DB_{dj}$ | $DB_{dj} = (q_{dj1}^1, q_{dj2}^1, q_{dj3}^1, \ldots, q_{djP}^1, q_{dj1}^2, q_{dj2}^2, q_{dj3}^2, \ldots, q_{djP}^2, o_{dj}, c_{dj})$: driver $d$'s $j$-th bid, where $q_{djp}$ is the number of seats allocated for passenger $p$, $o_{dj}$ is the original cost of driver $d \in \{1, 2, \ldots, D\}$ if he/she travels alone, and $c_{dj}$ is the bid's travel cost.
$q_{djp}^1$ | Number of seats allocated at the pick-up location of passenger $p$; $q_{djp}^1 = q_{djp}$.
$q_{djp}^2$ | Number of seats released at the drop-off location of passenger $p$; $q_{djp}^2 = q_{djp}$.
$PB_p$ | $PB_p = (s_{p1}^1, s_{p2}^1, s_{p3}^1, \ldots, s_{pP}^1, s_{p1}^2, s_{p2}^2, s_{p3}^2, \ldots, s_{pP}^2, f_p)$: passenger $p$'s bid, where $s_{pk}$ is the number of seats requested by $p$ at location $k$ and $f_p$ is the original cost of $p$ without ridesharing.
$s_{pk}^1$ | Number of seats requested at passenger $p$'s pick-up location: $s_{pk}^1 = s_{pp}$ if $k = p$, and $s_{pk}^1 = 0$ otherwise.
$s_{pk}^2$ | Number of seats released at passenger $p$'s drop-off location: $s_{pk}^2 = s_{pp}$ if $k = p$, and $s_{pk}^2 = 0$ otherwise.
$x_{dj}$ | Decision variable for driver $d \in \{1, 2, \ldots, D\}$: $x_{dj} = 1$ if $DB_{dj}$ is a winning bid and $x_{dj} = 0$ otherwise.
$y_p$ | Decision variable for passenger $p \in \{1, 2, 3, \ldots, P\}$: $y_p = 1$ if $PB_p$ is a winning bid and $y_p = 0$ otherwise.
$r_D$ | Drivers' minimal expected cost savings discount.
$r_P$ | Passengers' minimal expected cost savings discount.
$F(x, y)$ | The objective function, $F(x, y) = \sum_{p=1}^{P} y_p f_p + \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} o_{dj} - \sum_{d=1}^{D} \sum_{j=1}^{J_d} x_{dj} c_{dj}$.
$\Gamma_{dj}$ | The set of passengers on the ride of bid $DB_{dj}$ of driver $d \in \{1, 2, \ldots, D\}$.
$F_{dj}(x, y)$ | Cost savings of bid $DB_{dj}$ of driver $d \in \{1, 2, \ldots, D\}$: $F_{dj}(x, y) = \left( \sum_{p \in \Gamma_{dj}} y_p f_p \right) + x_{dj} o_{dj} - x_{dj} c_{dj}$.
$cf_p^{dj}$ | Travel cost for passenger $p \in \Gamma_{dj}$ on the ride of bid $DB_{dj}$.
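To make the notation above concrete, the following minimal Python sketch evaluates the objective function $F(x, y)$ and the per-bid cost savings $F_{dj}(x, y)$ of Table 1 for a given assignment of the binary decision variables. The variable names (x, y, o, c, f, riders) are hypothetical and are not taken from the paper's implementation.

```python
# Hypothetical sketch of the objective in Table 1; names are illustrative only.

def objective(x, y, o, c, f):
    """F(x, y) = sum_p y[p]*f[p] + sum_{d,j} x[d][j]*(o[d][j] - c[d][j])."""
    passenger_part = sum(y[p] * f[p] for p in range(len(y)))
    driver_part = sum(x[d][j] * (o[d][j] - c[d][j])
                      for d in range(len(x)) for j in range(len(x[d])))
    return passenger_part + driver_part

def bid_savings(d, j, x, y, o, c, f, riders):
    """Per-bid cost savings F_dj; riders[d][j] plays the role of Gamma_dj."""
    return (sum(y[p] * f[p] for p in riders[d][j])
            + x[d][j] * (o[d][j] - c[d][j]))

# Toy example: one driver with a single winning bid carrying passengers 0 and 1.
x = [[1]]                  # x[0][0] = 1: driver 0's first bid wins
y = [1, 1]                 # both passenger bids win
o, c = [[10.0]], [[12.0]]  # drive-alone cost vs. the bid's travel cost
f = [6.0, 7.0]             # passengers' original costs without ridesharing
riders = [[{0, 1}]]        # the set Gamma of the single bid
print(objective(x, y, o, c, f))                  # 6 + 7 + (10 - 12) = 11.0
print(bid_savings(0, 0, x, y, o, c, f, riders))  # 11.0
```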
Table 2. Parameters for different algorithms and test cases.

Algorithm | Parameters for Case 1 through Case 10 | Parameters for Case 11 through Case 14
SaNSDE-1-6 | POP = 30, Gen = 1000, LP = 1000 | POP = 50, Gen = 50,000, LP = 1000
DE-1 | POP = 30, Gen = 1000, $CR$ = 0.5, $F$: a value randomly drawn from the uniform distribution on (0, 2) | POP = 50, Gen = 50,000, $CR$ = 0.5, $F$: a value randomly drawn from the uniform distribution on (0, 2)
DE-3 | POP = 30, Gen = 1000, $CR$ = 0.5, $F$: a value randomly drawn from the uniform distribution on (0, 2) | POP = 50, Gen = 50,000, $CR$ = 0.5, $F$: a value randomly drawn from the uniform distribution on (0, 2)
NSDE | POP = 30, Gen = 1000, $CR$ = 0.5, $F_i = 0.5 r_1 + 0.5$, where $r_1$ is a random value with Gaussian distribution $N(0, 1)$ | POP = 50, Gen = 50,000, $CR$ = 0.5, $F_i = 0.5 r_1 + 0.5$, where $r_1$ is a random value with Gaussian distribution $N(0, 1)$
PSO | POP = 30, Gen = 1000, $c_1$ = 0.4, $c_2$ = 0.6, $\omega$ = 0.4 | POP = 50, Gen = 50,000, $c_1$ = 0.4, $c_2$ = 0.6, $\omega$ = 0.4
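As an illustration of the parameter rules in Table 2, the hedged Python sketch below shows how the scale factor $F$ could be sampled for DE-1/DE-3 and for NSDE, and how a success-rate-based strategy probability in the spirit of SaNSDE [37] could be updated from success/failure counts accumulated over the learning period LP. The function names are hypothetical, and this is not the implementation used in the experiments.

```python
import random

def scale_factor_de():
    # DE-1 / DE-3 (Table 2): F drawn uniformly at random from (0, 2).
    return random.uniform(0.0, 2.0)

def scale_factor_nsde():
    # NSDE (Table 2): F_i = 0.5 * r1 + 0.5, with r1 ~ N(0, 1).
    return 0.5 * random.gauss(0.0, 1.0) + 0.5

def strategy_probability(ns1, nf1, ns2, nf2):
    # Success-rate update in the style of SaNSDE: ns*/nf* are the numbers of
    # successful/failed offspring produced by two candidate strategies during
    # the learning period LP; the result is the probability of strategy 1.
    a = ns1 * (ns2 + nf2)
    b = ns2 * (ns1 + nf1)
    return a / (a + b) if (a + b) > 0 else 0.5
```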
Table 3. Fitness function values for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 30; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 32.998 | 32.998 | 32.998 | 32.998 | 32.998
2 | 5 | 11 | 63.615 | 63.615 | 63.615 | 63.615 | 63.615
3 | 5 | 12 | 41.715 | 41.715 | 41.715 | 41.715 | 41.2892
4 | 6 | 12 | 51.11 | 51.11 | 51.11 | 51.11 | 50.9085
5 | 7 | 13 | 30.063 | 30.063 | 30.063 | 30.063 | 28.4254
6 | 8 | 14 | 72.328 | 72.328 | 72.328 | 72.328 | 70.2629
7 | 9 | 15 | 89.03 | 89.03 | 89.03 | 89.03 | 80.8106
8 | 10 | 16 | 54.02 | 54.02 | 54.02 | 54.02 | 44.0023
9 | 11 | 17 | 74.05 | 74.05 | 74.05 | 74.05 | 49.356
10 | 12 | 18 | 50.9 | 50.0623 | 50.9 | 50.9 | 32.8349
11 | 20 | 20 | 112.906 | 112.906 | 112.906 | 112.906 | 97.7979
12 | 30 | 30 | 202.15 | 196.9089 | 200.1078 | 200.7964 | 141.6005
13 | 40 | 40 | 201.8256 | 190.1664 | 179.4244 | 186.6996 | −1.5081
14 | 50 | 50 | 190.9436 | 137.1625 | 171.1756 | 161.3107 | −3.9598
Table 4. Average number of generations for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 30; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 9.6 | 16.6 | 19.8 | 15.6 | 64.6
2 | 5 | 11 | 19.2 | 32.2 | 36.9 | 29.1 | 299.6
3 | 5 | 12 | 29.4 | 39.7 | 47.6 | 43.3 | 394.5
4 | 6 | 12 | 19.6 | 43.8 | 50.3 | 44.1 | 320.9
5 | 7 | 13 | 16.7 | 31.8 | 44.1 | 37.3 | 304.1
6 | 8 | 14 | 20 | 48.3 | 67.8 | 39.6 | 375.6
7 | 9 | 15 | 60.8 | 101.4 | 135 | 70.7 | 553.6
8 | 10 | 16 | 48.2 | 61.3 | 78.6 | 59.5 | 447.5
9 | 11 | 17 | 53.9 | 59.7 | 65 | 64.2 | 580.7
10 | 12 | 18 | 106.4 | 136.8 | 146.3 | 94.3 | 489.3
11 | 20 | 20 | 191 | 436.5 | 817.1 | 542.5 | 21,314.5
12 | 30 | 30 | 1392.5 | 5691.5 | 16,065.1 | 14,417.2 | 21,742.3
13 | 40 | 40 | 18,439.777 | 15,185.8 | 27,574.4 | 27,260.1 | 22,734.2
14 | 50 | 50 | 19,390.5 | 17,131.7 | 22,745.3 | 36,742.7 | 25,822.2
Table 5. Average computation time (in milliseconds) for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 30; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 11.494 | 5.8907 | 6.7182 | 7.3339 | 15.1677
2 | 5 | 11 | 31.0079 | 21.2555 | 18.0998 | 19.4591 | 118.2929
3 | 5 | 12 | 45.7525 | 22.0871 | 25.3382 | 26.1856 | 130.6768
4 | 6 | 12 | 42.0781 | 25.937 | 25.8155 | 29.1062 | 113.5842
5 | 7 | 13 | 28.2401 | 18.4303 | 22.264 | 16.7606 | 100.2475
6 | 8 | 14 | 48.7842 | 25.7381 | 34.9751 | 28.5755 | 132.6016
7 | 9 | 15 | 139.7892 | 73.5523 | 87.1861 | 55.5141 | 264.7854
8 | 10 | 16 | 111.0234 | 45.3785 | 54.5258 | 51.003 | 200.8072
9 | 11 | 17 | 152.6207 | 50.241 | 50.4531 | 64.2084 | 286.7221
10 | 12 | 18 | 236.4897 | 108.65 | 106.4809 | 87.0697 | 199.9287
11 | 20 | 20 | 798.39717 | 1134.249 | 2142.048 | 1496.592 | 45,171.44
12 | 30 | 30 | 21,158.007 | 17,419.87 | 49,393.98 | 48,106.86 | 51,932.66
13 | 40 | 40 | 2,116,465.2 | 60,483.27 | 113,286.7 | 119,378.3 | 66,539.42
14 | 50 | 50 | 3,198,096.1 | 90,395.02 | 118,089.5 | 144,559.9 | 87,874.68
Table 6. Fitness function values for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 50; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 32.998 | 32.998 | 32.998 | 32.998 | 32.998
2 | 5 | 11 | 63.615 | 63.615 | 63.615 | 63.615 | 63.615
3 | 5 | 12 | 41.715 | 41.715 | 41.715 | 41.715 | 41.2892
4 | 6 | 12 | 51.11 | 51.11 | 51.11 | 51.11 | 51.11
5 | 7 | 13 | 30.063 | 30.063 | 30.063 | 30.063 | 30.063
6 | 8 | 14 | 72.328 | 72.328 | 72.328 | 72.328 | 69.9483
7 | 9 | 15 | 89.03 | 89.03 | 89.03 | 89.03 | 80.5986
8 | 10 | 16 | 54.02 | 54.02 | 54.02 | 54.02 | 46.8013
9 | 11 | 17 | 74.05 | 74.05 | 74.05 | 74.05 | 55.9356
10 | 12 | 18 | 50.9 | 50.9 | 50.9 | 50.9 | 31.1131
11 | 20 | 20 | 112.906 | 112.906 | 112.906 | 112.906 | 104.4808
12 | 30 | 30 | 202.15 | 201.8116 | 200.6549 | 201.8116 | 145.0514
13 | 40 | 40 | 202.4952 | 194.0284 | 190.4004 | 185.9914 | −1.2835
14 | 50 | 50 | 192.343 | 190.4874 | 157.1781 | 162.1984 | −3.6674
Table 7. Average number of generations for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 50; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 10.1 | 17.6 | 19.2 | 12.9 | 51.5
2 | 5 | 11 | 18.7 | 30.2 | 33 | 26.4 | 127.3
3 | 5 | 12 | 22.3 | 28.7 | 43.3 | 32.9 | 437.2
4 | 6 | 12 | 22.5 | 35.4 | 37.4 | 33.2 | 468.6
5 | 7 | 13 | 17.7 | 29.3 | 32.2 | 24.8 | 247.1
6 | 8 | 14 | 30 | 38 | 47.1 | 41.9 | 416.4
7 | 9 | 15 | 37.5 | 78.9 | 75.4 | 61.3 | 366.4
8 | 10 | 16 | 35.3 | 51.5 | 66.3 | 48.6 | 609.5
9 | 11 | 17 | 65.6 | 68.7 | 78.4 | 67 | 611.5
10 | 12 | 18 | 79.4 | 83.6 | 197.9 | 63.1 | 521.5
11 | 20 | 20 | 98.6 | 469.5 | 829.5 | 565.5 | 22,275.7
12 | 30 | 30 | 2216.7 | 8847.1 | 6494.7 | 26,379.5 | 20,738.9
13 | 40 | 40 | 15,231.4 | 14,944.6 | 18,551.7 | 21,166.5 | 30,168.9
14 | 50 | 50 | 16,730.2 | 28,597.9 | 32,025.9 | 25,665.8 | 33,480.8
Table 8. Average computation time (in milliseconds) for discrete SaNSDE-1-6, DE-1, DE-3, NSDE and PSO algorithms with NP = 50; $r_D = r_P = 0.1$.

Case | D | P | SaNSDE-1-6 | DE-1 | DE-3 | NSDE | PSO
1 | 3 | 10 | 10.2156 | 10.1452 | 10.1662 | 8.4324 | 18.3854
2 | 5 | 11 | 31.0283 | 24.8503 | 24.4498 | 23.7312 | 57.3819
3 | 5 | 12 | 42.8796 | 23.849 | 32.7592 | 31.2488 | 212.022
4 | 6 | 12 | 33.3958 | 31.4915 | 29.161 | 33.3068 | 205.7338
5 | 7 | 13 | 33.6951 | 25.0836 | 25.8515 | 25.2025 | 105.7623
6 | 8 | 14 | 57.5261 | 32.3747 | 37.1701 | 42.8416 | 213.5017
7 | 9 | 15 | 117.5981 | 83.4787 | 78.1544 | 75.29 | 205.419
8 | 10 | 16 | 84.5311 | 65.2448 | 71.6938 | 71.265 | 385.2561
9 | 11 | 17 | 105.9706 | 87.5031 | 93.5826 | 86.378 | 413.8388
10 | 12 | 18 | 148.0639 | 101.1059 | 232.0098 | 96.711 | 400.4348
11 | 20 | 20 | 679.74567 | 1395.58 | 2805.873 | 1912.745 | 51,907.85
12 | 30 | 30 | 14,772.423 | 35,531.47 | 25,760.51 | 118,419.7 | 57,773.8
13 | 40 | 40 | 2,016,957.3 | 83,749.26 | 104,356 | 131,989.4 | 108,469.1
14 | 50 | 50 | 2,890,556.2 | 171,886.2 | 233,575.7 | 201,288.2 | 145,752.5