A Comparative Study of Several Metaheuristic Algorithms to Optimize Monetary Incentive in Ridesharing Systems

Abstract: The strong demand for human mobility leads to excessive numbers of cars and raises the problems of serious traffic congestion, large amounts of greenhouse gas emissions, air pollution and insufficient parking space in cities. Although ridesharing is a potential transport mode to solve the above problems through car-sharing, it is still not widely adopted. Most studies consider non-monetary performance indices such as travel distance and successful matches in ridesharing systems. These performance indices fail to provide a strong incentive for ridesharing. The goal of this paper is to address this issue by proposing a monetary incentive performance indicator to improve the incentives for ridesharing. The objectives are to improve the incentive for ridesharing through the formulation of a monetary incentive optimization problem, the development of a solution methodology and the comparison of different solution algorithms. A non-linear integer programming optimization problem is formulated to optimize monetary incentive in ridesharing systems. Several discrete metaheuristic algorithms are developed to cope with the computational complexity of solving this problem. These include several discrete variants of particle swarm optimization algorithms, differential evolution algorithms and the firefly algorithm. The effectiveness of applying the above algorithms to solve the monetary incentive optimization problem is compared based on experimental results.


Introduction
The strong demand for human mobility leads to high use of private cars and raises the problems of serious traffic congestion, large amounts of greenhouse gas emissions, air pollution and insufficient parking space in cities. Ridesharing is a transportation mode that can be applied to solve the above problems in cities by sharing cars or vehicles. In the past decade, the subject of ridesharing has attracted the attention of researchers around the world. Surveys of the studies on ridesharing/carpooling problems can be found in References [1,2]. The results of the exploratory study by Genikomsakis et al. [3] indicate that people maintain a positive attitude towards the ridesharing and carpooling transportation modes. In the literature, several systems have been developed to validate ridesharing/carpooling concepts. For example, a carpooling system for universities called PoliUniPool was presented in Reference [4]. In terms of their characteristics and modes of operation, carpooling/ridesharing systems can be classified into the daily carpooling problem [5], the long-term carpooling problem [6] and the dynamic ridesharing problem [7].
To effectively support the operations of ridesharing systems, several issues need to be addressed. An important issue is to extract rides that are amenable to ridesharing based on mobility traces.

Research Question, Goals and Objectives
Based on the discussion of the deficiencies of existing studies above, the research questions addressed in this study are as follows: How can the monetary incentive issue in ridesharing systems be handled and the decision problem be formulated? How can metaheuristic algorithms be developed for this problem? How effective are different metaheuristic algorithms in solving this decision problem? The goal of this paper is to answer these questions through the development of a solution methodology. The objectives of this paper are to (1) address the monetary incentive issue in ridesharing systems by proposing a monetary incentive performance indicator to improve the incentives for ridesharing, (2) formulate the monetary incentive optimization problem, (3) develop metaheuristic solution algorithms based on modifications of several variants of PSO algorithms, DE algorithms and the firefly algorithm for the nonlinear monetary incentive optimization problem and (4) study the effectiveness of these metaheuristic algorithms based on experimental results. For the optimization of monetary incentive in ridesharing systems, a non-linear integer programming optimization problem is formulated. As finding a solution for this problem is computationally challenging, metaheuristic approaches are adopted to solve it. The contributions of this paper include: (1) proposing a problem formulation and associated solution algorithms to find and benefit the drivers and passengers with the highest monetary incentive for ridesharing and (2) comparing the effectiveness of several metaheuristic algorithms to provide a guideline for selecting an algorithm to solve the formulated problem.
The monetary incentive optimization problem is a non-linear integer programming problem with a large number of constraints, and its computational complexity arises from these constraints. An effective method to deal with constraints must therefore be employed in developing a solution algorithm. In the literature, several ways to handle constraints in constrained optimization problems have been proposed. The concepts behind these constraint-handling methods can be classified into three categories: preserving the feasibility of solutions, penalizing infeasible solutions and biasing feasible over infeasible solutions. Among the well-known constraint-handling methods, the penalty function method [36,37] and the method of biasing feasible over infeasible solutions [38] are the most popular. The latter does not rely on the setting of penalty coefficients or parameters to handle constraints. Therefore, we adopt a method based on biasing feasible over infeasible solutions [38] to develop the metaheuristic algorithms. As the solutions evolve in a continuous space, a transformation of solutions to the discrete space is required for each metaheuristic algorithm developed in this study.
To assess the effectiveness of the proposed metaheuristic algorithms, we conduct experiments on several test cases. The numerical results indicate that the discrete variant of the cooperative coevolving particle swarm optimization algorithm is significantly more effective than the other metaheuristic algorithms in solving a constrained optimization problem with a nonlinear objective function and binary decision variables. The results confirm the effectiveness of the proposed algorithm.
This paper is different from the brief paper in Reference [17] in that it is extended with a number of metaheuristic algorithms and a comparative study of these metaheuristic algorithms through experiments. As this paper focuses on optimization of monetary incentive in ridesharing systems, it is also different from the work in Reference [31] in two aspects. The objective function is not separable with respect to the decision variables and is a nonlinear function. This makes the optimization problem hard to solve. The problem formulated in this paper is a nonlinear integer programming problem with discrete decision variables, which is different from the one studied in References [10,31].
The remainder of this paper is organized as follows. In Section 2, we describe and formulate the monetary incentive optimization problem. The fitness function and the constraint-handling method are described in Section 3. Section 4 presents the metaheuristic algorithms. Section 5 presents and discusses the experimental results. Section 6 concludes the paper.

Problem Formulation
In this section, the problem formulation will be introduced. The notations required for the problem formulation are summarized in Table 1. A ridesharing system consists of a number of passengers, a number of drivers and the cars owned by the drivers. To describe the ridesharing problem, we use P to denote the number of passengers in the system and refer to a passenger as p, where p ∈ {1, 2, 3, ..., P}. Similarly, we use D to denote the number of drivers in the system and refer to a driver as d, where d ∈ {1, 2, 3, ..., D}. For simplicity, it is assumed that each driver owns one car, and we use d to refer to the car of driver d whenever it is clear from the context. The main function of a ridesharing system is to match the set of D drivers with the set of P passengers according to their requirements. To describe the itinerary of driver d, let Lo_d and Le_d be the origin and destination of driver d, respectively; R_d = (Lo_d, Le_d) denotes the itinerary of driver d ∈ {1, 2, 3, ..., D}. Similarly, to describe the itinerary of passenger p ∈ {1, 2, 3, ..., P}, we use Lo_p and Le_p to denote the origin and destination of passenger p, respectively; R_p = (Lo_p, Le_p) represents the itinerary of passenger p.
To formulate the monetary incentive optimization problem in ridesharing systems, the variables/symbols and their meaning are defined in Table 1.
In the ridesharing system considered in this study, drivers and passengers express their transportation requirements by submitting bids. Before submitting bids, it is assumed that drivers and passengers apply some bid generation software to generate bids. For example, the Bid Generation Procedures in Appendix II of Reference [31] may be applied to generate bids for drivers and passengers. However, the problem formulation presented below does not presume their use; drivers and passengers may apply any other bid generation software. The bids generated for each driver are based on his/her itinerary, R_d. Let J_d denote the number of bids generated and submitted by driver d. The jth bid submitted by driver d is represented by DB_dj = (q_dj1, q_dj2, q_dj3, ..., q_djK, o_dj, c_dj). The input data c_dj and o_dj, corresponding to the routing cost and the original travel cost of the jth bid of driver d, can be measured in any currency (such as USD, EURO, RMB, NT, etc.) appropriate for a specific application scenario.
Similarly, the bid generated for passenger p is also based on his/her itinerary, R_p. It is assumed that each passenger p submits only one bid. The bid generated and submitted by passenger p is represented by PB_p = (s_p1, s_p2, s_p3, ..., s_pK, f_p). The parameter f_p, denoting the original price of passenger p without ridesharing, can be measured in any currency (such as USD, EURO, RMB, NT, etc.) appropriate for a specific application scenario. A bid PB_p is called a winning bid if passenger p is selected by the ridesharing system to share a ride with a driver.
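To make the bid structures concrete, the driver bid DB_dj = (q_dj1, ..., q_djK, o_dj, c_dj) and passenger bid PB_p = (s_p1, ..., s_pK, f_p) can be sketched as plain records. The class and field names below are illustrative, not from the paper:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DriverBid:
    q: List[int]   # (q_dj1, ..., q_djK): capacity offered at each of the K locations/legs
    o: float       # o_dj: original travel cost of the driver without ridesharing
    c: float       # c_dj: routing cost of serving this bid

@dataclass
class PassengerBid:
    s: List[int]   # (s_p1, ..., s_pK): demand of the passenger at each of the K locations/legs
    f: float       # f_p: original price paid by the passenger without ridesharing

# A driver bid offering one seat on the first two legs of a K = 3 discretization
db = DriverBid(q=[1, 1, 0], o=10.0, c=12.5)
pb = PassengerBid(s=[1, 0, 0], f=6.0)
```

A solver can then evaluate candidate matchings directly against these records.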
The problem is to match drivers and passengers based on their itineraries so as to optimize monetary incentive. To achieve this goal, an objective function F(x, y) is defined as the ratio between the cost savings achieved through ridesharing and the original costs. Based on this monetary objective function, the monetary incentive optimization problem is to find the set of passengers and drivers such that the ratio between the cost savings and the original costs is maximized. Among the constraints, the capacity constraints and the binary restriction on x_dj take the form

Σ_{d=1}^{D} Σ_{j=1}^{J_d} x_dj q_djk ≥ Σ_{p=1}^{P} y_p s_pk, ∀k ∈ {1, 2, ..., K}    (3)

x_dj ∈ {0, 1}, ∀d, ∀j    (6)

The problem is subject to several constraints: (a) the capacity constraints in Equation (3), (b) the cost savings constraints in Equation (4) and (c) the winning bid constraint for each driver in Equation (5). In addition, the values of the decision variables must be binary, as specified in Equations (6) and (7). Note that the objective function F(x, y) for the optimization of monetary incentive is nonlinear; it is more complex than the simple linear function in Reference [31].
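The capacity constraint (3) can be checked mechanically. The sketch below is a hypothetical helper (assuming x is indexed by driver and bid, y by passenger) that verifies, for every k, that the seats offered by winning driver bids cover the seats requested by winning passengers:

```python
def capacity_ok(x, q, y, s, K):
    """Constraint (3): sum_d sum_j x[d][j] * q[d][j][k] >= sum_p y[p] * s[p][k] for all k."""
    for k in range(K):
        offered = sum(x[d][j] * q[d][j][k]
                      for d in range(len(x)) for j in range(len(x[d])))
        requested = sum(y[p] * s[p][k] for p in range(len(y)))
        if offered < requested:
            return False
    return True

# One driver with one winning bid offering a seat on legs 0 and 1,
# two passengers, only the first selected.
x = [[1]]
q = [[[1, 1, 0]]]
y = [1, 0]
s = [[1, 0, 0], [0, 0, 1]]
print(capacity_ok(x, q, y, s, K=3))  # True: the winning passenger is covered
```

Selecting the second passenger as well would violate (3), since no driver bid offers a seat on leg 2.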

Fitness Function and Constraint Handling Method
The problem formulated previously is a constrained optimization problem with binary decision variables and rational objective function. The objective function F(x, y) in the above optimization problem is not an additively separable function, which is different from the additively separable objective function used in Reference [31]. In addition, the computational complexity arises from constraints in the problem formulation. An effective method to deal with constraints must be employed to develop a solution algorithm.
In the existing literature, several ways to deal with constraints in constrained optimization problems have been proposed and used. The concepts behind these constraint-handling methods include preserving the feasibility of solutions, penalizing infeasible solutions and discriminating between feasible and infeasible solutions. Among the well-known methods to handle constraints, the penalty function method [36,37] and the discrimination of feasible/infeasible solutions method [38] are the most widely used.
Although the penalty function method is easy to use, it relies on a proper setting of the penalty coefficients, and its performance depends on them; improper penalty coefficients often seriously degrade performance. In addition, there is still no good general way to set the penalty coefficients properly. The approach of discriminating feasible/infeasible solutions works without relying on coefficients or parameters to handle constraints. Therefore, we adopt the approach of discriminating feasible/infeasible solutions [38]. The details of applying this approach are described below.
The discriminating feasible/infeasible solutions method assigns each feasible solution its original objective function value. For an infeasible solution, instead of evaluating the objective function directly, the method derives the fitness value from the objective function value of the worst feasible solution in the current population. In this way, the performance degradation caused by an improper setting of penalty coefficients is avoided.
As the discriminating feasible/infeasible solutions method [38] is adopted to handle constraints, we need the objective function value of the worst feasible solution in the current population. To achieve this, we define S_f = {(x, y) | (x, y) satisfies constraints (3)~(7)} as the set of feasible solutions in the current population. The objective function value of the worst feasible solution in the current population can then be calculated as S_f^min = min_{(x,y) ∈ S_f} F(x, y). For the discriminating feasible/infeasible solutions method, we define the fitness function F_1(x, y) as F(x, y) if (x, y) is feasible; otherwise, F_1(x, y) is obtained by subtracting from S_f^min the total constraint violation, measured by terms such as max(Σ_{p=1}^{P} y_p s_pk − Σ_{d=1}^{D} Σ_{j=1}^{J_d} x_dj q_djk, 0.0) for the capacity constraints.
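As a sketch of this constraint-handling rule for a maximization problem (`objective`, `feasible` and `violation` are placeholder callables, not the paper's exact formulas):

```python
def discrimination_fitness(population, objective, feasible, violation):
    """Fitness F1 under the feasible/infeasible discrimination rule [38]:
    feasible solutions keep their objective value; infeasible ones are ranked
    below the worst feasible solution by subtracting their constraint violation."""
    feas_vals = [objective(p) for p in population if feasible(p)]
    worst_feasible = min(feas_vals) if feas_vals else 0.0
    return [objective(p) if feasible(p) else worst_feasible - violation(p)
            for p in population]

pop = [0.4, 0.9, 1.3]            # scalar stand-ins for (x, y) solutions
obj = lambda p: p                # objective to maximize
feas = lambda p: p <= 1.0        # p > 1.0 violates the (toy) constraint
viol = lambda p: max(p - 1.0, 0.0)
fit = discrimination_fitness(pop, obj, feas, viol)
print(fit)  # infeasible 1.3 is ranked below the worst feasible value 0.4
```

No penalty coefficient appears anywhere, which is the point of adopting this rule over the penalty function method.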

Implementation of Discrete Metaheuristic Algorithms
In this study, several discrete versions of metaheuristic algorithms have been developed for solving the monetary incentive optimization problem. The notations required for describing these metaheuristic algorithms are summarized in Table 2. These metaheuristic algorithms include the discrete PSO, discrete comprehensive learning particle swarm optimization (CLPSO) [39], discrete DE, discrete firefly, discrete adaptive learning particle swarm optimization (ALPSO) and discrete cooperative coevolving particle swarm optimization (CCPSO) algorithms. All these algorithms will be compared by conducting experiments in the next section. As all these discrete algorithms are developed by transforming continuous solutions to binary solutions, all of them will be presented in this paper with the exception of the discrete ALPSO algorithm. Just like the other algorithms presented in this section, the discrete ALPSO algorithm is developed from its continuous counterpart [40] by adding a procedure that transforms the continuous solution space to the binary solution space in the evolution process; it is therefore omitted to save space. For each algorithm presented in this section, the stopping criterion is based on the maximum number of generations parameter, MAX_GEN.

Table 2. Notations for the metaheuristic algorithms.
Gz: the global best; Gz_n: the nth element of the vector Gz, where n ∈ {1, 2, ..., N}
c_1: a non-negative real parameter less than 1
c_2: a non-negative real parameter less than 1
r_1: a random variable with uniform distribution U(0, 1)
r_2: a random variable with uniform distribution U(0, 1)
V_max: the maximum value of velocity
s(v_in): the probability of the bit v_in
p_c: the learning probability, where 0 < p_c < 1
rp: a random variable with uniform distribution U(0, 1)
NS: the total number of swarms
SW_s: a swarm, where s ∈ {1, 2, ..., NS}
ω_1 and ω_2: weighting factors for updating velocity; 0 ≤ ω_1 ≤ 1; ω_1 + ω_2 = 1
θ: a scaling factor for updating velocity; θ > 0
DS: a set of integers
ds: an integer selected from DS
ẑ: the context vector obtained by concatenating the global best particles from all NS swarms
SW_s.z_i: the ith particle in the sth swarm SW_s
SW_s.z_i^p: the personal best of the ith particle in swarm SW_s
SW_s.ẑ: the global best of the component of the swarm SW_s
r_ij: the distance between firefly i and firefly j
γ: the light absorption coefficient
β_0: the attractiveness when the distance r_ij between firefly i and firefly j is zero

Discrete PSO Algorithm

The original standard PSO algorithm was proposed in Reference [23] to solve problems with a continuous solution space. In the standard PSO algorithm, each particle of the swarm adjusts its trajectory based on its own flying experience as well as the flying experiences of other particles.
Let v_i = (v_i1, v_i2, v_i3, ..., v_iN) and z_i = (z_i1, z_i2, z_i3, ..., z_iN) be the velocity and position of particle i, respectively. Let Pz_i = (Pz_i1, Pz_i2, Pz_i3, ..., Pz_iN) be the best historical position (the personal best) of particle i. Let Gz = (Gz_1, Gz_2, Gz_3, ..., Gz_N) be the best historical position (the global best) of the entire swarm. The velocity of particle i on dimension n in iteration t + 1, where n ∈ {1, 2, 3, ..., N}, is updated as follows:

v_in(t + 1) = ω v_in(t) + c_1 r_1 (Pz_in − z_in(t)) + c_2 r_2 (Gz_n − z_in(t))    (8)

where ω is a parameter called the inertia weight, c_1 and c_2 are positive constants referred to as the cognitive and social parameters, respectively, and r_1 and r_2 are random numbers generated from a uniform distribution on [0, 1]. As the original PSO algorithm was proposed to solve problems with a continuous solution space, it cannot be applied directly to the optimization problem with binary decision variables; a transformation of solutions from the continuous space to the binary space is required. As the solution space of the formulated optimization problem is binary, we define a procedure CS(a) in Table 3 to transform the continuous solution (CS) space to the binary solution space. The procedure CS(a) is invoked as needed in the different variants of the PSO algorithm and in the DE algorithm. Table 3. The pseudocode for procedure CS(a). Figure 1 shows the flowchart for the discrete PSO algorithm. Table 4 shows the pseudocode for the discrete PSO algorithm.
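A minimal sketch of one discrete-PSO update combines Eq. (8) with a sigmoid-based continuous-to-binary transform. The sigmoid form is the usual binary-PSO choice and is an assumption here, since Table 3 is not reproduced; parameter values are illustrative:

```python
import math
import random

def cs(v):
    """CS(a)-style transform: map a real velocity to a bit, sampled with
    probability s(v) = 1 / (1 + e^(-v))."""
    return 1 if random.random() < 1.0 / (1.0 + math.exp(-v)) else 0

def pso_step(v, z, pz, gz, omega=0.7, c1=0.5, c2=0.5, vmax=4.0):
    """One velocity update per Eq. (8), clamped to [-V_max, V_max],
    followed by the continuous-to-binary transformation of the position."""
    for n in range(len(z)):
        r1, r2 = random.random(), random.random()
        v[n] = omega * v[n] + c1 * r1 * (pz[n] - z[n]) + c2 * r2 * (gz[n] - z[n])
        v[n] = max(-vmax, min(vmax, v[n]))
        z[n] = cs(v[n])
    return v, z

v, z = [0.0] * 4, [0, 1, 0, 1]
v, z = pso_step(v, z, pz=[1, 1, 0, 0], gz=[1, 0, 0, 1])
print(z)  # a binary position vector
```

Clamping the velocity to [-V_max, V_max] keeps s(v) away from 0 and 1, so every bit retains a chance of flipping.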

Figure 1. A flowchart for the discrete particle swarm optimization (PSO) algorithm. Table 4. The pseudocode for the discrete PSO algorithm.

Discrete PSO Algorithm
t ← 0
Generate particle z_i for each i ∈ {1, 2, ..., NP} in the population
Evaluate the fitness function F_1(z_i) for each particle z_i, where i ∈ {1, 2, ..., NP}
Determine the personal best Pz_i for each i ∈ {1, 2, ..., NP}
Determine the global best Gz of the swarm
While (stopping criteria not satisfied)
    Transform each element of v_i into one or zero

Discrete CLPSO Algorithm
The CLPSO algorithm [39] is a well-known variant of the PSO algorithm. The CLPSO algorithm works based on the learning probability p_c, where 0 < p_c < 1. The CLPSO algorithm generates a random number for each dimension of particle i. The corresponding dimension of particle i learns from its own personal best if the random number generated is larger than p_c. Otherwise, the particle learns from the personal best of the better of two randomly selected particles, m_1 and m_2; let m denote the better particle. The velocity of the particle in dimension n is then updated according to (10):

v_in(t + 1) = ω v_in(t) + c_1 r_1 (Pz_mn − z_in(t))    (10)

where Pz_mn is the nth element of the personal best of the selected exemplar (the particle's own personal best when the random number exceeds p_c, and the personal best of m otherwise). As the original CLPSO algorithm was proposed to solve problems with a continuous solution space, it cannot be applied directly to the optimization problem with binary decision variables; a transformation of solutions from the continuous space to the binary space is needed. Figure 2 shows the flowchart for the discrete CLPSO algorithm. Table 5 shows the pseudocode for the discrete CLPSO algorithm.
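The per-dimension exemplar selection can be sketched as follows (a hypothetical helper; fitness is to be maximized, and the tournament between two randomly selected particles follows the description above):

```python
import random

def select_exemplars(i, n_dims, fitness, p_c=0.3):
    """For each dimension of particle i, pick the particle whose personal best
    it learns from: itself if rp > p_c, otherwise the better of two randomly
    selected particles m1 and m2."""
    exemplar = []
    for _ in range(n_dims):
        if random.random() > p_c:
            exemplar.append(i)               # learn from own personal best
        else:
            m1, m2 = random.sample(range(len(fitness)), 2)
            exemplar.append(m1 if fitness[m1] >= fitness[m2] else m2)
    return exemplar

ex = select_exemplars(i=0, n_dims=6, fitness=[0.2, 0.5, 0.1, 0.4])
print(ex)  # one exemplar index per dimension
```

Because each dimension can follow a different exemplar, the swarm's search directions are more diverse than in the standard PSO, which always follows Pz_i and Gz.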

Discrete CLPSO Algorithm
Determine the personal best Pz_i of each particle
Determine the global best Gz of the swarm
While (stopping criteria not satisfied)
    For each n ∈ {1, 2, ..., N}
        Generate a random variable rp with uniform distribution U(0, 1)
        If rp > p_c
            Generate r_1, a random variable with uniform distribution U(0, 1)
            Generate r_2, a random variable with uniform distribution U(0, 1)
            Calculate the velocity of particle i according to (10) based on its own personal best
        Else
            Randomly select two distinct integers m_1 and m_2 from {1, 2, ..., NP}
            Update the velocity according to (10) based on the personal best of the better particle

Discrete CCPSO Algorithm
The discrete CCPSO algorithm adopts a divide-and-conquer strategy to decompose a problem into smaller ones. To decompose the original higher-dimensional problem into smaller subproblems, a set of integers, DS, is defined first and an integer ds is selected from DS. The discrete CCPSO algorithm then decomposes the decision variables into NS swarms based on the selected integer ds. To present the discrete CCPSO algorithm, let z be the vector obtained by concatenating the decision variables x and y of the problem formulation. To share information in the cooperative coevolution processes, let ẑ be the context vector obtained by concatenating the global best particles from all NS swarms. Three algorithmic parameters are used to update particles in the discrete CCPSO algorithm: the weighting factors ω_1 and ω_2 and the scaling factor θ. The ith particle in the sth swarm SW_s is denoted as SW_s.z_i, its personal best as SW_s.z_i^p and the global best of the component of swarm SW_s as SW_s.ẑ. In each iteration, the personal best and velocity of each particle are updated according to (11); the swarm best and the context vector are also updated. Figure 3 shows the flowchart for the discrete CCPSO algorithm. The discrete CCPSO algorithm can be presented by the pseudocode in Table 6.
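The decomposition and the context vector ẑ can be sketched as follows, assuming (as is common in cooperative coevolution) that the N variables are split into consecutive groups of ds:

```python
def decompose(n_dims, ds):
    """Split dimensions {0, ..., n_dims-1} into NS = ceil(n_dims / ds) swarms,
    each owning up to ds consecutive decision variables."""
    return [list(range(start, min(start + ds, n_dims)))
            for start in range(0, n_dims, ds)]

def context_vector(groups, swarm_bests, n_dims):
    """Concatenate every swarm's global-best component into the context
    vector z_hat used to evaluate a full candidate solution."""
    z_hat = [0] * n_dims
    for dims, best in zip(groups, swarm_bests):
        for pos, d in enumerate(dims):
            z_hat[d] = best[pos]
    return z_hat

groups = decompose(7, ds=3)
print(groups)  # [[0, 1, 2], [3, 4, 5], [6]]
print(context_vector(groups, [[1, 0, 1], [0, 0, 1], [1]], 7))  # [1, 0, 1, 0, 0, 1, 1]
```

To evaluate a particle of swarm s, its component is substituted into ẑ at the dimensions owned by s, and the fitness of the resulting full vector is computed.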

Discrete Firefly Algorithm
The firefly algorithm was inspired by the flashing pattern and behavior of fireflies [24]. The firefly algorithm works under the assumption that all fireflies are attracted to other ones independently of their sex. The less bright fireflies tend to move towards the brighter ones, and a firefly moves randomly if no brighter one can be found. To describe the discrete firefly (DF) algorithm, we use r_ij to denote the distance between firefly i and firefly j and γ to denote the light absorption coefficient. The parameter β_0 represents the attractiveness when the distance r_ij between firefly i and firefly j is zero, and β_0 e^(−γ r_ij²) is the attractiveness when the distance r_ij is greater than zero. Let ε_in(t) be a random number generated from a uniform distribution on [0, 1] and α(t) be a constant parameter in [0, 1]. We define a function T(x) = (e^(2|x|) − 1) / (e^(2|x|) + 1) to transform a real value into a value in [0, 1]. Consider two fireflies, i and j, in the discrete firefly algorithm. Firefly i will move towards firefly j if the fitness function value of firefly i is less than that of firefly j, according to (12). Loosely speaking, for the monetary incentive optimization problem in ridesharing systems, a firefly may play the role of a leader or a follower depending on the quality of the solution it has found. In the ridesharing problem, a solution with a better fitness function value provides a stronger monetary incentive for drivers and passengers to share rides. A firefly plays the role of a leader when it finds a solution with a better fitness function value; in this case, it attracts other fireflies whose solutions are inferior to move closer to it in order to find better solutions.
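A sketch of the transfer function T(x) and the attraction step follows (the perturbation term uses the paper's uniform [0, 1] ε; the parameter values are illustrative):

```python
import math
import random

def T(x):
    """T(x) = (e^(2|x|) - 1) / (e^(2|x|) + 1), i.e. tanh(|x|), mapping any
    real value into [0, 1)."""
    return (math.exp(2 * abs(x)) - 1) / (math.exp(2 * abs(x)) + 1)

def attract(z_i, z_j, gamma=1.0, beta0=1.0, alpha=0.2):
    """Move the dimmer firefly i towards the brighter firefly j with
    attractiveness beta0 * exp(-gamma * r_ij^2) plus a random perturbation."""
    r2 = sum((a - b) ** 2 for a, b in zip(z_i, z_j))   # squared distance r_ij^2
    beta = beta0 * math.exp(-gamma * r2)
    return [zi + beta * (zj - zi) + alpha * random.random()
            for zi, zj in zip(z_i, z_j)]

print(T(0.0))   # 0.0
z_new = attract([0.0, 0.0], [1.0, 1.0])
```

In the discrete variant, the real-valued coordinates produced by `attract` would then be mapped to bits via T before the fitness evaluation.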
The discrete firefly algorithm can be described by a flowchart and a pseudocode. Figure 4 shows the flowchart for the discrete firefly algorithm. Table 7 shows the pseudocode for the discrete firefly algorithm.
Move firefly i toward firefly j in the N-dimensional space according to the following formula:
z_in ← z_in + β_0 e^(−γ r_ij²) (z_jn − z_in) + α ε_in
Update firefly i as follows:
Generate rsid, a random variable with uniform distribution U(0, 1)

Discrete Differential Evolution Algorithm
There are three parameters in the DE algorithm, including scale factor, crossover rate and the number of individuals in the population (population size). To describe the DE algorithm, let N, NP, CR and F denote the problem dimension, population size, crossover rate and scale factor, respectively. The scale factor for individual i is denoted by F i . We use tv i to denote a mutant vector for individual i, where i ∈ {1, 2, . . . , NP }.
A DE algorithm starts by generating a random population of trial individuals z_i = (z_i1, z_i2, z_i3, ..., z_iN) for i = 1, 2, ..., NP and n = 1, 2, ..., N. The DE algorithm attempts to improve the quality of the trial population iteratively; in each iteration, a new generation replaces the previous one. In the course of generating a new generation, a mutant vector tv_i = (tv_i1, tv_i2, tv_i3, ..., tv_iN) is generated for individual z_i by applying a search (mutation) strategy S. Several search strategies have been proposed for DE in the existing literature. Six well-known DE search strategies are as follows, for n = 1, 2, ..., N:

Strategy 1: tv_in = z_bn + F_i (z_r1n − z_r2n)
Strategy 2: tv_in = z_r1n + F_i (z_r2n − z_r3n)
Strategy 3: tv_in = z_in + F_i (z_bn − z_in) + F_i (z_r1n − z_r2n)
Strategy 4: tv_in = z_bn + F_i (z_r1n − z_r2n) + F_i (z_r3n − z_r4n)
Strategy 5: tv_in = z_r1n + F_i (z_r2n − z_r3n) + F_i (z_r4n − z_r5n)
Strategy 6: tv_in = z_in + F_i (z_r1n − z_in) + F_i (z_r2n − z_r3n)

where the index b refers to the best individual z_b and z_r1n, z_r2n, z_r3n, z_r4n and z_r5n are elements of randomly selected individuals; that is, r_1, r_2, r_3, r_4 and r_5 are distinct random integers between 1 and NP.
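The six strategies can be sketched in vector form. The numbering below follows the common DE/x/y convention and is an assumption, since the paper's own equation numbering is not reproduced here:

```python
import random

def mutate(pop, i, best, F, strategy):
    """Build mutant vector tv_i from population pop using one of six classic
    DE strategies; r1..r5 are distinct indices different from i."""
    r = random.sample([k for k in range(len(pop)) if k != i], 5)
    zb, zi = pop[best], pop[i]
    z = [pop[k] for k in r]
    def lin(*terms):
        # element-wise sum of (coefficient, vector) terms
        return [sum(c * vec[n] for c, vec in terms) for n in range(len(zi))]
    if strategy == 1:  # DE/best/1
        return lin((1, zb), (F, z[0]), (-F, z[1]))
    if strategy == 2:  # DE/rand/1
        return lin((1, z[0]), (F, z[1]), (-F, z[2]))
    if strategy == 3:  # DE/rand-to-best/1
        return lin((1, zi), (F, zb), (-F, zi), (F, z[0]), (-F, z[1]))
    if strategy == 4:  # DE/best/2
        return lin((1, zb), (F, z[0]), (-F, z[1]), (F, z[2]), (-F, z[3]))
    if strategy == 5:  # DE/rand/2
        return lin((1, z[0]), (F, z[1]), (-F, z[2]), (F, z[3]), (-F, z[4]))
    if strategy == 6:  # DE/current-to-rand/1
        return lin((1, zi), (F, z[0]), (-F, zi), (F, z[1]), (-F, z[2]))
    raise ValueError("strategy must be in 1..6")

pop = [[float(i), float(-i)] for i in range(6)]   # toy population, NP = 6
tv = mutate(pop, i=0, best=3, F=0.5, strategy=2)
print(len(tv))  # 2
```

Each mutant would then be crossed over with its parent (rate CR) and binarized before the fitness comparison.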
The standard DE algorithm was originally proposed to solve problems in a continuous search space. Therefore, it is necessary to transform each element of an individual vector into one or zero in the discrete DE algorithm. The discrete DE algorithm can be described by a flowchart and a pseudocode: Figure 5 shows the flowchart and Table 8 shows the pseudocode for the discrete DE algorithm.

Results
In this section, we conduct experiments by generating the data for several test cases and then apply four discrete variants of PSO algorithms, discrete variants of DE algorithms and the discrete firefly algorithm to find solutions for the test cases. First, we briefly introduce the data and parameters for test cases and the metaheuristic algorithms. We then compare different metaheuristic algorithms based on the results of experiments. The outputs obtained by applying different metaheuristic algorithms to each test case are summarized and analyzed in this section.

Data and Parameters
The input data are created by first arbitrarily selecting a real geographical area. Then, the locations of drivers and passengers are randomly generated within the selected geographical area. The procedure for generating input data is therefore general and can be applied to other real-world geographical areas. The test cases are generated based on a real geographical area in the central part of Taiwan. The data for each example are represented by bids. The data (bids) for these test cases are available for download from: https://drive.google.com/drive/folders/1pl_bYMtWUCbGODYDmDr2aX2h_7WXSeNZ?usp=sharing.
To illustrate the elements of a typical test case's data, the details of a small example are introduced first.
An Example: Consider a ridesharing system with one driver and four passengers. The origins and destinations of the driver and passengers are listed in Table 9. Table 10 shows the bid generated for Driver 1 by applying the bid generation procedure in Appendix II of Reference [31]. The bids generated for all passengers are shown in Table 11. Four discrete variants of PSO algorithms, six discrete variants of DE algorithms and the discrete firefly algorithm (FA) are applied to find solutions for this example. The parameters used for each metaheuristic algorithm in this study are as follows.
The parameters for the discrete CCPSO algorithm are:
The parameters for the discrete PSO algorithm are:
The parameters for the firefly algorithm (FA) are:
The parameters for the CLPSO algorithm are:
The parameters for the DE algorithm are:
CR = 0.5
F: Gaussian random variable with zero mean and standard deviation set to 1.0
V_max = 4
MAX_GEN = 10,000
Population size NP = 10
For this example, all the above algorithms obtain the same solution x_1,1 = 1, y_1,1 = 1, y_2,1 = 0, y_3,1 = 0, y_4,1 = 0. The solution indicates that Driver 1 will share a ride with Passenger 1 only to optimize monetary incentive. The objective function value for this solution is 0.12. Figure 6 shows the results on Google Maps.


Comparison of Different Metaheuristic Algorithms
Experiments for several test cases have been conducted to compare different metaheuristic algorithms. The parameters for running all the algorithms for Case 1 through Case 6 are the same as those used by the Example in Section 5.1, with the exception that the population size NP is either set to 10 or 30. The parameters for running all the algorithms for Case 7 and Case 8 are the same as those used by the Example in Section 5.1, with the exception that the maximum number of generations MAX_GEN is set to 50,000 and the population size NP is either set to 10 or 30. The results are as follows.
By setting the population size NP to 10 and applying the discrete PSO, discrete firefly (FA), discrete ALPSO and discrete CCPSO algorithms to solve the problems, we obtained the results in Table 12. The table indicates that the discrete CCPSO algorithm outperforms the discrete firefly and discrete ALPSO algorithms. Although the average fitness function values of the discrete PSO and discrete CCPSO algorithms are the same for the small test cases (Case 1 through Case 6), the average number of generations needed by the discrete CCPSO algorithm is smaller than that of the discrete PSO algorithm for most test cases. In particular, the discrete CCPSO algorithm outperforms the discrete PSO algorithm in terms of both the average fitness function values and the average number of generations needed for the larger test cases (Case 7 and Case 8). This indicates that the discrete CCPSO algorithm outperforms the discrete PSO algorithm for most test cases when the population size NP is 10. For Table 12, the corresponding bar chart of the average fitness function values of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 7.
For Table 12, the corresponding bar chart of the average number of generations of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 8.
By setting the population size NP to 10 and applying the discrete DE algorithm with six well-known strategies to solve the problems, we obtained Tables 13 and 14. Comparing Tables 12-14 indicates that the discrete CCPSO algorithm outperforms the discrete DE algorithm for most test cases. The discrete DE algorithm performs as well as the discrete CCPSO algorithm only for Test Case 1. For Test Case 2, only two DE strategies (Strategy 1 and Strategy 3) perform as well as the discrete CCPSO algorithm. The discrete CCPSO algorithm outperforms the discrete DE algorithm for Test Case 3 through Test Case 6. This indicates that the discrete CCPSO algorithm outperforms the discrete DE algorithm when the population size NP is 10.
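The six DE strategies are defined earlier in the paper, and their exact numbering is not restated in this section. As an illustration only, the sketch below uses one common textbook numbering of the classical DE mutation formulas together with a simple round-and-clip discretization; it is not the paper's implementation, and the paper's Strategy 1 through Strategy 6 may be ordered differently.

```python
def de_mutant(strategy, target, best, r, F=0.8):
    """Classical DE mutation formulas (one common numbering, assumed
    here for illustration). `target` is the current individual, `best`
    the population best, `r` five distinct other population members."""
    n = len(target)
    v = [0.0] * n
    for d in range(n):
        if strategy == 1:    # DE/best/1
            v[d] = best[d] + F * (r[0][d] - r[1][d])
        elif strategy == 2:  # DE/rand/1
            v[d] = r[0][d] + F * (r[1][d] - r[2][d])
        elif strategy == 3:  # DE/rand-to-best/1
            v[d] = target[d] + F * (best[d] - target[d]) + F * (r[0][d] - r[1][d])
        elif strategy == 4:  # DE/best/2
            v[d] = best[d] + F * (r[0][d] + r[1][d] - r[2][d] - r[3][d])
        elif strategy == 5:  # DE/rand/2
            v[d] = r[0][d] + F * (r[1][d] + r[2][d] - r[3][d] - r[4][d])
        else:                # DE/current-to-rand/1
            v[d] = target[d] + F * (r[0][d] - target[d]) + F * (r[1][d] - r[2][d])
    return v

def discretize(v, lo, hi):
    """Round and clip a continuous mutant vector back onto the integer
    decision domain [lo, hi], as a discrete DE variant might do."""
    return [max(lo, min(hi, int(round(x)))) for x in v]
```

After mutation, a standard DE would apply crossover with the target vector and greedy selection; only the strategy-dependent mutation step is shown here.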
For Tables 13 and 14, the corresponding bar chart of the average fitness function values of the discrete DE algorithms with Strategies 1 through 6 (population size = 10) is shown in Figure 9. For Tables 13 and 14, the corresponding bar chart of the average number of generations of the discrete DE algorithms with Strategies 1 through 6 (population size = 10) is shown in Figure 10. We obtained Table 15 by applying the discrete PSO, discrete firefly, discrete CLPSO, discrete ALPSO and discrete CCPSO algorithms to solve the problems with population size NP = 30. Table 15 indicates that the average fitness function values found by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms are the same for small test cases (Test Case 1 through Test Case 6).
The results indicate that the discrete PSO, discrete ALPSO and discrete CCPSO algorithms outperform the discrete FA and discrete CLPSO algorithms for small test cases (Test Case 1 through Test Case 6). The discrete CCPSO algorithm outperforms the discrete PSO, discrete FA, discrete CLPSO and discrete ALPSO algorithms for the larger test cases (Test Case 7 and Test Case 8). The discrete CCPSO algorithm not only outperforms the discrete PSO, discrete FA, discrete CLPSO and discrete ALPSO algorithms in terms of the average fitness function values found; the average numbers of generations needed by the discrete CCPSO algorithm to find the best solutions are also less than those of the discrete PSO algorithm and the discrete ALPSO algorithm for most test cases. This indicates that the discrete CCPSO algorithm is more efficient than the discrete PSO, discrete FA, discrete CLPSO and discrete ALPSO algorithms for most test cases when the population size NP is 30. For Table 15, the corresponding bar chart of the average fitness function values of the discrete PSO, CCPSO, CLPSO, ALPSO and firefly (FA) algorithms is shown in Figure 11.
For Table 15, the corresponding bar chart of the average number of generations of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 12.
We obtained Tables 16 and 17 by setting the population size NP to 30 and applying the discrete DE algorithm with the six well-known strategies to solve the problems. Tables 16 and 17 indicate that the performance of the discrete DE algorithm is improved for most test cases. For example, the average fitness function values obtained by the discrete DE algorithm with Strategy 3 are the same as those obtained by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms for small test cases (Test Case 1 through Test Case 6). Although the average fitness function values obtained by the discrete DE algorithm with the other strategies are no greater than those obtained by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms, the performance of the discrete DE algorithm is close to that of the discrete PSO, discrete ALPSO and discrete CCPSO algorithms. The discrete CCPSO algorithm still outperforms the discrete DE algorithm with any of the six strategies for the larger test cases (Test Case 7 and Test Case 8). Note that the average number of generations needed by the discrete DE algorithm to find the best solutions is significantly reduced for all strategies for most test cases. This indicates that the discrete DE algorithm works more efficiently with a larger population size. For Tables 16 and 17, the corresponding bar chart of the average fitness function values of the discrete DE algorithms with Strategies 1 through 6 (population size = 30) is shown in Figure 13. For Tables 16 and 17, the corresponding bar chart of the average number of generations of the discrete DE algorithms with Strategies 1 through 6 (population size = 30) is shown in Figure 14.
To study the convergence speed of the discrete PSO, discrete CLPSO, discrete ALPSO, discrete DE and discrete FA algorithms, we compare the convergence curves of simulation runs for several test cases. Figure 15 shows the convergence curves for Test Case 2 (NP = 10).
It indicates that the FA performs the worst for this simulation run. For this simulation run, the PSO algorithm converges the fastest. The CCPSO algorithm, the CLPSO algorithm, the ALPSO algorithm and the DE algorithm with Strategy 1, Strategy 3 and Strategy 6 also converge to the best fitness values very quickly. The slowest algorithms are the firefly algorithm and the DE algorithm with Strategy 4 and Strategy 5. For larger test cases, the convergence speed varies significantly depending on the algorithm used. Figure 21 shows the convergence curves for Test Case 7 (NP = 30). The two fastest algorithms are the CCPSO algorithm and the DE algorithm with Strategy 6. In this run, the firefly algorithm is the slowest one and fails to converge to the best fitness value. The variation in convergence speed is also significant for another larger test case, Test Case 8. Figure 22 shows the convergence curves for Test Case 8 (NP = 30). The fastest algorithm is the CCPSO algorithm.
In this run, the slowest algorithms are the PSO, CLPSO, ALPSO and FA algorithms. Note that the CLPSO, ALPSO and FA fail to converge to the best fitness value.
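The convergence comparisons above can be expressed as best-so-far curves. The following is a minimal sketch, with hypothetical helper names not taken from the paper, of how such curves and the "fails to converge" judgment might be computed from a run's per-generation best fitness values.

```python
def convergence_curve(per_gen_best):
    """Best-so-far fitness after each generation (maximization).
    This is what a convergence plot such as Figures 15-22 displays."""
    curve, best = [], float("-inf")
    for f in per_gen_best:
        best = max(best, f)
        curve.append(best)
    return curve

def generations_to_reach(curve, target):
    """First generation (1-based) at which the best-so-far curve
    reaches the target fitness, or None if it never does, i.e. the
    run failed to converge to the best fitness value."""
    for g, f in enumerate(curve, start=1):
        if f >= target:
            return g
    return None
```

Comparing `generations_to_reach` across algorithms on the same run corresponds to comparing convergence speed; a `None` result corresponds to a run that fails to converge.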
The results presented above indicate that the proposed discrete CCPSO algorithm outperforms the other metaheuristic algorithms. The superiority of the discrete CCPSO algorithm is due to its capability to balance exploration and exploitation in the evolution process. Santos and Xavier [15] studied the ridesharing problem in which money is considered as an incentive. The study by Watel and Faye [16] focuses on a taxi-sharing problem, called the Dial-a-Ride problem with money as an incentive (DARP-M). They studied the taxi-sharing problem to reduce the cost to passengers. Watel and Faye defined three variants of the DARP-M problem, max-DARP-M, max-1-DARP-M and 1-DARP-M, to analyze their complexity. The objective of max-DARP-M is to drive the maximum number of passengers under the assumption that an unlimited number of taxis is available. The max-1-DARP-M problem is used to find the maximum number of passengers that can be transported by a single taxi. The 1-DARP-M problem is used to decide whether it is possible to drive at least one passenger under the stated constraints. Although the max-DARP-M, max-1-DARP-M and 1-DARP-M problems can be used to analyze complexity, they do not reflect real application scenarios. In addition, the problem of optimizing the overall monetary incentive is not addressed in References [15] and [16]. In Reference [17], Hsieh considered a monetary incentive in ridesharing systems and proposed a PSO-based solution algorithm for it. However, there is still a lack of studies comparing other variants of metaheuristic algorithms for solving the monetary incentive optimization problem formulated in this study. The results presented in this study serve to compare the effectiveness of applying several different metaheuristic algorithms to solve the monetary incentive optimization problem.
The effectiveness of applying metaheuristic algorithms to solve a problem is assessed in terms of performance and efficiency. Performance is reflected in the average best fitness function values found by the applied metaheuristic algorithm, whereas efficiency is measured by the average number of generations needed to find the best fitness function values in the simulation runs. In this study, the comparative study of performance and efficiency was helpful for assessing the effectiveness of applying these metaheuristic approaches to solve the monetary incentive optimization problem.
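The two measures described above could be computed from simulation runs as sketched below; the function name and the run format are assumptions for illustration, not taken from the paper.

```python
def summarize_runs(runs):
    """Each run is a (best_fitness, generations_needed) pair.
    Performance is the average best fitness over the runs; efficiency
    is the average number of generations needed to find that best
    fitness. Returns (avg_fitness, avg_generations)."""
    n = len(runs)
    avg_fitness = sum(f for f, _ in runs) / n
    avg_generations = sum(g for _, g in runs) / n
    return avg_fitness, avg_generations
```

Tables such as Tables 12-17 would then report one (avg_fitness, avg_generations) pair per algorithm per test case.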

Conclusions
Motivated by the fact that the performance indices considered in most studies on ridesharing are not directly linked to monetary incentive, this study focused on how to deal with the monetary incentive issue in ridesharing systems to promote the ridesharing transportation mode. This study contributes to the literature by (1) proposing a monetary incentive performance index, (2) formulating a monetary incentive optimization problem, (3) developing solution algorithms based on several metaheuristic approaches to find the drivers and passengers with the highest monetary incentive for ridesharing and (4) providing a guideline for selecting a proper metaheuristic algorithm to solve the problem by comparing the effectiveness of several metaheuristic algorithms.
As the performance indicator for this problem is a highly non-linear and non-separable function and the decision variables are discrete, the optimization problem is a non-linear integer programming problem, and it is computationally complex to find its solutions. To cope with this computational complexity, metaheuristic approaches were adopted to solve the problem. Several discrete algorithms have been developed based on variants of the PSO, DE and firefly algorithms. To assess the effectiveness of these proposed algorithms, experiments on several test cases were conducted. The results show that the discrete CCPSO algorithm outperformed the variant discrete PSO algorithms, the DE algorithms and the firefly algorithm when the population size was 10 or 30. The average fitness function values obtained by applying the discrete PSO algorithm were the same as those obtained by the discrete CCPSO algorithm for small test cases when the population size was 10. However, the average number of generations needed by the discrete CCPSO algorithm was less than that of the discrete PSO algorithm for most test cases. This indicates that the discrete CCPSO algorithm outperformed the discrete PSO algorithm when the population size was 10. The effectiveness of the variants of the DE algorithm was significantly improved when the population size was increased to 30. The variants of the DE algorithm were competitive with the discrete CCPSO algorithm in terms of performance and efficiency when the population size was increased. However, the variants of the DE algorithm were more sensitive to the population size in comparison with the discrete CCPSO algorithm; the discrete CCPSO algorithm worked effectively even when the population size was 10. These results provide guidelines for selecting metaheuristic algorithms to solve the monetary incentive optimization problem.
The results indicate that, although many metaheuristic algorithms can be applied to develop solution algorithms for the non-linear integer programming problem, they perform poorly as the problem grows due to premature convergence and/or the complex structure of the constraints in the optimization problem. The cooperative coevolution approach copes with these shortcomings of metaheuristic algorithms.
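The cooperative coevolution idea behind the discrete CCPSO algorithm can be sketched as follows. This is a simplified illustration only: it splits the integer decision vector into groups and improves each group in turn against a shared context vector, using plain per-coordinate random search in place of the per-group PSO swarms that a full discrete CCPSO implementation would use. All names and parameters are assumptions for this sketch.

```python
import random

def cooperative_coevolution(fitness, dim, domain, n_groups=2,
                            trials_per_group=6, max_gen=60, seed=1):
    """Cooperative-coevolution sketch (maximization): the decision
    vector is split into n_groups groups of coordinates; each group is
    improved while the remaining coordinates stay frozen at the values
    of a shared context vector."""
    rng = random.Random(seed)
    context = [rng.choice(domain) for _ in range(dim)]
    # Simple static decomposition: coordinate d belongs to group d % n_groups.
    groups = [list(range(g, dim, n_groups)) for g in range(n_groups)]
    best_f = fitness(context)
    for _ in range(max_gen):
        for group in groups:
            # Random search restricted to this group's coordinates
            # (a stand-in for running a PSO swarm on the group).
            for _ in range(trials_per_group):
                trial = context[:]
                d = rng.choice(group)
                trial[d] = rng.choice(domain)
                f = fitness(trial)
                if f > best_f:          # greedy acceptance into the context
                    context, best_f = trial, f
    return context, best_f
```

Because each group is optimized against the best-known values of the other groups, the search keeps exploring within a group while exploiting progress made elsewhere, which is the balance the paragraph above attributes to the cooperative coevolution approach.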
An interesting future research direction is to develop new methods to study the sensitivity of metaheuristic algorithms with respect to algorithmic parameters such as the population size. As the results indicate that the discrete CCPSO algorithm can solve non-linear integer programming problems effectively, another future research direction is to study whether the proposed discrete CCPSO algorithm can effectively deal with an integer programming problem with a non-linear objective function and non-linear constraints. The first step in this direction is to identify an optimization problem with a non-linear objective function and non-linear constraints, to pave the way for extending the proposed discrete CCPSO algorithm to general non-linear integer programming problems. The second step is to develop a method for handling the non-linear constraints.