In this section, we conduct experiments by generating data for several test cases and then applying four discrete variants of the PSO algorithm, six discrete variants of the DE algorithm and the discrete firefly algorithm to find solutions for them. We first briefly introduce the data and parameters for the test cases and the metaheuristic algorithms, and then compare the algorithms based on the experimental results, which are summarized and analyzed in this section.

#### 5.1. Data and Parameters

The input data are created by first arbitrarily selecting a real geographical area. The locations of drivers and passengers are then randomly generated within the selected area. The procedure for creating input data is therefore general and can be applied to other real-world geographical areas. The test cases are generated based on a real geographical area in the central part of Taiwan. The data for each test case are represented by bids. The data (bids) for these test cases are available for download from:

To illustrate the elements of typical test case data, the details of a small example are introduced first.

An Example:

Consider a ridesharing system with one driver and four passengers. The origins and destinations of the driver and the passengers are listed in Table 9.

Table 10 shows the bid generated for Driver 1 by applying the bid generation procedure in Appendix II of Reference [31]. The bids generated for all passengers are shown in Table 11. Four discrete variants of the PSO algorithm, six discrete variants of the DE algorithm and the discrete firefly algorithm (FA) are applied to find solutions for this example. The parameters used for each metaheuristic algorithm in this study are as follows.

The parameters for the discrete CCPSO algorithm are: $DS = \{2, 5, 10\}$, ${\omega}_{1} = 0.5$, ${\omega}_{2} = 0.5$, $\theta = 1.0$, ${V}_{\mathrm{max}} = 4$ and $MAX\_GEN$ = 10,000.

The parameters for the discrete PSO algorithm are: $\omega = 0.4$, ${c}_{1} = 0.4$, ${c}_{2} = 0.6$, ${V}_{\mathrm{max}} = 4$ and $MAX\_GEN$ = 10,000.

The parameters for the firefly algorithm (FA) are: ${\beta}_{0} = 1.0$, $\gamma = 0.2$, $\alpha = 0.2$, ${V}_{\mathrm{max}} = 4$ and $MAX\_GEN$ = 10,000.

The parameters for the CLPSO algorithm are: $\omega = 0.4$, ${c}_{1} = 0.4$, ${c}_{2} = 0.6$, ${p}_{c} = 0.5$, ${V}_{\mathrm{max}} = 4$ and $MAX\_GEN$ = 10,000.

The parameters for the DE algorithm are:

For this example, all the above algorithms obtain the same solution: ${x}_{1,1}=1$, ${y}_{1,1}=1$, ${y}_{2,1}=0$, ${y}_{3,1}=0$, ${y}_{4,1}=0$. The solution indicates that Driver 1 will share a ride with Passenger 1 only to optimize the monetary incentive. The objective function value of this solution is 0.12.
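The structure of this solution can be illustrated with a short sketch. The decoding below is an assumption made purely for illustration: ${x}_{d,r}=1$ is read as driver $d$'s bid $r$ being selected, and ${y}_{p,r}=1$ as passenger $p$'s bid $r$ being selected; the paper's actual variable semantics may differ in detail.

```python
# Hypothetical decoding of the solution reported above (for illustration only).
# x[(d, r)] = 1: driver d's bid r is selected.
# y[(p, r)] = 1: passenger p's bid r is selected.
x = {(1, 1): 1}
y = {(1, 1): 1, (2, 1): 0, (3, 1): 0, (4, 1): 0}

selected_drivers = [d for (d, r), v in x.items() if v == 1]
matched_passengers = [p for (p, r), v in y.items() if v == 1]

print(selected_drivers)    # [1]
print(matched_passengers)  # [1] -> Driver 1 shares a ride with Passenger 1 only
```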

Figure 6 shows the results on Google Maps.

#### 5.2. Comparison of Different Metaheuristic Algorithms

Experiments for several test cases have been conducted to compare the different metaheuristic algorithms. The parameters used to run all the algorithms for Case 1 through Case 6 are the same as those used for the example in Section 5.1, except that the population size $NP$ is set to either 10 or 30. The parameters used for Case 7 and Case 8 are the same as those used for the example in Section 5.1, except that the maximum number of generations $MAX\_GEN$ is set to 50,000 and the population size $NP$ is set to either 10 or 30. The results are as follows.

By setting the population size $NP$ to 10 and applying the discrete PSO, discrete firefly (FA), discrete ALPSO and discrete CCPSO algorithms to solve the problems, we obtained the results in Table 12. The results indicate that the discrete CCPSO algorithm outperforms the discrete firefly and discrete ALPSO algorithms. Although the average fitness function values of the discrete PSO and discrete CCPSO algorithms are the same for the small test cases (Case 1 through Case 6), the average number of generations needed by the discrete CCPSO algorithm is smaller than that of the discrete PSO algorithm for most test cases. In particular, the discrete CCPSO algorithm outperforms the discrete PSO algorithm in terms of both the average fitness function values and the average number of generations needed for the larger test cases (Case 7 and Case 8). This indicates that the discrete CCPSO algorithm outperforms the discrete PSO algorithm for most test cases when the population size $NP$ is 10.

For Table 12, the corresponding bar chart of the average fitness function values of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 7.

For Table 12, the corresponding bar chart of the average number of generations of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 8.

By setting the population size $NP$ to 10 and applying the discrete DE algorithm with six well-known strategies to solve the problems, we obtained Table 13 and Table 14. Comparing Table 13 and Table 14 with Table 12 indicates that the discrete CCPSO algorithm outperforms the discrete DE algorithm for most test cases. The discrete DE algorithm performs as well as the discrete CCPSO algorithm only for Test Case 1. For Test Case 2, only two DE strategies (Strategy 1 and Strategy 3) perform as well as the discrete CCPSO algorithm. The discrete CCPSO algorithm outperforms the discrete DE algorithm for Test Case 3 through Test Case 6. This indicates that the discrete CCPSO algorithm outperforms the discrete DE algorithm when the population size $NP$ is 10.

For Table 13 and Table 14, the corresponding bar chart of the average fitness function values of the discrete DE algorithm with Strategy 1 through Strategy 6 (population size = 10) is shown in Figure 9.

For Table 13 and Table 14, the corresponding bar chart of the average number of generations of the discrete DE algorithm with Strategy 1 through Strategy 6 (population size = 10) is shown in Figure 10.

We obtained Table 15 by applying the discrete PSO, discrete firefly, discrete CLPSO, discrete ALPSO and discrete CCPSO algorithms to solve the problems with population size $NP=30$. Table 15 indicates that the average fitness function values found by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms are the same for the small test cases (Test Case 1 through Test Case 6), and that these three algorithms outperform the discrete FA and discrete CLPSO algorithms for those cases. The discrete CCPSO algorithm outperforms the discrete PSO, discrete FA, discrete CLPSO and discrete ALPSO algorithms for the larger test cases (Test Case 7 and Test Case 8). The discrete CCPSO algorithm not only outperforms the other algorithms in terms of the average fitness function values found; the average number of generations it needs to find the best solutions is also smaller than those of the discrete PSO and discrete ALPSO algorithms for most test cases. This indicates that the discrete CCPSO algorithm is more efficient than the discrete PSO, discrete FA, discrete CLPSO and discrete ALPSO algorithms for most test cases when the population size $NP$ is 30.

For Table 15, the corresponding bar chart of the average fitness function values of the discrete PSO, CCPSO, CLPSO, ALPSO and firefly (FA) algorithms is shown in Figure 11.

For Table 15, the corresponding bar chart of the average number of generations of the discrete PSO, CCPSO, CLPSO, ALPSO and FA algorithms is shown in Figure 12.

We obtained Table 16 and Table 17 by setting the population size $NP$ to 30 and applying the discrete DE algorithm with six well-known strategies to solve the problems. Table 16 and Table 17 indicate that the performance of the discrete DE algorithm improves for most test cases. For example, the average fitness function values obtained by the discrete DE algorithm with Strategy 3 are the same as those obtained by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms for the small test cases (Test Case 1 through Test Case 6). Although the average fitness function values obtained by the discrete DE algorithm with the other strategies are no greater than those obtained by the discrete PSO, discrete ALPSO and discrete CCPSO algorithms, the performance of the discrete DE algorithm is close to that of these algorithms. The discrete CCPSO algorithm still outperforms the discrete DE algorithm with any of the six strategies for the larger test cases (Test Case 7 and Test Case 8). Note that the average number of generations needed by the discrete DE algorithm to find the best solutions is significantly reduced for all strategies for most test cases. This indicates that the discrete DE algorithm works more efficiently with a larger population size.
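To give a rough sense of how DE strategies differ, the sketch below implements two classic mutation strategies in Storn and Price's convention, DE/rand/1 and DE/best/1, together with binomial crossover. This is a generic continuous-space sketch under our own naming; the paper's discrete DE algorithm, its exact strategy numbering and its discretization/repair step are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, best, i, F=0.5, strategy="rand/1"):
    """Build a DE mutant vector for individual i using one of two
    classic mutation strategies."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    if strategy == "rand/1":      # v = x_r1 + F * (x_r2 - x_r3)
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "best/1":      # v = x_best + F * (x_r1 - x_r2)
        return best + F * (pop[r1] - pop[r2])
    raise ValueError(f"unknown strategy: {strategy}")

def crossover(target, mutant, CR=0.9):
    """Binomial crossover: take each mutant gene with probability CR,
    forcing at least one gene to come from the mutant."""
    D = len(target)
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True
    return np.where(mask, mutant, target)

pop = rng.random((10, 5))         # NP = 10 individuals, 5 dimensions
best = pop[0]                     # stand-in for the current best individual
trial = crossover(pop[3], mutate(pop, best, 3, strategy="rand/1"))
print(trial.shape)                # (5,)
```

In a full DE generation, the trial vector replaces the target individual only if its fitness is at least as good, which is what drives the population toward better solutions.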

For Table 16 and Table 17, the corresponding bar chart of the average fitness function values of the discrete DE algorithm with Strategy 1 through Strategy 6 (population size = 30) is shown in Figure 13.

For Table 16 and Table 17, the corresponding bar chart of the average number of generations of the discrete DE algorithm with Strategy 1 through Strategy 6 (population size = 30) is shown in Figure 14.

To study the convergence speed of the discrete PSO, discrete CLPSO, discrete ALPSO, discrete DE and discrete FA algorithms, we compare the convergence curves of simulation runs for several test cases.
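A convergence curve of this kind records the best fitness value found so far at each generation. The bookkeeping can be sketched in an algorithm-agnostic way as below; the fitness function and population update used here are toy placeholders, not the paper's.

```python
import random

def track_convergence(step, fitness, init_pop, max_gen=100):
    """Run a generic population-based search and record the best-so-far
    fitness at every generation (the data behind a convergence curve)."""
    pop = init_pop
    best = max(fitness(x) for x in pop)
    curve = []
    for _ in range(max_gen):
        pop = step(pop)                            # one generation of any metaheuristic
        best = max(best, max(fitness(x) for x in pop))
        curve.append(best)                         # best-so-far, hence non-decreasing
    return curve

# Toy usage: maximize -(x - 3)^2 with random perturbations as the "algorithm".
random.seed(1)
fitness = lambda x: -(x - 3.0) ** 2
step = lambda pop: [x + random.uniform(-0.5, 0.5) for x in pop]
curve = track_convergence(step, fitness, [random.uniform(-5, 5) for _ in range(10)])
print(len(curve))  # 100
```

Plotting `curve` against the generation index yields a curve like those in Figures 15 through 22; a faster-converging algorithm reaches its plateau at an earlier generation.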

Figure 15 shows the convergence curves for Test Case 2 ($NP$ = 10). It indicates that the FA performs the worst in this simulation run, while the PSO algorithm converges the fastest. The CCPSO, CLPSO and ALPSO algorithms and the DE algorithm with Strategies 1, 3 and 6 also converge to the best fitness values very quickly. The slowest are the firefly algorithm and the DE algorithm with Strategies 4 and 5.

Figure 16 shows the convergence curves for Test Case 5 ($NP$ = 10). Again, the FA is the slowest among all the algorithms in this simulation run.

Figure 17 shows the convergence curves for Test Case 7 ($NP$ = 10). The FA and CLPSO algorithms are the two slowest in terms of convergence rate.

Figure 18 shows the convergence curves for Test Case 8 ($NP$ = 10). The CCPSO algorithm and the DE algorithm with Strategy 1 are the fastest to converge to the best fitness values; all the other algorithms fail to converge to the best values.

Figure 19 shows the convergence curves for Test Case 2 ($NP$ = 30). All algorithms converge very quickly to the best solution in this simulation run.

Figure 20 shows the convergence curves for Test Case 5 ($NP$ = 30). All algorithms converge very quickly to the best solution in this simulation run.

For larger test cases, depending on the algorithm used, the convergence speed varies significantly.

Figure 21 shows the convergence curves for Test Case 7 ($NP$ = 30). The two fastest algorithms are the CCPSO algorithm and the DE algorithm with Strategy 6. In this run, the firefly algorithm is the slowest and fails to converge to the best fitness value.

The variation in convergence speed is significant for another larger test case, Test Case 8. Figure 22 shows the convergence curves for Test Case 8 ($NP$ = 30). The fastest algorithm is the CCPSO algorithm. In this run, the slowest algorithms are the PSO, CLPSO, ALPSO and FA; the CLPSO, ALPSO and FA fail to converge to the best fitness value.

The results presented above indicate that the proposed discrete CCPSO algorithm outperforms the other metaheuristic algorithms. The superiority of the discrete CCPSO algorithm is due to its capability to balance exploration and exploitation in the evolution process. According to (11), $S{W}_{s}.{z}_{id}$ is updated with a Gaussian random variable $\mathrm{N}({\omega}_{1}S{W}_{s}.{z}_{i}^{p}+{\omega}_{2}S{W}_{s}.\widehat{z},{\left(\theta \left|S{W}_{s}.{z}_{i}^{p}-S{W}_{s}.\widehat{z}\right|\right)}^{2})$. The exploration and exploitation characteristics of the discrete CCPSO algorithm therefore depend strongly on the magnitude $\left|S{W}_{s}.{z}_{i}^{p}-S{W}_{s}.\widehat{z}\right|$. The balance works as follows. As long as the personal best of a particle differs from the global best, this magnitude, and hence the variance of the Gaussian random variable, is nonzero. If the magnitude is large, the variance is large, which makes it possible to search a larger region around ${\omega}_{1}S{W}_{s}.{z}_{i}^{p}+{\omega}_{2}S{W}_{s}.\widehat{z}$. If the personal best of a particle is close to the global best, the magnitude, and thus the variance, tends to be small, which confines the search to a very small region. Therefore, the exploration and exploitation characteristics of the discrete CCPSO algorithm are balanced automatically during the search through the magnitude $\left|S{W}_{s}.{z}_{i}^{p}-S{W}_{s}.\widehat{z}\right|$.
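The update in (11) can be sketched component-wise as follows: the new position is drawn from a Gaussian whose mean blends the personal and global bests and whose standard deviation is proportional to their distance (using ${\omega}_{1} = {\omega}_{2} = 0.5$ and $\theta = 1.0$ from Section 5.1). The rounding/repair step that maps the continuous sample back to a discrete position is omitted here, since it depends on the paper's encoding.

```python
import random

def ccpso_update(z_personal, z_global, w1=0.5, w2=0.5, theta=1.0):
    """Gaussian position update of (11), applied component-wise:
    z_new ~ N(w1*z_p + w2*z_hat, (theta * |z_p - z_hat|)^2)."""
    return [
        random.gauss(w1 * zp + w2 * zg, theta * abs(zp - zg))
        for zp, zg in zip(z_personal, z_global)
    ]

random.seed(0)
# Far from the global best: large |z_p - z_hat| -> large variance (exploration).
print(ccpso_update([0.0, 10.0], [8.0, 10.0]))
# Personal best equals global best: zero variance, so the sample equals the
# mean exactly (pure exploitation).
print(ccpso_update([5.0, 5.0], [5.0, 5.0]))  # [5.0, 5.0]
```

The two calls make the balance concrete: the first samples widely around the blended mean, while the second collapses to a point, mirroring the automatic exploration-to-exploitation transition described above.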

Although the subject of ridesharing/carpooling has been studied for over a decade, it is still not in widespread use. Motivated by this fact, several papers attempt to identify factors that contribute to ridesharing/carpooling [12,13,14]. Waerden et al. investigated factors that stimulate car drivers to switch to carpooling. They identified several attributes that may influence the attractiveness of carpooling, including flexibility/uncertainty in travel time, costs and the number of persons [13]. Their study indicates that cost- and time-related attributes are the most influential in carpooling. Delhomme and Gheorghiu used socio-demographic and transportation-accessibility data to study the motivational factors that differentiate carpoolers from non-carpoolers and to highlight the main determinants of carpooling [12]. Their study indicates that the main incentives for carpooling are financial gains, time savings and environmental protection. Shaheen et al. studied a special type of carpooling called casual carpooling, which benefits participants through access to a high-occupancy vehicle lane with tolling discounts. Their study indicates that monetary savings and time savings are the main motivations for participating in casual carpooling [14].

Santos and Xavier [15] studied a ridesharing problem in which money is considered as an incentive. The study by Watel and Faye [16] focuses on a taxi-sharing problem, called the Dial-a-Ride problem with money as an incentive (DARP-M), which aims to reduce the cost incurred by passengers. Watel and Faye defined three variants of the DARP-M problem to analyze its complexity: max-DARP-M, max-1-DARP-M and 1-DARP-M. The objective of max-DARP-M is to drive the maximum number of passengers under the assumption that an unlimited number of taxis is available. The max-1-DARP-M problem is to find the maximum number of passengers that can be transported by one taxi. The 1-DARP-M problem is to decide whether it is possible to drive at least one passenger under the stated constraints. Although these three problems can be used to analyze complexity, they do not reflect real application scenarios. In addition, the problem of optimizing the overall monetary incentive is not addressed in References [15] and [16]. In Reference [17], Hsieh considered a monetary incentive in ridesharing systems and proposed a PSO-based solution algorithm for it. However, there is still a lack of studies comparing different variants of metaheuristic algorithms for solving the monetary incentive optimization problem formulated in this study. The results presented here serve to compare the effectiveness of applying several different metaheuristic algorithms to this problem. The effectiveness of applying a metaheuristic algorithm to a problem is assessed by its performance and efficiency: performance is reflected in the average fitness function values found by the algorithm, whereas efficiency is measured by the average number of generations needed to find the best fitness function values in the simulation runs. The comparative study of performance and efficiency is therefore helpful for assessing the effectiveness of applying these metaheuristic approaches to the monetary incentive optimization problem.