
Search Results (6)

Search Parameters:
Keywords = ride-hailing dispatch

18 pages, 1759 KB  
Article
DHDRDS: A Deep Reinforcement Learning-Based Ride-Hailing Dispatch System for Integrated Passenger–Parcel Transport
by Huanwen Ge, Xiangwang Hu and Ming Cheng
Sustainability 2025, 17(9), 4012; https://doi.org/10.3390/su17094012 - 29 Apr 2025
Cited by 2 | Viewed by 5079
Abstract
Urban transportation demands are growing rapidly. Concurrently, the sharing economy continues to expand. These dual trends establish ride-hailing dispatch as a critical research focus for building sustainable smart transportation systems. Current ride-hailing systems only serve passengers. However, they ignore an important opportunity: transporting packages. This limitation causes two issues: (1) wasted vehicle capacity in cities, and (2) extra carbon emissions from cars waiting idle. Our solution combines passenger rides with package delivery in real time. This dual-mode strategy achieves four benefits: (1) better matching of supply and demand, (2) 38% less empty driving, (3) higher vehicle usage rates, and (4) increased earnings for drivers in changing conditions. We built a Dynamic Heterogeneous Demand-aware Ride-hailing Dispatch System (DHDRDS) using deep reinforcement learning. It works by (a) managing both passenger and package requests on one platform and (b) allocating vehicles efficiently to reduce the environmental impact. An empirical validation confirms the developed framework’s superiority over conventional approaches across three critical dimensions: service efficiency, carbon footprint reduction, and driver profits. Specifically, DHDRDS achieves at least a 5.1% increase in driver profits and an 11.2% reduction in vehicle idle time compared to the baselines, while ensuring that the majority of customer waiting times are within the system threshold of 8 min. By minimizing redundant vehicle trips and optimizing fleet utilization, this research provides a novel solution for advancing sustainable urban mobility systems aligned with global carbon neutrality goals.
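The paper's dispatch logic is not reproduced on this page; as a rough sketch of the dual-mode idea only, the toy below greedily matches a mixed pool of passenger and parcel requests to idle vehicles, serving passengers first so their waits stay under the 8-minute threshold cited in the abstract. All names, the grid coordinates, and the Manhattan-distance cost are hypothetical, not DHDRDS itself.

```python
from dataclasses import dataclass

@dataclass
class Request:
    kind: str      # "passenger" or "parcel"
    pickup: tuple  # (x, y) grid coordinates
    wait: float    # minutes already waited

def dist(a, b):
    # Manhattan distance as a crude stand-in for road travel time
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(vehicles, requests, max_wait=8.0):
    """Greedy dual-mode matching: passengers before parcels, and
    passengers already near the max_wait threshold before the rest.
    Returns (vehicle_index, request) pairs, one per matched vehicle."""
    order = sorted(
        requests,
        key=lambda r: (r.kind != "passenger", -(r.wait >= max_wait), -r.wait),
    )
    free = set(range(len(vehicles)))
    plan = []
    for req in order:
        if not free:
            break
        nearest = min(free, key=lambda v: dist(vehicles[v], req.pickup))
        plan.append((nearest, req))
        free.remove(nearest)
    return plan
```

A real dispatcher would replace the greedy loop with the learned policy; the sketch only shows how heterogeneous demand can share one vehicle pool.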

24 pages, 21054 KB  
Article
Research on Order Allocation Strategies for Ride-Hailing Platforms Considering Passenger Order Cancellations During Order Overflow
by Yan Xia, Wuyong Qian and Chunyi Ji
Appl. Sci. 2025, 15(6), 3243; https://doi.org/10.3390/app15063243 - 16 Mar 2025
Cited by 2 | Viewed by 4498
Abstract
The rise of ride-hailing services has brought new riding experiences for passengers and exerted a profound impact on the traditional taxi market. To enhance patrol efficiency, increase revenue, and promote sustainable development in the taxi industry, traditional taxis have actively undergone transformation and adopted an integrated “online-offline” operating model, combining online order acceptance with offline order-taking. Meanwhile, a considerable number of orders are canceled by passengers after being accepted, leading to a waste of platform capacity, reduced order dispatch efficiency, and additional empty-running costs for drivers. This issue is particularly prominent during peak hours with order overflow. Based on the changes in taxi order acceptance during order overflow, this paper constructs a model for passenger order cancellation probability during peak hours, examines the relationship between regional order density and the proportion of offline taxi order acceptance, discusses the impact of regional order density changes on the passenger order cancellation probability and stakeholder returns, and proposes optimal order dispatch strategies for ride-hailing platforms with different order densities. Additionally, it analyzes more optimal taxi operating models under varying arrival states. The research findings provide more scientific and efficient operational recommendations for ride-hailing platforms and taxis, promoting sustainable development in the entire travel market and thereby contributing to a greener and more efficient travel environment.
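The paper's cancellation-probability model is calibrated to peak-hour overflow data not reproduced here. A common textbook assumption, shown below purely as a sketch, treats passenger patience as exponentially distributed, so cancellation probability rises with the expected pick-up wait; the `patience` parameter, fare, and deadhead cost are all hypothetical.

```python
import math

def cancel_prob(expected_wait_min, patience=5.0):
    # Exponential-patience assumption (hypothetical form, not the paper's
    # calibrated model): longer expected waits during order overflow make
    # cancellation after acceptance more likely.
    return 1.0 - math.exp(-expected_wait_min / patience)

def expected_return(fare, deadhead_cost, expected_wait_min):
    # Platform's expected return on dispatching one order: the fare is
    # earned only if the passenger does not cancel, while the driver's
    # empty-running (deadhead) cost is paid either way.
    p = cancel_prob(expected_wait_min)
    return (1.0 - p) * fare - deadhead_cost
```

Under this form a dispatcher would decline assignments whose expected return is negative, i.e., whose pick-up wait makes cancellation so likely that it no longer covers the empty-running cost — the trade-off the paper optimizes across order densities.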

22 pages, 7651 KB  
Article
PROLIFIC: Deep Reinforcement Learning for Efficient EV Fleet Scheduling and Charging
by Junchi Ma, Yuan Zhang, Zongtao Duan and Lei Tang
Sustainability 2023, 15(18), 13553; https://doi.org/10.3390/su151813553 - 11 Sep 2023
Cited by 9 | Viewed by 3247
Abstract
Electric vehicles (EVs) are becoming increasingly popular in ride-hailing services, but their slow charging speed negatively affects service efficiency. To address this challenge, we propose PROLIFIC, a deep reinforcement learning-based approach for efficient EV scheduling and charging in ride-hailing services. The objective of PROLIFIC is to minimize passenger waiting time and charging time cost. PROLIFIC formulates the EV scheduling problem as a Markov decision process and integrates a distributed charging scheduling management model and a centralized order dispatching model. By using a distributed deep Q-network, the agents can share charging and EV supply information to make efficient interactions between charging and dispatch decisions. This approach mitigates the curse of dimensionality and improves the training efficiency of the neural network. The proposed approach is validated in three typical scenarios with different spatiotemporal distributions of passenger orders, and the results demonstrate that PROLIFIC significantly reduces the passenger waiting time and charging time cost in all three scenarios compared to baseline algorithms.
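PROLIFIC itself uses a distributed deep Q-network; as a much-reduced sketch of the same charge-or-serve trade-off, the tabular Q-learning toy below learns that a low-battery EV should charge before taking orders. The state space, rewards, and transitions are invented for illustration, not taken from the paper.

```python
import random

# Toy MDP: state = battery bucket (0 low, 1 mid, 2 high);
# action 0 = charge, action 1 = serve a ride order.
def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(3)]
    for _ in range(episodes):
        s = rng.randrange(3)
        for _ in range(20):
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] > Q[s][0])
            if a == 1 and s == 0:
                r, s2 = -5.0, 0              # serving on empty: stranded, long detour
            elif a == 1:
                r, s2 = 2.0, s - 1           # fare earned, one battery bucket drained
            else:
                r, s2 = -0.5, min(2, s + 1)  # charging costs time, restores battery
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

With these rewards the learned policy charges at low battery and serves at high battery — a one-agent caricature of the charging/dispatch interaction the paper coordinates across many agents.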

21 pages, 7924 KB  
Article
Optimization of On-Demand Shared Autonomous Vehicle Deployments Utilizing Reinforcement Learning
by Karina Meneses-Cime, Bilin Aksun Guvenc and Levent Guvenc
Sensors 2022, 22(21), 8317; https://doi.org/10.3390/s22218317 - 29 Oct 2022
Cited by 11 | Viewed by 3595
Abstract
Ride-hailed shared autonomous vehicles (SAVs) have emerged recently as an economically feasible way of introducing autonomous driving technologies while serving the mobility needs of under-served communities. There has also been corresponding research on optimizing the operation of these SAVs. However, the current state of the art in this area treats very simple networks, neglects realistic surrounding traffic, and is of limited use for planning SAV service deployments. In contrast, this paper utilizes a recent autonomous shuttle deployment site in Columbus, Ohio, as a basis for mobility studies and the optimization of SAV fleet deployment. Furthermore, this paper creates an SAV dispatcher based on reinforcement learning (RL) to minimize passenger wait time and maximize the number of passengers served. The created taxi dispatcher is then simulated in a realistic scenario while avoiding generalization or over-fitting to the area. It is found that an RL-aided taxi dispatcher can greatly improve the performance of an SAV deployment, increasing the overall number of trips completed and passengers served while decreasing passenger wait time.
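The abstract's two objectives — more passengers served, less waiting — are typically folded into one scalar reward when training such a dispatcher. The weights below are hypothetical, not taken from the paper.

```python
def step_reward(pickups, total_wait_min, w_pickup=10.0, w_wait=1.0):
    # Scalar reward per simulation step: reward completed pickups and
    # penalize accumulated passenger waiting minutes (weights hypothetical;
    # their ratio sets the trade-off the RL dispatcher learns to strike).
    return w_pickup * pickups - w_wait * total_wait_min
```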
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)

21 pages, 4921 KB  
Article
Understanding the Spatiotemporal Variation of High-Efficiency Ride-Hailing Orders: A Case Study of Haikou, China
by Mingyang Du, Xuefeng Li, Mei-Po Kwan, Jingzong Yang and Qiyang Liu
ISPRS Int. J. Geo-Inf. 2022, 11(1), 42; https://doi.org/10.3390/ijgi11010042 - 9 Jan 2022
Cited by 13 | Viewed by 3861
Abstract
Understanding the spatiotemporal variation of high-efficiency ride-hailing orders (HROs) helps transportation network companies (TNCs) balance drivers’ income through reasonable order dispatch and alleviate the imbalance between supply and demand by improving the pricing mechanism, so as to promote the sustainable and healthy development of the ride-hailing industry and urban transportation. From the perspective of TNCs’ order management, this study investigates the spatiotemporal variation of HROs and common ride-hailing orders (CROs) using the trip data of Didi Chuxing in Haikou, China. Ordinary least squares (OLS) and geographically weighted regression (GWR) models are established to examine the factors that affect the densities of HROs and CROs during different time periods (morning, afternoon, evening and night), while considering various built-environment variables. The OLS models show that factors including road density, average travel time rate, companies and enterprises, and transportation facilities have significant impacts on HROs and CROs for most periods. The results of the GWR models are consistent with the global regression results and show the local effects of the built environment on HROs and CROs in different regions.
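The study's two-stage approach can be sketched as follows, on synthetic data rather than the Haikou dataset (variable names, coefficients, and the bandwidth are hypothetical): fit a global OLS model of order density on built-environment covariates, then refit locally with a Gaussian distance kernel — the core of GWR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))   # zone centroids (synthetic)
road_density = rng.uniform(0, 10, n)
poi_count = rng.uniform(0, 50, n)
# Synthetic HRO density with known coefficients plus noise
y = 1.5 + 0.8 * road_density + 0.05 * poi_count + rng.normal(0, 0.1, n)

# Global OLS fit: columns are intercept, road density, POI count
X = np.column_stack([np.ones(n), road_density, poi_count])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def gwr_beta(x0, bandwidth=2.0):
    # GWR at location x0: weighted least squares where observations are
    # down-weighted by a Gaussian kernel on distance, so the fitted
    # coefficients can vary across the study area.
    w = np.exp(-(np.linalg.norm(coords - x0, axis=1) / bandwidth) ** 2)
    Xw = X * w[:, None]                      # X^T W X and X^T W y below
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

On this synthetic data the local coefficients match the global ones, since the true coefficients are constant; on real data the spread of `gwr_beta` across locations is exactly the "local effects" the paper reports.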

19 pages, 6772 KB  
Article
Reinforcement Learning for Optimizing Driving Policies on Cruising Taxis Services
by Kun Jin, Wei Wang, Xuedong Hua and Wei Zhou
Sustainability 2020, 12(21), 8883; https://doi.org/10.3390/su12218883 - 26 Oct 2020
Cited by 6 | Viewed by 3179
Abstract
As a key element of urban transportation, taxi services provide significant convenience and comfort for residents’ travel. In practice, however, they operate with limited efficiency. Previous research mainly optimized policies through order dispatch in ride-hailing services, which cannot be applied to cruising taxi services. This paper developed a reinforcement learning (RL) framework to optimize driving policies for cruising taxi services. First, we formulated drivers’ behaviours as a Markov decision process (MDP), considering the long-run influence of each action. The RL framework, using dynamic programming and data expansion, was employed to calculate the state-action value function. Following the value function, drivers can determine the best choice and quantify the expected future reward in a particular state. Using historic order data from Chengdu, we analysed the value function’s spatial distribution and demonstrated how the model can optimize driving policies. Finally, a realistic simulation of the on-demand platform was built. Compared with other benchmark methods, the results verified that the new model performs better, increasing total revenue and answer rate by up to 4.8% and 6.2%, respectively, and decreasing waiting time by up to 27.27%.
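The paper computes its state-action value function with dynamic programming; the toy value iteration below captures that idea for an empty cruiser choosing which zone to head to next. The zones, expected fares, and adjacency are invented for illustration.

```python
def value_iteration(fares, neighbors, gamma=0.9, iters=200):
    # V[z]: expected discounted future reward of cruising empty in zone z,
    # where heading to zone n yields that zone's expected fare fares[n].
    V = {z: 0.0 for z in fares}
    for _ in range(iters):
        V = {z: max(fares[n] + gamma * V[n] for n in neighbors[z]) for z in fares}
    return V

def best_move(z, fares, neighbors, V, gamma=0.9):
    # Greedy policy w.r.t. the value function: the driver's "best choice"
    # at state z, trading immediate fare against future positioning.
    return max(neighbors[z], key=lambda n: fares[n] + gamma * V[n])
```

The interesting behaviour is exactly what the paper's spatial value maps show: a driver in a low-fare zone adjacent to a high-fare one is steered toward the latter because the value function prices in future rewards, not just the next trip.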
(This article belongs to the Section Sustainable Transportation)
