Article

Construction of a Driving Route Inference Model Integrating Road Network Topology and Traffic Dynamics

College of Resources and Environment, Chengdu University of Information Technology, Chengdu 610225, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2026, 15(2), 84; https://doi.org/10.3390/ijgi15020084
Submission received: 8 December 2025 / Revised: 30 January 2026 / Accepted: 13 February 2026 / Published: 16 February 2026
(This article belongs to the Topic Geospatial AI: Systems, Model, Methods, and Applications)

Abstract

With the advancement of intelligent transportation systems (ITSs), the deployment of urban surveillance cameras has reached hundreds of thousands or even millions. However, the small-field-of-view surveillance cameras that dominate large-scale traffic areas are insufficient to achieve full coverage of urban traffic zones. This study proposes a traffic information-based driving route inference method to clarify target vehicles’ paths in zones with monitoring blind spots and to enhance the collaborative capability between surveillance cameras and traffic networks. First, this study maps traffic roads containing monitoring blind spots, together with their topologies, into Bayesian network (BN) structures. The factors influencing the target vehicle’s path can then be analyzed, extracted, and quantified from the known data in the traffic network. Building on the traditional BN model, a weight analysis method is used to estimate the weight coefficients of these influencing factors, thereby realizing driving route inference based on traffic networks. Experiments were conducted in Xinbei District, Changzhou City, Jiangsu Province, China. The results verify that the proposed method can accurately infer and reconstruct driving routes through monitoring blind zones. The method provides theoretical support for analyzing driving directions at complex traffic intersections and for inferring driving routes in traffic network areas with monitoring blind spots.

1. Introduction

With the rapid advancement of intelligent transportation systems (ITSs), the deployment volume of urban surveillance cameras has surged to hundreds of thousands or even millions, forming an enormous-scale monitoring infrastructure that serves as the core support for traffic monitoring and public security management in modern cities. Despite the large-scale popularization of such devices, the dominant small-field-of-view surveillance cameras in large-scale traffic areas are inherently insufficient to achieve full coverage of urban traffic zones. The restricted field-of-view of individual devices creates pervasive and significant blind zones, which severely hinder the ability to conduct comprehensive and continuous vehicle tracking across large-scale transportation networks.
This unresolved coverage gap constitutes a critical research bottleneck in the field of intelligent transportation and traffic management, rather than a trivial technical limitation. Existing studies have made progress in vehicle trajectory analysis and traffic monitoring, but they often overlook the practical predicament of incomplete monitoring coverage caused by small-field-of-view cameras, a core issue that renders most trajectory-based applications unfeasible in real-world scenarios. Specifically, for key applications ranging from real-time traffic flow analysis, congestion early warning, and traffic accident traceability to law enforcement for traffic violations, reconstructing a complete vehicle trajectory (especially in unobserved road segments) is not only essential but also a prerequisite for ensuring the reliability and effectiveness of these tasks. The failure to address this blind zone problem directly restricts the translation of ITS technological advancements into practical improvements in urban traffic governance, making it imperative to develop targeted solutions. This study thus focuses on this critical gap, with the primary motivation of overcoming the limitations of incomplete monitoring coverage to unlock the full potential of ITSs in smart city traffic management.
To deduce continuous driver path selection, researchers have proposed various methods. Kuge proposed an HMM-based driving behavior recognition method to infer intentions, such as lane changing and deceleration [1]. Brakatsoulas developed a road network-matching algorithm that compared actual trajectory data with candidate routes based on curve similarity, selecting the route with the highest similarity as the one actually adopted by the driver [2]. Subsequently, probabilistic map-matching methods gradually became the mainstream. This shift reflects the increasing recognition that uncertainty arising from noisy sensing data and incomplete observations must be explicitly modeled in route inference. For instance, the Hidden Markov Model framework proposed by Newson can determine the most likely route under noisy and low-sampling conditions [3]. Lou constructed a global candidate graph by incorporating spatiotemporal constraints and road topological information, enabling robust matching of low-sampling trajectories [4]. These early studies primarily focused on recovering driver routes or intentions from sparse observations, laying the foundation for subsequent probabilistic path inference research.
Building upon early route inference research, subsequent studies have further advanced the modeling of uncertainty, dynamic behaviors, and complex traffic environments. Gindele proposed a probabilistic model for estimating driver behaviors and vehicle trajectories in traffic environments, enabling the incorporation of driver-related uncertainty into behavior and trajectory inference [5]. Bierlaire proposed a shortest path approach that identified candidate routes via map matching and assigned higher probabilities to shorter routes to reflect optimal route-seeking behavior [6]. Houenou presented an innovative route prediction framework that combined a constant yaw rate-acceleration motion model with driving intention recognition for route inference [7]. However, most of these approaches were designed for relatively structured traffic scenarios and exhibited limited robustness when confronted with complex traffic interactions or highly dynamic environments. Schreier developed a Bayesian, maneuver-based approach for long-term trajectory prediction and criticality assessment for driver assistance systems, explicitly modeling driving maneuvers and decision-making processes [8]. From a methodological perspective, Bayesian networks—especially dynamic Bayesian networks—have been widely recognized as an effective probabilistic framework for modeling driver behavior under uncertainty due to their ability to integrate heterogeneous observations, latent behavioral states, and temporal dependencies [9,10]. Lefèvre indicated that balancing model robustness and real-time performance in multi-target scenarios was often challenging, and uncertainty in complex environments remained a key bottleneck for route inference [11]. Accordingly, Bayesian and maneuver-based models were introduced to better capture behavioral uncertainty and decision-making processes over longer temporal horizons. 
Xie developed a driving behavior prediction method based on the interactive multiple model filter, generating final driving behavior prediction results by fusing maneuver behavior models and physical models [12]. Qiao developed a route prediction algorithm based on Prefix-Projection, which dynamically mapped historical movement patterns to future moments through the Prefix-Projection framework to realize efficient prediction of continuous movement routes [13]. Ringhand investigated the effects of complex traffic situations on route choice behaviour and driver stress in residential areas, demonstrating that traffic environment complexity can significantly influence driver psychological states and route selection [14]. These findings indicate that driver route selection is not solely governed by kinematic or topological factors but is also influenced by psychological and situational conditions. From a methodological perspective, these studies not only enriched the methodologies for driving intention inference but also provided new insights into analyzing driver psychological changes and behavioral decisions in complex traffic environments.
Recent research trends, driven by technological advances, have shifted toward multi-source data fusion, advanced computational models, and the incorporation of psychological and environmental factors. Li proposed a multi-modal feature fusion framework from the perspectives of radar data and video features to address information loss of single data sources in complex scenarios [15]. Wu constructed a multi-source fusion prediction model based on IoV data, achieving high-precision prediction of future vehicle routes by integrating traffic flow density, real-time speed, and intersection signal characteristics [16]. Jiang proposed a dynamic BN model integrating driving psychology, operational behavior, and vehicle dynamics to infer instantaneous driver behavior intentions [17]. Katariya proposed the DeepTrack deep learning algorithm, which leveraged a lightweight convolutional neural network to achieve real-time prediction of driving intentions [18]. Liu and Meidani proposed an end-to-end heterogeneous graph neural network framework for traffic assignment, explicitly modeling heterogeneous road network structures [19]. Xu noted that large language models demonstrated potential in scene semantic understanding and temporal reasoning, which might become a new direction for future trajectory prediction and driving intention modeling [20]. Liu and Meidani further developed a multi-view heterogeneous graph attention network to address multi-class traffic assignment by jointly modeling different vehicle classes and traffic states [21]. Despite these advances, most existing methods implicitly assume sufficient sensing coverage and overlook the trajectory reconstruction challenges caused by monitoring blind zones in urban street environments. Zhang integrated individual preferences for path choices into the HMM framework, comprehensively modeling distance, direction, semantic road attributes, and driving behavior factors to enhance personalized matching accuracy [22]. 
Meanwhile, a recent review on trajectory prediction and driving intention recognition provides directional references for future research in this field [23]. Overall, existing research has made significant progress in route prediction, intention recognition, and multi-source fusion. However, few studies have explicitly targeted the trajectory reconstruction problem in urban-street monitoring blind zones arising from small-field-of-view cameras, a critical gap closely linked to the aforementioned coverage bottleneck. Moreover, the quantitative analysis of traffic network structural characteristics and real-time traffic flow dynamics remains inadequate, and the weight relationship between driving psychology and environmental factors lacks systematic modeling. These deficiencies further hinder the practical application of existing methods (including advanced GNN-based models) in resolving blind-zone trajectory inference for urban streets, reinforcing the necessity of this study.
Accordingly, the team’s previous study proposed a novel framework by formalizing the driving route inference problem as a weighted Bayesian network (BN) [24]. The key hypothesis was that driving paths selected within blind zones follow probabilistic patterns shaped by measurable traffic network characteristics and driver behavioral tendencies. The framework encompasses three core components: (1) Topology-aware modeling. We established a direct mapping between physical traffic networks (including blind zones) and BN structures, preserving spatial relationships to enable computationally efficient route inference. (2) Quantification of influencing factors. This study identified four critical factors—ranging from road width to intersection complexity—and assigned weights to each based on their relative importance. (3) Conditional probability estimation. This study estimated the conditional probabilities of road segments within the traffic network to quantify the likelihood of drivers choosing specific paths. Building on this earlier work, the present study introduces a new influencing factor (the local motion trajectory) to construct a new Bayesian network model. By decoding the latent relationships between infrastructure design and driver decision-making, this study provides transportation planners with a practical tool to evaluate how these factors shape traffic distribution, a capability critical for advancing smart city development.
The remainder of this study is structured as follows: Section 2 elaborates on the formulation of our weighted Bayesian network (BN) and the corresponding factor-weighting methodology, providing explicit mathematical definitions and operational steps. Section 3 presents a comprehensive set of experimental results, alongside systematic comparative analyses with state-of-the-art methods to verify the effectiveness and superiority of our framework. Finally, Section 4 offers in-depth discussions on the theoretical implications and practical limitations of this research and concludes with key findings and future research directions.

2. Materials and Methods

2.1. Driving Route Inference Method

A BN, also known as a belief network, is a probabilistic model based on causal reasoning. The network represents dependency relationships among random variables using a directed acyclic graph (DAG) structure, making it highly suitable for the expression and inference of uncertain knowledge. In a DAG, nodes denote random variables, directed edges between nodes signify the relationships among these variables, and the strength of dependencies is quantified by conditional probabilities. Furthermore, BNs enable the inference and prediction of specific events by integrating prior knowledge with observed data.
This study conceptualizes different road segments in a traffic network as BN nodes and uses the direction of traffic flow to define the sequential relationship between connected nodes. Along the driving direction, let x1 and x2 represent the nodes of the preceding and subsequent road segments, respectively; x1 is the parent node of x2, and x2 is the child node of x1. A conditional probability P(x2|x1) exists between each child and its parent node, representing the probability that the target vehicle drives from road segment x1 to road segment x2. Furthermore, the joint probability distribution of the entire BN is the product of the conditional probabilities of all nodes (i.e., road segments) that the target vehicle passes through.
Figure 1a illustrates a simple traffic network, where S and E denote the start and end points of the target vehicle, respectively, and the numbers 01 to 06 are road segment IDs. Figure 1b exhibits the DAG extracted from the traffic zone in Figure 1a, which serves as the corresponding BN structure. Suppose the conditional probabilities between adjacent road segments are as follows: the driving probability from segment 01 to segment 02 is P(02|01) = 60%, from segment 01 to segment 03 is P(03|01) = 40%, from segment 02 to segment 04 is P(04|02) = 100%, …, and from segment 05 to segment 06 is P(06|05) = 100%. Based on these conditional probabilities, the joint probability of the driving route 01 → 02 → 04 → 06 is P(02|01) × P(04|02) × P(06|04) = 60%, while that of the route 01 → 03 → 05 → 06 is P(03|01) × P(05|03) × P(06|05) = 40%. Since the joint probability of 01 → 02 → 04 → 06 is higher, the BN infers that the vehicle most likely traveled along this route.
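The worked example above can be sketched in a few lines of Python. The conditional probabilities are those given in the text for Figure 1; segment IDs are represented as strings:

```python
# Conditional probabilities between adjacent road segments
# (values from the Figure 1 example in the text).
cond_prob = {
    ("01", "02"): 0.60, ("01", "03"): 0.40,
    ("02", "04"): 1.00, ("03", "05"): 1.00,
    ("04", "06"): 1.00, ("05", "06"): 1.00,
}

def route_joint_probability(route):
    """Joint probability of a route: the product of the conditional
    probabilities over consecutive segment pairs along the path."""
    p = 1.0
    for parent, child in zip(route, route[1:]):
        p *= cond_prob[(parent, child)]
    return p

routes = [["01", "02", "04", "06"], ["01", "03", "05", "06"]]
best = max(routes, key=route_joint_probability)  # the inferred route
```

Here `route_joint_probability` returns 0.6 for the upper route and 0.4 for the lower one, so the upper route 01 → 02 → 04 → 06 is inferred.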
When inferring the driving route of target vehicles using BNs, the calculation of the conditional probabilities between different road segments remains a key focus and major challenge in traffic network analysis. This study extracts and analyzes the influencing factors that affect driving intentions and constructs a conditional probability estimation model between different road segments based on BNs. The vehicle route inference in blind zones is realized through the selection of high-probability paths. The proposed method primarily involves the extraction and quantification of the influencing factors and the estimation of the conditional probabilities.

2.1.1. Quantification of the Influencing Factors

This study identifies the key factors influencing driving road selection by leveraging ITS data—including surveillance camera data, road attributes, and traffic congestion status—and by integrating driver subjective factors such as driving habits and preferences. These factors include the local motion trajectory captured by surveillance cameras, road grade, congestion level, motion vector, and vehicle density, among others.
(1)
Local Motion Trajectory
A large number of surveillance cameras have been installed in ITSs, and the instantaneous driving direction of the target vehicle at a monitoring point can be obtained from the surveillance video. Because 2D maps carry spatial positions and directions, mapping the instantaneous video onto a 2D map allows the driving direction of the target vehicle to be identified, which in turn determines whether the target vehicle is entering or exiting the road segment where the surveillance camera is installed. When the target vehicle and its trajectory are identified in the instantaneous video, the local motion trajectory value for that road segment is set to one; otherwise, it is set to zero.
(2)
Road Grade
The authoritative planning document Planning and Design of Urban Road Traffic specifies that road grades are classified by road width. A high road grade corresponds to a wide road segment, which provides enhanced safety and a more comfortable driving experience; consequently, drivers prefer road segments with high road grades.
This study selects the width Wi of a road segment as the basis for determining the road grade. Equation (1) normalizes the road width, yielding the normalized road grade Wi′ ∈ [0, 1]. A large Wi′ indicates a wide road segment and a strong driver intention to prioritize that segment.
W_i' = \frac{W_i - W_{min}}{W_{max} - W_{min}},
where Wi and Wi′ represent the width of the ith road segment before and after normalization, respectively, and Wmax and Wmin denote the maximum and minimum widths of all road segments within the entire transportation area, respectively.
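Equation (1) is a standard min–max normalization over all segment widths in the study area; a minimal sketch (the widths in the usage example are hypothetical, in meters):

```python
def normalize_widths(widths):
    """Min-max normalize road widths (Equation (1)); results lie in
    [0, 1], with larger values marking wider, more preferred segments."""
    w_min, w_max = min(widths), max(widths)
    return [(w - w_min) / (w_max - w_min) for w in widths]
```

For example, `normalize_widths([6.0, 12.0, 24.0])` maps the narrowest segment to 0.0, the widest to 1.0, and the 12 m segment to 1/3.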
(3)
Congestion Level
Prolonged traffic congestion not only increases energy consumption but also significantly impairs the driving experience. During driving, drivers typically select relatively smooth road segments to optimize their entire driving process. Accordingly, the congestion index can be considered a relatively important influencing factor for evaluating driver routes. The congestion index is a critical indicator that comprehensively reflects road congestion, with a value range of 0–10. A high value indicates severe traffic congestion.
This study first computes the average driving speed v̄i over a specific time interval in order to calculate the congestion index of each road segment. In Equation (2), v̄i denotes the average driving speed of the ith road segment; Li represents the length of the ith road segment; Ni is the number of vehicles measured on the ith road segment; and tij indicates the time taken by the jth vehicle to pass through the ith road segment.
\bar{v}_i = \frac{N_i \times L_i}{\sum_{j=1}^{N_i} t_{ij}},
Subsequently, with reference to the traffic congestion evaluation criteria specified in the Road Traffic Congestion Degree Evaluation Method of the People’s Republic of China, the congestion index Si of each road segment is obtained by matching the average driving speed v̄i against the speed–congestion index correspondence in Table 1.
The normalized congestion level Si′ is then obtained from Equation (3), with the segment-averaged driving speed v̄i as the input variable. The normalization converts the raw congestion index Si (where high values indicate severe congestion) into a dimensionless value Si′ ∈ [0, 1] (where high values indicate smooth traffic):
S_i' = \begin{cases} \frac{\bar{v}_i - v_{min}}{4(v_{max} - v_{min})}, & \text{severe congestion} \\ \frac{\bar{v}_i - v_{min}}{4(v_{max} - v_{min})} + \frac{1}{4}, & \text{moderate congestion} \\ \frac{\bar{v}_i - v_{min}}{4(v_{max} - v_{min})} + \frac{1}{2}, & \text{slight congestion} \\ \frac{\bar{v}_i - v_{min}}{4(v_{max} - v_{min})} + \frac{3}{4}, & \text{smooth traffic} \end{cases}
where vmax and vmin represent the endpoints of the speed interval of the matched congestion category. The congestion level Si′ ∈ [0, 1] is inversely related to congestion severity: values approaching one indicate smooth traffic and a strong driver tendency to select the segment, while values approaching zero denote severe congestion.
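Equations (2) and (3) can be sketched as follows. The speed-interval endpoints come from the Table 1 category matched to the average speed; since the table’s thresholds are not reproduced here, the values in the usage example are hypothetical:

```python
# Quarter-interval offsets of Equation (3), one per congestion category.
OFFSETS = {"severe": 0.0, "moderate": 0.25, "slight": 0.5, "smooth": 0.75}

def average_speed(n_vehicles, seg_length, transit_times):
    """Segment-average speed (Equation (2)): total distance driven
    (N_i vehicles over length L_i) divided by total transit time."""
    return n_vehicles * seg_length / sum(transit_times)

def congestion_level(v_bar, v_min, v_max, category):
    """Normalized congestion level S_i' (Equation (3)): maps v_bar into
    the quarter of [0, 1] reserved for the matched category."""
    return (v_bar - v_min) / (4 * (v_max - v_min)) + OFFSETS[category]
```

For example, a 100 m segment traversed by 2 vehicles in 10 s each yields an average speed of 10 m/s; a speed at the lower endpoint of its category interval maps exactly onto that category’s quarter offset.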
(4)
Motion Vector
When drivers choose a road segment at the road intersection for the next moment, they usually prefer the one closest to their destination. This study introduces the concept of a motion vector to analyze whether the vehicle is moving toward the target destination and evaluates the correlation between driving and destination directions.
The methodology includes:
(1)
Defining the target direction as the vector connecting origin (O) and destination (D);
(2)
Measuring the included angle A (∈ [0°, 180°]) between the current driving direction vector and target direction vector (O → D), which can effectively reflect the intention of a driver.
When angle A is small, the driving direction is close to the target direction; when A is large, the vehicle may be deviating from the target direction or taking a detour. This study utilizes a cosine-based normalization (Equation (4)) to derive the motion vector parameter Ai′ ∈ [0, 1], establishing an inverse relationship between directional deviation and route selection probability.
A_i' = \cos\frac{A_i}{2},
When Ai′ → 1, the driving direction is strongly aligned with the destination direction (A → 0°); when Ai′ → 0, the vehicle deviates substantially from the target path (A → 180°).
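A minimal sketch of this computation with 2D direction vectors, reading Equation (4) as Ai′ = cos(Ai/2); the vector names are illustrative:

```python
import math

def motion_vector(driving_dir, od_dir):
    """Normalized motion vector A_i' = cos(A/2) (Equation (4)), where A
    is the angle between the current driving-direction vector and the
    origin-to-destination (O -> D) vector."""
    dot = driving_dir[0] * od_dir[0] + driving_dir[1] * od_dir[1]
    norm = math.hypot(*driving_dir) * math.hypot(*od_dir)
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    angle = math.acos(max(-1.0, min(1.0, dot / norm)))  # A in [0, pi]
    return math.cos(angle / 2)
```

Heading straight toward the destination gives 1.0, heading directly away gives 0.0, and a perpendicular heading gives cos(45°) ≈ 0.707.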
(5)
Vehicle Density
Vehicle density, which quantifies spatial vehicular concentration, is a critical decision parameter for drivers, who typically choose road segments with low traffic density. Low-density segments help avoid peak-hour congestion and enlarge the safety margin for emergency maneuvers. The density Ki of the ith road segment is computed in Equation (5) as the ratio of its traffic flow Qi to its average speed v̄i (Equation (2)).
K_i = \frac{Q_i}{\bar{v}_i},
At a road intersection, drivers typically compare vehicle densities across the candidate segments to inform their route selection. To model this decision quantitatively, this study applies an inverse min–max normalization (Equation (6)) to the vehicle densities of all road segments connecting at the intersection:
K_i' = \frac{K_{max} - K_i}{K_{max} - K_{min}},
where Ki represents the original vehicle density (veh/km) of the ith approaching road; Ki′ ∈ [0, 1] denotes the normalized vehicle density; and Kmax and Kmin indicate the maximum and minimum observed vehicle densities among all connecting roads, respectively. The normalization inverts the relationship between density and driver tendency: Ki′ close to one indicates a relatively low-density (preferred) route, while Ki′ → 0 indicates a relatively high-density (less preferred) route.
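Equations (5) and (6) together can be sketched as follows, using the inverse normalization described in the text; the flow and speed values in the usage example are hypothetical:

```python
def normalized_densities(flows, speeds):
    """Vehicle density K_i = Q_i / v_i (Equation (5)) followed by the
    inverse min-max normalization of Equation (6), so that K_i' near 1
    marks the least-dense (preferred) approach at the intersection."""
    k = [q / v for q, v in zip(flows, speeds)]
    k_min, k_max = min(k), max(k)
    return [(k_max - ki) / (k_max - k_min) for ki in k]
```

With flows of 600 and 300 veh/h at equal speeds of 30 km/h, the densities are 20 and 10 veh/km, so the denser approach normalizes to 0.0 and the sparser one to 1.0.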

2.1.2. Conditional Probability Estimation

Traditional BNs can quantify driver route selection probabilities at traffic intersections through multivariate influencing factors. Equation (7) calculates the conditional probability Pi of every road segment at a traffic intersection by integrating the key influencing factors: the local motion trajectory Di, normalized road grade Wi′, standardized congestion level Si′, normalized motion vector Ai′, relative vehicle density Ki′, and the other influencing factor Oi. Pi reflects the magnitude of the driving tendency, and the continuous driving route through blind spots is obtained by connecting the road segments with the highest conditional probabilities.
P_i = \frac{D_i + W_i' + S_i' + A_i' + K_i' + O_i}{\sum_{i=1}^{n} \left( D_i + W_i' + S_i' + A_i' + K_i' + O_i \right)},
The driving direction of the target vehicle observed in the surveillance videos provides deterministic evidence for road selection. The local motion trajectory Di therefore carries a decisive weight, and the proposed method first formalizes Di for each road segment in the traffic zone.
Let Di ∈ {0, 1} be the trajectory observation flag for road segment i. When Di = 1, the target vehicle is observed on road segment i, which provides conclusive evidence of the segment selection (probability of 100%). When Di = 0, the target vehicle is not observed on road segment i.
This study incorporates weight coefficients n1–n5 to develop a weighted BN that accounts for the differing influences of the factors on road segment selection. The mathematical formulation is presented as follows:
P_{ij} = D_i + (1 - D_i) \times \frac{n_1 W_j' + n_2 S_j' + n_3 A_j' + n_4 K_j' + n_5 O_j'}{\sum_{j=1}^{n} \left( n_1 W_j' + n_2 S_j' + n_3 A_j' + n_4 K_j' + n_5 O_j' \right)},
where i denotes the ith traffic intersection counted from the initial node of the traffic region and j denotes the jth road segment at the ith intersection, with i = 1, 2, …, m and j = 1, 2, …, n; Pij represents the conditional probability that a driver selects the jth road segment at the ith traffic intersection; Di denotes whether the vehicle traveling on the jth road segment can be identified from the surveillance video, with Di = 1 (identified) or Di = 0 (not identified); Wj′ is the road grade of the jth road segment; Sj′ its congestion level; Aj′ its driving motion vector; Kj′ its vehicle density; Oj′ the other influencing factor affecting the driving intention for the jth road segment; and n1–n5 denote the weight coefficients of the influencing factors Wj′, Sj′, Aj′, Kj′, and Oj′, whose values are obtained through gradual iterative optimization during the model deduction process, with reference to the sensitivity analysis. The specific procedure is illustrated in Figure 2.
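Equation (8) can be sketched for a single intersection as follows. The observation flag is treated per candidate segment here, and all factor values and weights in the usage example are hypothetical:

```python
def segment_probabilities(obs_flags, factors, weights):
    """Conditional probabilities P_ij at one intersection (Equation (8)).
    obs_flags[j] is the local-motion-trajectory flag D for segment j
    (1 = vehicle observed, deterministic evidence); factors[j] is the
    tuple (W', S', A', K', O'); weights is (n1, ..., n5)."""
    scores = [sum(n * f for n, f in zip(weights, fs)) for fs in factors]
    total = sum(scores)
    return [d + (1 - d) * s / total for d, s in zip(obs_flags, scores)]
```

With equal weights of 0.2 and two identical candidate segments, each unobserved segment receives probability 0.5, while an observed segment receives 1.0.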
When the local motion trajectory is zero, the weight coefficients n1–n5 are calculated as follows (Figure 2). First, the values of Wj′, Sj′, Aj′, Kj′, and Oj′ are acquired from the traffic data, and the initial weight coefficients are set to 1/n, where n = 5 is the number of influencing factors; thus, n1 = n2 = … = n5 = 0.2. These ten parameters (the five influencing factors and five initial weight coefficients) are substituted into Equation (8) to calculate Pij. A judgment is then made, based on the BN structure and these parameters, as to whether the road segments connected by the highest-probability Pij values coincide with the actual driving route. If the calculated route matches the actual one, the final values of n1–n5 are obtained; otherwise, n1–n5 are adjusted with a step size of 0.001, Pij is recalculated, and the coincidence between the recalculated route and the actual route is verified again. This loop is repeated until the optimized weight coefficients n1–n5 are achieved.
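The calibration loop above stops as soon as the highest-probability path matches a known actual route. The paper does not fix how the five coefficients are jointly perturbed at each 0.001 step, so the round-robin bump below is only one hypothetical realization, and `predict_route` stands in for the Equation (8) route deduction:

```python
def calibrate_weights(predict_route, actual_route, weights,
                      step=0.001, max_rounds=5000):
    """Iteratively adjust weight coefficients n1-n5 until the route
    predicted from them matches the known actual route (the Figure 2
    loop); the perturbation scheme here is illustrative only."""
    w = list(weights)
    for r in range(max_rounds):
        if predict_route(w) == actual_route:
            return w
        w[r % len(w)] += step  # perturb one coefficient per round
    raise RuntimeError("no weight assignment reproduced the actual route")
```

A toy predictor that switches routes once the first coefficient exceeds 0.25 shows the loop terminating with a calibrated weight vector.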

2.2. Algorithm Flow of the Weighted BN Method

To address the key challenge of calculating conditional probabilities between road segments when inferring driving paths with BNs, this study proposes a weighted BN-based conditional probability estimation method that improves the accuracy of blind-zone vehicle route deduction. The core logic of the proposed method is to combine the strengths of BNs in expressing and reasoning over uncertain knowledge with multi-dimensional driving intention influencing factors, introducing weight coefficients to quantify the differential impacts of the various factors on road segment selection, thereby optimizing the conditional probability estimates and enabling driving route inference in blind zones.
The Algorithm 1 follows a logical sequence of “data acquisition → factor quantification → weight optimization → probability calculation → driving route deduction”, and the specific steps are as follows (combined with Figure 2):
Algorithm 1. The Weighted BN Algorithm
Input: Surveillance videos, traffic attribute data (width, length, speed limit), traffic operation data (vehicle passing time, traffic flow), vehicle origin (O) and destination (D), and the existing actual driving routes.
Step:
1.  Bayesian Network Structure: Based on the moving directions in the transportation network, determine the driving relationships between the road segments in the study area; then, starting from the vehicle origin (O), describe the BN structure and the traffic intersections that include the vehicle destination (D).
2.  Multi-source Data Acquisition: Extract target vehicle trajectories from surveillance videos; collect and organize traffic attributes and traffic operations.
3.  Factor Quantification and Normalization: For each road segment, calculate and normalize influencing factors:
    a. Set local motion trajectory flag Di′ (1 if the target vehicle is observed; 0 otherwise);
    b. Compute normalized road grade Wj′ via Formula (1);
    c. Calculate average speed v̄ via Formula (2), match congestion index Si using Table 1, and then get standardized congestion level Sj′ via Formula (3);
    d. Compute the included angle between driving direction and O → D and then get normalized motion vector Aj′ via Formula (4);
    e. Calculate vehicle density K via Formula (5) and then get normalized vehicle density Kj′ via Formula (6);
    f. Quantify and normalize other factors Oj′ (consistent with above logic).
4.  Weight Coefficient Initialization: Set weight coefficients n1,2,3,4,5 = 0.2 (corresponding to Wj′, Sj′, Aj′, Kj′, Oj′, respectively).
for  i = 1,…, m (m is the number of traffic intersections) do
5.  Conditional Probability Calculation: Determine the ith traffic intersection and its corresponding candidate road segments.
for  j = 1,…, n (n is the number of candidate road segments at ith intersection) do
6.  Scenario-based Probability Estimation: Calculate conditional probability Pij via Formula (8):
    a. If Di′ = 1, set Pij = 1;
    b. If Di′ = 0, compute Pij by weighting Wj′,Sj′, Aj′,Kj′,Oj′ with n1-n5.
7.  Weight Iterative Optimization: Verify whether the high-probability path (determined by Pij) matches the existing actual driving route; if not, adjust n1-n5 in steps of 0.001 and return to Step 5; if yes, fix the optimal weights.
end for
end for
8. Blind-Zone Driving Route Deduction: For blind zones (Di′ = 0), calculate Pij of each candidate road segment using optimal weights; select the segment with the highest Pij as the next driving segment.
Output: The complete driving route of the target vehicle (including blind-zone segments)
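The looped portion of Algorithm 1 (Steps 5-8) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `graph`, `factors`, and `seen` structures are hypothetical, and Formulas (1)-(8) are abstracted into pre-normalized factor values combined by a simple weighted sum.

```python
def segment_probability(f, w, observed):
    """Unnormalized score of one candidate segment (cf. Formula (8)).
    f: dict of normalized factors W', S', A', K'; w: matching weights.
    An observed sighting (D' = 1) forces the probability to 1."""
    if observed:
        return 1.0
    return sum(w[k] * f[k] for k in w)

def infer_route(graph, factors, seen, weights, origin, destination):
    """Greedy route inference: at each intersection, score the child
    segments, normalize over the candidates, and take the maximum."""
    route, node = [origin], origin
    while node != destination:
        children = graph[node]
        scores = {c: segment_probability(factors[c], weights, c in seen)
                  for c in children}
        total = sum(scores.values()) or 1.0  # guard against all-zero scores
        node = max(children, key=lambda c: scores[c] / total)
        route.append(node)
    return route
```

With a toy graph, an observed sighting on segment 02 forces the route through it; without any sighting, the weighted factors (here dominated by the motion vector A') decide.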

2.3. Data Sources

This study selected Taihu Middle Road and its surrounding area in Xinbei District, Changzhou City, Jiangsu Province, China as the experimental region (Figure 3). The fundamental dataset was categorized into two types: one was the vector map of the traffic region extracted from high-resolution Google remote sensing images (with a resolution of 0.25 m); the other included six traffic surveillance videos acquired from the Changzhou Municipal Public Security Bureau, with numbers 027, 080, 256, 262, and 142.
Based on the traffic vector map and the surveillance videos, the experiment first projected the videos onto the 2D vector map, so that geographic coordinates were assigned to each pixel in the surveillance videos. Subsequently, the PaddleDetection vehicle recognition model was used to identify parameters such as vehicle number, driving trajectories, driving distance, and driving time in the surveillance videos, thereby deriving the vehicle driving direction, traffic flow, vehicle speed, and other parameters required for this experiment. Additionally, the experiment used the built-in distance measurement tool of the Google Online Maps Platform to extract the road segment lengths and widths. Table 2 presents the extracted parameters of the road segments.
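The paper does not detail how geographic coordinates are assigned to video pixels. For a fixed camera viewing a roughly planar road surface, a common approach is a plane-to-plane homography fitted from a few ground control points; the sketch below assumes that approach, with hypothetical helper names (`fit_homography`, `pixel_to_geo`) and illustrative control points.

```python
import numpy as np

def fit_homography(px, geo):
    """Fit a 3x3 planar homography (h33 fixed to 1) mapping pixel
    coordinates to map coordinates from four point correspondences.
    Illustrative only; the paper does not specify its projection step."""
    A, b = [], []
    for (x, y), (X, Y) in zip(px, geo):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_geo(H, x, y):
    """Map one video pixel to map coordinates (homogeneous divide)."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w
```

Four non-collinear control points determine the mapping exactly; with more points, a least-squares fit would be used instead.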

3. Results

3.1. Experimental Process

Suppose a target vehicle was tracked in the experimental region (Figure 4). The target vehicle was first captured by surveillance camera 027 near Taihu Middle Road in Xinbei District and last appeared on surveillance camera 142 before disappearing. To restore the actual driving route of this vehicle in the Taihu Middle Road region, the experiment conducted calculations in three steps: BN structure extraction, influencing factor quantification, and conditional probability calculation, thereby obtaining the possible driving routes of the target vehicle.

3.1.1. BN Structure Extraction

In this experiment, road segment 01 was taken as the initial segment of the inference network, destination road segment 33 as the target segment, and the remaining independent road segments in the region as intermediate nodes. Based on the directions of road operation, the DAG in Figure 5b can be derived from Figure 5a, making it highly suitable for expressing and inferring uncertain driving paths. Here, road segments denote random variables, directed edges between nodes signify the permitted driving directions among these segments, and the strength of dependencies is quantified by conditional probabilities. Road segment 01 is the root parent node, with 02 and 03 as its child nodes; when 02 serves as a parent node, 04 and 07 become its child nodes, and so on, until 32 and 33 are the child nodes of 29 and 34. Furthermore, BNs enable the correlation of road segments. The specific structural diagram is shown in Figure 5c.
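The parent-child relationships described above can be encoded as a directed adjacency structure and sanity-checked for acyclicity, since a valid BN structure must be a DAG. The edge list below is an illustrative fragment of the described network, not the full 47-segment graph.

```python
from collections import defaultdict

# Illustrative fragment of the directed road-segment graph of Section 3.1.1
# (parent -> child); edge directions follow permitted travel.
edges = [("01", "02"), ("01", "03"), ("02", "04"), ("02", "07"),
         ("29", "32"), ("29", "33"), ("34", "32"), ("34", "33")]

children = defaultdict(list)
parents = defaultdict(list)
for u, v in edges:
    children[u].append(v)
    parents[v].append(u)

def is_acyclic(nodes, children):
    """Kahn's algorithm: repeatedly remove zero-in-degree nodes; the
    structure is a DAG iff every node gets removed."""
    indeg = {n: 0 for n in nodes}
    for u in nodes:
        for v in children.get(u, []):
            indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in children.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)
```

The `parents` map directly reflects the text's description that 33 has 29 and 34 as its parent nodes.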

3.1.2. Influencing Factor Quantification

Five influencing factors are identified in Section 2, and their quantification processes are described as follows:
(1)
Local Motion Trajectory. After the surveillance videos of the traffic region are projected onto the 2D map, the driving trajectories of the target vehicle in the videos are fully consistent with the road directions on the map. Accordingly, whether the target vehicle enters or exits a given road segment can be determined from the vehicle trajectory in the surveillance video. If a surveillance camera is installed on a road segment and the entering or exiting direction of the target vehicle is identified from the video, the local motion trajectory flag of this segment is set to one; otherwise, it is assigned a value of zero. The specific results are shown in the second column of Table 3.
(2)
Road Grade. To obtain the road grade of each road segment, the experiment first took the segment widths from Table 2. Thereafter, the maximum and minimum widths among all road segments involved in Figure 3 were determined. Finally, the normalized road grade index was calculated for each road segment using Formula (1). The specific results are shown in the third column of Table 3.
(3)
Congestion Level. To obtain the congestion level of each road segment, the experiment used information from Table 2, such as the number of vehicles identified per hour and the road segment length. After calculating the average vehicle speed using Formula (2), the experiment referred to Table 1 to obtain congestion index Si. Thereafter, the standardized road congestion level was calculated using Formula (3). The specific results are shown in the fourth column of Table 3.
(4)
Motion Vector. To obtain the normalized motion vector of each road segment, the experiment first identified all locations where the target vehicle appeared in the surveillance videos and sorted them chronologically, yielding the order 027, 080, 262, 256, and 142. Secondly, the experiment determined the direction lines between consecutive points: from the starting point to 027, from 027 to 080, from 080 to 262, from 262 to 256, and from 256 to 142, with 142 as the ending point. Thereafter, the experiment calculated the angle between each direction line and the driving direction of the corresponding road segment and normalized it using Formula (4). For example, the motion vector of road segment 14 in Figure 6 was calculated from the angle between the direction of road segment 14 and the direction from 080 to 262. The specific results are shown in the fifth column of Table 3.
(5)
Vehicle Density. To obtain the relative vehicle density of each road segment, the experiment first calculated vehicle density K using Formula (5), based on the traffic flow and vehicle speed of each segment obtained from Table 2. Considering that a single intersection connects multiple road segments, the experiment then determined the maximum and minimum vehicle densities among the different segments. Thereafter, Formula (6) was used to compute the normalized vehicle densities. The specific results are shown in the last column of Table 3.
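The factor quantifications in items (3)-(5) can be sketched as follows. The exact forms of Formulas (3)-(6) are not reproduced here: the band-midpoint congestion index, the angle mapping A' = 1 - theta/180, and the min-max scaling are assumed stand-ins consistent with Table 1 and the descriptions above.

```python
import math

# Per speed limit: lower speed bounds of the four Table 1 bands, best
# (smooth traffic) first; the "<40" row serves as the fallback.
BANDS = {80: [45, 30, 20, 0], 70: [40, 30, 20, 0], 60: [35, 30, 20, 0],
         50: [30, 25, 15, 0], 40: [25, 20, 15, 0]}
MIDPOINTS = [1.25, 3.75, 6.25, 8.75]  # midpoints of the S_i ranges in Table 1

def congestion_index(speed_limit, avg_speed):
    """Look up Table 1: find the band containing the average speed and
    return that band's midpoint as S_i (the midpoint choice is an assumption)."""
    for lower, s in zip(BANDS.get(speed_limit, [25, 20, 10, 0]), MIDPOINTS):
        if avg_speed >= lower:
            return s
    return MIDPOINTS[-1]

def normalized_motion_vector(seg_dir, target_dir):
    """Angle between a segment's driving direction and the current O->D
    direction line, mapped to [0, 1] via A' = 1 - theta/180 (assumed):
    aligned segments score near 1, opposing segments near 0."""
    dot = seg_dir[0] * target_dir[0] + seg_dir[1] * target_dir[1]
    norm = math.hypot(*seg_dir) * math.hypot(*target_dir)
    theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return 1.0 - theta / 180.0

def vehicle_density(flow_veh_per_h, avg_speed_km_h):
    """K = q / v, the standard flow-speed-density relation (assumed Formula (5))."""
    return flow_veh_per_h / avg_speed_km_h

def min_max_normalize(values):
    """Min-max scaling across the candidate segments of one intersection
    (assumed form of Formula (6))."""
    lo, hi = min(values), max(values)
    return [1.0] * len(values) if hi == lo else [(v - lo) / (hi - lo) for v in values]
```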

3.1.3. Probability of Driving Tendency

To quantify the degree to which the different influencing factors affect driver tendency, the experiment tracked the actual driving routes of 100 vehicles over varying time periods. These 100 driving routes were divided into peak and off-peak hours. The weight coefficients of the influencing factors in the different time periods were obtained through the step-by-step iterative refinement shown in Figure 2. The specific results are shown in Table 4.
The specific model of the weighted BNs could be obtained by substituting the weight coefficients in Table 4 into Formula (8), as shown in Formula (9).
$$
P_i = D_j + (1 - D_j) \times
\begin{cases}
\dfrac{0.143\,W_i + 0.209\,S_i + 0.556\,A_i + 0.092\,K_i}{\sum_{i=1}^{n}\left(0.143\,W_i + 0.209\,S_i + 0.556\,A_i + 0.092\,K_i\right)}, & \text{peak hours}\\[2ex]
\dfrac{0.124\,W_i + 0.155\,S_i + 0.639\,A_i + 0.082\,K_i}{\sum_{i=1}^{n}\left(0.124\,W_i + 0.155\,S_i + 0.639\,A_i + 0.082\,K_i\right)}, & \text{off-peak hours}
\end{cases}
\tag{9}
$$
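Formula (9) can be instantiated directly in code using the Table 4 weights. The dictionary-based interface below is an illustrative sketch, not the authors' implementation.

```python
PEAK = {"W": 0.143, "S": 0.209, "A": 0.556, "K": 0.092}
OFF_PEAK = {"W": 0.124, "S": 0.155, "A": 0.639, "K": 0.082}

def formula_9(candidates, observed, weights):
    """Evaluate Formula (9) for the candidate segments at one intersection.
    candidates: {segment_id: {"W":..,"S":..,"A":..,"K":..}} of normalized
    factors; observed: set of segment ids with D' = 1. Probabilities sum
    to 1 over the candidates unless an observation forces P = 1."""
    scores = {s: sum(weights[k] * f[k] for k in weights)
              for s, f in candidates.items()}
    total = sum(scores.values())
    return {s: 1.0 if s in observed else scores[s] / total
            for s in candidates}
```

With two candidate segments differing mainly in motion vector A', the peak-hour weights assign most of the probability mass to the better-aligned segment.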

3.2. Experimental Results and Analysis

Based on the starting point of road segment 01 and the destination of road segment 33 as defined in our experimental setup, the driving probabilities for the target vehicle on each corresponding road segment (nodes in Figure 5) can be derived. The inference process follows a sequential greedy logic: starting from road segment 01, at each intersection, the probability values of all connected road segments are compared, and the segment with the highest probability is selected to proceed. This stepwise selection is repeated iteratively at subsequent intersections until the destination road segment 33 is reached.
This experimental procedure was implemented computationally by substituting the relevant road segment data (numbered 01 to 33, as listed in Table 2) into Formula (9). For comparative analysis, the probability distribution outcomes presented in Table 5 were calculated using the traditional Bayesian network method (Formula (8)). As illustrated in Figure 7, the driving route inferred by the proposed method is derived by sequentially connecting the road segments with the highest conditional probabilities according to the deduction network structure, resulting in the following path:
01 → 02 → 04 → 14 → 15 → 16 → 18 → 21 → 23 → 25 → 27 → 29 → 33.
All computations were performed on a standard laptop equipped with an Intel Core i5-9300H processor (2.4 GHz, 4 cores), 16 GB of RAM, and the Windows 10 operating system. The computational cost of the proposed method primarily depends on the complexity of the origin–destination (OD) pair, mainly determined by the quantity and duration of surveillance videos to be processed, followed by the number of intersections and road segments along the candidate routes. For the identified optimal route between segments 01 and 33—which involves processing videos from 6 surveillance cameras (totaling 32 min or 1920 s), 12 intersections, and 24 road segments—the total computation time was approximately 1736 s. Within this total, video-based vehicle recognition accounted for 1728 s (based on extracting one frame every 2 s for recognition, with each frame requiring 1.8 s of processing time). In comparison, the traditional Bayesian method completed the inference for the same scenario in only 7.8 s. Thus, while the video recognition component is time-intensive, the core inference cost of the proposed method, excluding video processing, is comparable to that of the traditional BN.
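The reported timing breakdown can be checked arithmetically: 32 min of footage sampled at one frame every 2 s yields 960 frames, and at 1.8 s per frame the recognition stage alone accounts for 1728 s of the 1736 s total.

```python
video_seconds = 32 * 60              # total footage from the six cameras: 1920 s
frames = video_seconds // 2          # one frame extracted every 2 s
recognition_s = frames * 1.8         # 1.8 s of model processing per frame
inference_s = 1736 - recognition_s   # remainder attributable to route inference
```

The roughly 8 s remainder is of the same order as the 7.8 s reported for the traditional BN, consistent with the text's claim that the core inference costs are comparable once video processing is excluded.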
It is important to note that the proposed method is designed to simulate and reconstruct the trajectory of a target vehicle in blind-zone areas by integrating surveillance camera data across the network. Although the target detection process in surveillance videos is computationally intensive, the integration of multi-source video data and the weighted modeling of influencing factors allow this method to achieve significantly higher accuracy in road network analysis within blind zones, justifying the trade-off in processing time for applications where precision is critical.

4. Conclusions

This study proposed a vehicle route inference method for traffic monitoring blind zones. By integrating multiple influencing factors (local motion trajectory, road classification, congestion degree, motion vector, and vehicle density), this method effectively incorporates the weights of these factors into driving route deduction.
The proposed method can analyze, extract, and perform weighted quantification of the influencing factors, including the roadside environment and traffic flow conditions. Compared with traditional BNs, the proposed method accounts for the weights of the influencing factors, enabling it to predict the driving trajectory of the target vehicle more accurately. However, estimating these weights requires a large amount of real vehicle trajectory data. For an unknown research area without such data, it is difficult to estimate the weights of the model's influencing factors and construct its weighted BN model.
This method accurately reveals driving route selection within monitoring blind spots and significantly improves the accuracy of driving route selection and blind-zone route reconstruction. In future research, this study will integrate traffic rules and the changes in driver decisions under different traffic signal timing schemes. Accordingly, the proposed method can fully adapt to the complex and changeable urban traffic environment and provide powerful technical support and data guarantees for the efficient operation of intelligent transportation systems.

5. Patents

There is a Chinese patent resulting from the work reported in this manuscript. The patent title is “A vehicle trajectory prediction method for large blind areas in surveillance”, and its number is ZL 2025 1 0160516.0.

Author Contributions

Conceptualization, Yuxia Bian and Jinbao Liu; methodology, Yuxia Bian; software, Xiaolong Su; validation, Yuxia Bian, Jinbao Liu and Yuanjie Tang; formal analysis, Yuxia Bian; investigation, Yuanjie Tang; resources, Yuxia Bian; data curation, Xiaolong Su; writing—original draft preparation, Yuxia Bian; writing—review & editing, Yuxia Bian and Yuanjie Tang; visualization, Yuxia Bian; supervision, Yuxia Bian and Jinbao Liu; project administration, Yuxia Bian; funding acquisition, Yuxia Bian and Jinbao Liu. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 41601422; the Sichuan Provincial Science and Technology & Education Joint Fund, grant number 2025NSFSC2089; the Open Project of the Key Laboratory of Atmospheric Environment Simulation and Pollution Control of Sichuan Provincial Institutions of Higher Education, grant number KFKT-YB-202404; and the University-Level Project for Enhancing Teachers’ Technological and Innovative Capabilities, grant number KYQN202309.

Data Availability Statement

The data supporting the findings were provided by the Changzhou Public Security Bureau of China and are subject to confidentiality restrictions. Due to the sensitive and confidential nature of the data, which relate to public security and privacy protection, the datasets are not publicly available. Requests to access the datasets should be directed to the Changzhou Public Security Bureau of China with appropriate authorization and compliance with relevant national and institutional regulations.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Estimation diagram of BNs. The red arrow is the possible direction of the target vehicle movement in (a), and the numbers 01–05 are the identification codes of road segments in (a,b).
Figure 2. Framework for inferring weight coefficients. Orange boxes denote known input data. Gray boxes represent influencing factors. Purple boxes indicate conditional probability computation and the resulting inferred route of the target vehicle. Yellow boxes show the optimized weight coefficients obtained through iterative refinement. If the target vehicle is detected in surveillance video on a given road segment, the conditional probability is computed directly from the video. Otherwise, the system follows the dashed-line procedure: initializing weights, estimating conditional probabilities via a Bayesian network, comparing the inferred route with ground-truth trajectories, and iteratively adjusting weights (in steps of ±0.001) until consistency is achieved.
Figure 3. Experimental region and its basic data. Numbers 1 to 47 represent road segments divided by traffic intersections and the numbers 027, 080, 256, 262, and 142 correspond to surveillance videos obtained from the Changzhou Public Security Bureau.
Figure 4. The target vehicle (a white vehicle) in the experimental region.
Figure 5. Traffic region and its BN structure. (a) Traffic region; (b) DAG; (c) BN structure. The red and green five-pointed stars represent the vehicle origin (O) and destination (D) in (b). The arrows indicate the driving direction between the road segments in (b,c), and the numbers in (a–c) are the road segment numbers.
Figure 6. Angle A. The yellow arrows indicate the traffic directions of the road segments or the target driving direction. A is the angle between the traffic direction of road segment 14 and the target driving direction from 080 to 262.
Figure 7. Driving route of the target vehicle. The gray numbers are the calculated conditional probabilities of the road segments, and the blue line represents the possible route of the target vehicle derived using the proposed method.
Table 1. Correlation between speed and congestion index. Unit: kilometers per hour (km/h).

| Speed Limit | Smooth Traffic | Slight Congestion | Moderate Congestion | Severe Congestion |
|---|---|---|---|---|
| 80 | [45, 80) | [30, 45) | [20, 30) | [0, 20) |
| 70 | [40, 70) | [30, 40) | [20, 30) | [0, 20) |
| 60 | [35, 60) | [30, 35) | [20, 30) | [0, 20) |
| 50 | [30, 50) | [25, 30) | [15, 25) | [0, 15) |
| 40 | [25, 40) | [20, 25) | [15, 25) | [0, 15) |
| <40 | [25, limit) | [20, 25) | [10, 20) | [0, 10) |
| Index Si | 0–2.5 | 2.5–5 | 5–7.5 | 7.5–10 |
Table 2. Extracted parameters of some road segments.

| Time | Road Number | Surveillance Video | Width (m) | Length (m) | Traffic Volume (vehicles/hour) |
|---|---|---|---|---|---|
| 8:24 | 01 | 1 | 33 | 425 | 188 |
| 8:24 | 02 | 0 | 30 | 116 | 369 |
| 8:24 | 03 | 1 | 33 | 108 | 278 |
| 8:26 | 04 | 1 | 30 | 124 | 371 |
| 8:26 | 05 | 0 | 8.6 | 107 | 124 |
| 8:26 | 06 | 0 | 6 | 125 | 51 |
| 8:26 | 07 | 0 | 6 | 106 | 89 |
| 8:24 | 08 | 0 | 6 | 114 | 73 |
| 8:24 | 09 | 0 | 33 | 114 | 279 |
| 8:24 | 10 | 0 | 33 | 40 | 238 |
| 8:24 | 11 | 0 | 14 | 259 | 126 |
| 8:24 | 12 | 1 | 8.6 | 114 | 129 |
| 8:24 | 13 | 0 | 8.6 | 42 | 104 |
| 8:26 | 14 | 1 | 30 | 510 | 352 |
| 8:26 | 15 | 0 | 11 | 110 | 231 |
| 8:26 | 16 | 0 | 11 | 118 | 227 |
| 8:24 | 17 | 1 | 14 | 530 | 163 |
| 11:53 | 18 | 1 | 14 | 80 | 203 |
| 11:53 | 19 | 0 | 40 | 351 | 370 |
| 11:53 | 20 | 0 | 14 | 298 | 940 |
| 11:53 | 21 | 1 | 40 | 130 | 403 |
| 8:26 | 22 | 0 | 7 | 103 | 210 |
| 8:24 | 23 | 1 | 40 | 122 | 400 |
| 8:24 | 24 | 1 | 30 | 160 | 268 |
| 8:24 | 25 | 1 | 40 | 215 | 560 |
| 8:24 | 26 | 0 | 30 | 204 | 304 |
| 8:24 | 27 | 0 | 40 | 210 | 348 |
| 8:24 | 28 | 0 | 8.75 | 112 | 121 |
| 12:16 | 29 | 1 | 40 | 230 | 90 |
| 12:16 | 30 | 0 | 11.73 | 393 | 112 |
| 12:16 | 31 | 0 | 11.73 | 155 | 154 |
| 12:16 | 32 | 1 | 33.5 | 309 | 85 |
| 12:16 | 33 | 1 | 40 | 281 | 70 |
| 12:16 | 34 | 1 | 33.5 | 231 | 55 |
| 8:26 | 35 | 0 | 8.8 | 242 | 75 |
| 8:26 | 36 | 0 | 8.8 | 450 | 91 |
| 12:16 | 37 | 0 | 11.73 | 575 | 112 |
| 12:16 | 38 | 0 | 33.5 | 218 | 75 |
| 12:16 | 39 | 0 | 20 | 226 | 91 |
| 12:16 | 40 | 0 | 33.5 | 257 | 65 |
| 8:26 | 41 | 0 | 8.6 | 297 | 81 |
| 12:16 | 42 | 0 | 11.73 | 263 | 136 |
| 8:26 | 43 | 0 | 8.6 | 431 | 79 |
| 8:24 | 44 | 0 | 33 | 319 | 178 |
| 12:16 | 45 | 0 | 33.5 | 266 | 63 |
| 8:24 | 46 | 0 | 33 | 217 | 138 |
| 12:16 | 47 | 0 | 33.5 | 409 | 57 |
Table 3. Quantification result of the influencing factors.

| Number | Local Motion Trajectory (Di) | Normalized Road Grade Index (Wi) | Standardized Congestion Level (Si) | Normalized Motion Vector (Ai) | Relative Vehicle Density (Ki) |
|---|---|---|---|---|---|
| 1 | 1 | 0.787 | 0.236 | 1 | 0.753 |
| 2 | 0 | 0.692 | 0.355 | 1 | 0.572 |
| 3 | 0 | 0.787 | 0.258 | 0.653 | 0.626 |
| 4 | 1 | 0.692 | 0.355 | 1 | 0.572 |
| 5 | 0 | 0.021 | 0.285 | 0.756 | 0.834 |
| 6 | 0 | 0.019 | 0.289 | 0.921 | 0.59 |
| 7 | 0 | 0.019 | 0.315 | 0.644 | 0.57 |
| 8 | 0 | 0.019 | 0.291 | 0.935 | 0.61 |
| 9 | 0 | 0.787 | 0.258 | 0.653 | 0.626 |
| 10 | 0 | 0.787 | 0.278 | 0.335 | 0.691 |
| 11 | 0 | 0.182 | 0.179 | 0.929 | 0.695 |
| 12 | 0 | 0.021 | 0.285 | 0.3 | 0.801 |
| 13 | 0 | 0.021 | 0.284 | 0.265 | 0.811 |
| 14 | 1 | 0.612 | 0.373 | 0.986 | 0.611 |
| 15 | 0 | 0.086 | 0.378 | 0.97 | 0.747 |
| 16 | 0 | 0.086 | 0.639 | 0.875 | 0.751 |
| 17 | 0 | 0.182 | 0.16 | 0.052 | 0.658 |
| 18 | 0 | 0.182 | 0.336 | 1 | 0.796 |
| 19 | 0 | 0.913 | 0.425 | 0.095 | 0.371 |
| 20 | 0 | 0.184 | 0.331 | 0.851 | 0.712 |
| 21 | 1 | 0.987 | 0.445 | 1 | 0.579 |
| 22 | 0 | 0.021 | 0.314 | 0.944 | 0.702 |
| 23 | 1 | 0.987 | 0.445 | 1 | 0.598 |
| 24 | 0 | 0.692 | 0.403 | 0.75 | 0.742 |
| 25 | 0 | 0.877 | 0.367 | 1 | 0.319 |
| 26 | 0 | 0.678 | 0.387 | 0.849 | 0.701 |
| 27 | 0 | 0.963 | 0.436 | 1 | 0.66 |
| 28 | 0 | 0.103 | 0.53 | 0.853 | 0.92 |
| 29 | 0 | 0.957 | 0.481 | 1 | 0.986 |
| 30 | 0 | 0.11 | 0.68 | 0.843 | 0.901 |
| 31 | 0 | 0.11 | 0.681 | 0.552 | 0.869 |
| 32 | 0 | 0.802 | 0.399 | 0.848 | 0.995 |
| 33 | 1 | 0.787 | 0.473 | 1 | 0.699 |
| 34 | 0 | 0.802 | 0.389 | 0.913 | 0.989 |
| 35 | 0 | 0.016 | 0.339 | 0.964 | 0.91 |
| 36 | 0 | 0.016 | 0.341 | 0.376 | 0.929 |
| 37 | 0 | 0.11 | 0.688 | 0.962 | 0.907 |
| 38 | 0 | 0.802 | 0.376 | 0.958 | 0.973 |
| 39 | 0 | 0.651 | 0.371 | 0.017 | 0.867 |
| 40 | 0 | 0.802 | 0.374 | 0.98 | 0.979 |
| 41 | 0 | 0.02 | 0.345 | 0.912 | 0.924 |
| 42 | 0 | 0.11 | 0.683 | 0.298 | 0.897 |
| 43 | 0 | 0.02 | 0.347 | 0.67 | 0.948 |
| 44 | 0 | 0.787 | 0.239 | 0.801 | 0.749 |
| 45 | 0 | 0.802 | 0.389 | 0.154 | 0.982 |
| 46 | 0 | 0.787 | 0.231 | 0.713 | 0.76 |
| 47 | 0 | 0.802 | 0.387 | 0.131 | 0.977 |
Table 4. Weight coefficients of the influencing factors.

| Time Interval | nw | ns | na | nk |
|---|---|---|---|---|
| Peak hours | 0.143 | 0.209 | 0.556 | 0.092 |
| Off-peak hours | 0.124 | 0.155 | 0.639 | 0.082 |
Table 5. Driving path probability among different road segments.

| Road Segment | Traditional BNs | Proposed Method |
|---|---|---|
| 01 | 100.00% | 100.00% |
| 02 | 52.97% | 57.10% |
| 03 | 47.03% | 42.89% |
| 04 | 70.02% | 100.00% |
| 43 | 26.31% | 31.00% |
| 14 | 48.54% | 100.00% |
| 15 | 33.92% | 40.28% |
| 24 | 60.57% | 38.50% |
| 36 | 25.86% | 21.20% |
| 22 | 45.73% | 48.40% |
| 16 | 54.27% | 51.59% |
| 18 | 75.90% | 100.00% |
| 19 | 22.76% | 17.17% |
| 20 | 26.22% | 35.54% |
| 21 | 51.02% | 100.00% |
| 23 | 71.15% | 100.00% |
| 25 | 33.01% | 52.43% |
| 26 | 33.67% | 47.56% |
| 27 | 56.29% | 55.26% |
| 28 | 43.71% | 44.73% |
| 31 | 27.07% | 44.02% |
| 29 | 41.92% | 55.47% |
| 30 | 31.01% | 44.52% |
| 33 | 100.00% | 100.00% |

