Search Results (63)

Search Parameters:
Keywords = highway scenes

21 pages, 7741 KB  
Article
Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network
by Renhao Rao, Changcai Cui, Liang Chen, Zhizhao Ouyang and Shuang Chen
Photonics 2025, 12(12), 1206; https://doi.org/10.3390/photonics12121206 - 8 Dec 2025
Viewed by 292
Abstract
The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, which is captured using a four-angle polarization camera under real highway tunnel conditions, covering the entire process of entering tunnels, inside tunnels, and exiting tunnels, systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and richer texture information, it adaptively performs dark region enhancement and glare suppression according to scene brightness and dark region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of physical priors, coupled with detection-driven perception-oriented loss and a beyond mechanism to explicitly fuse physics and deep semantics to surpass physical limitations. Experimental results show that compared to original images, Polar-PENet (beyond-fusion network) achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, on image quality metrics, surpassing the performance of PPEAM (polarimetric physics-enhanced algorithm) which scores 18.89 and 0.5257. In terms of downstream object detection performance, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, representing a 99.7% improvement over original images and a 12.1% performance boost over PPEAM’s 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than the physics-enhanced algorithm PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the effective paradigm of using polarimetric physics as a prior and surpassing physics through learning methods. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
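For reference, the Stokes parameters, DoLP, and AoLP named in the abstract are conventionally derived from the four intensity channels of a four-angle polarization camera as sketched below; this is a generic illustration on made-up arrays, not the paper's PPEAM implementation.

```python
import numpy as np

def polarization_features(i0, i45, i90, i135, eps=1e-6):
    """Stokes parameters, DoLP, and AoLP from four-angle polarization
    intensities (generic textbook formulas, not the paper's PPEAM code)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # 0 deg vs. 90 deg component
    s2 = i45 - i135                             # 45 deg vs. 135 deg component
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)             # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aolp

# Stand-in data; real inputs are the four channels of the polarization camera.
frames = np.random.default_rng(0).random((4, 64, 64))
s0, s1, s2, dolp, aolp = polarization_features(*frames)
print(dolp.mean(), aolp.mean())
```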

19 pages, 4107 KB  
Article
Structured Prompting and Collaborative Multi-Agent Knowledge Distillation for Traffic Video Interpretation and Risk Inference
by Yunxiang Yang, Ningning Xu and Jidong J. Yang
Computers 2025, 14(11), 490; https://doi.org/10.3390/computers14110490 - 9 Nov 2025
Viewed by 1076
Abstract
Comprehensive highway scene understanding and robust traffic risk inference are vital for advancing Intelligent Transportation Systems (ITS) and autonomous driving. Traditional approaches often struggle with scalability and generalization, particularly under the complex and dynamic conditions of real-world environments. To address these challenges, we introduce a novel structured prompting and multi-agent collaborative knowledge distillation framework that enables automatic generation of high-quality traffic scene annotations and contextual risk assessments. Our framework orchestrates two large vision–language models (VLMs): GPT-4o and o3-mini, using a structured Chain-of-Thought (CoT) strategy to produce rich, multiperspective outputs. These outputs serve as knowledge-enriched pseudo-annotations for supervised fine-tuning of a much smaller student VLM. The resulting compact 3B-scale model, named VISTA (Vision for Intelligent Scene and Traffic Analysis), is capable of understanding low-resolution traffic videos and generating semantically faithful, risk-aware captions. Despite its significantly reduced parameter count, VISTA achieves strong performance across established captioning metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr) when benchmarked against its teacher models. This demonstrates that effective knowledge distillation and structured role-aware supervision can empower lightweight VLMs to capture complex reasoning capabilities. The compact architecture of VISTA facilitates efficient deployment on edge devices, enabling real-time risk monitoring without requiring extensive infrastructure upgrades. Full article
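For reference, BLEU-4, one of the captioning metrics listed above, is the geometric mean of 1- to 4-gram precisions; a minimal computation with NLTK is sketched below using made-up captions, not the paper's evaluation pipeline.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Made-up reference and candidate captions for a highway clip.
reference = "a truck merges onto the wet highway creating a collision risk".split()
candidate = "a truck merges onto the highway in the rain creating a risk".split()

# BLEU-4: uniform weights over 1- to 4-gram precisions, smoothed for short texts.
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4 = {bleu4:.3f}")
```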

24 pages, 8077 KB  
Article
A Cooperative Car-Following Eco-Driving Strategy for a Plug-In Hybrid Electric Vehicle Platoon in the Connected Environment
by Zhenwei Lv, Tinglin Chen, Junyan Han, Kai Feng, Cheng Shen, Xiaoyuan Wang, Jingheng Wang, Quanzheng Wang, Longfei Chen, Han Zhang and Yuhan Jiang
Vehicles 2025, 7(4), 111; https://doi.org/10.3390/vehicles7040111 - 1 Oct 2025
Viewed by 663
Abstract
The development of the Connected and Autonomous Vehicle (CAV) and Hybrid Electric Vehicle (HEV) provides a new effective means for the optimization of eco-driving strategies. However, the existing research has not effectively considered the cooperative speed optimization and power allocation problem of the Connected and Autonomous Plug-in Hybrid Electric Vehicle (CAPHEV) platoon. To this end, a hierarchical eco-driving strategy is proposed, which aims to enhance driving efficiency and fuel economy while ensuring the safety and comfort of the platoon. Firstly, an improved car-following model is proposed, which considers the motion states of multiple preceding vehicles. On this basis, a platoon cooperative car-following decision-making method based on model predictive control is designed. Secondly, a distributed energy management strategy is constructed, and a bionic optimization algorithm based on the behavior of nutcrackers is introduced to solve nonlinear problems, so as to solve the energy distribution and management problems of powertrain systems. Finally, the tests are conducted under the driving cycle of the Urban Dynamometer Driving Schedule (UDDS) and the Highway Fuel Economy Test (HWFET). The results show that the proposed strategy can ensure the driving safety of the CAPHEV platoon in different scenes, and has excellent tracking accuracy and driving comfort. Compared with the rule-based strategy, the equivalent energy consumption of UDDS and HWFET is reduced by 20.7% and 5.5% in the battery’s healthy charging range, respectively. Full article
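To make the car-following control layer concrete, the sketch below sets up a deliberately simplified single-follower MPC (double-integrator dynamics, constant-time-gap spacing policy, solved with SciPy); it illustrates the general idea only and is not the paper's cooperative platoon formulation.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.5, 10                  # step [s], prediction horizon
T_GAP, D_MIN = 1.5, 5.0          # desired time gap [s], standstill gap [m]

def rollout(acc, x_f, v_f):
    """Integrate follower position/velocity under a candidate acceleration sequence."""
    xs, vs = [], []
    for a in acc:
        v_f = v_f + a * DT
        x_f = x_f + v_f * DT
        xs.append(x_f); vs.append(v_f)
    return np.array(xs), np.array(vs)

def cost(acc, x_f, v_f, leader_x):
    xs, vs = rollout(acc, x_f, v_f)
    gap_err = (leader_x - xs) - (D_MIN + T_GAP * vs)      # spacing-policy error
    return np.sum(gap_err**2) + 0.5 * np.sum(acc**2)      # tracking + comfort terms

# Made-up scenario: leader cruising at 20 m/s, follower 30 m behind at 18 m/s.
leader_x = 30.0 + 20.0 * DT * np.arange(1, H + 1)
res = minimize(cost, np.zeros(H), args=(0.0, 18.0, leader_x),
               bounds=[(-3.0, 2.0)] * H)                  # acceleration limits [m/s^2]
print("first planned acceleration:", round(res.x[0], 2), "m/s^2")
```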

24 pages, 5065 KB  
Article
Benchmark Dataset and Deep Model for Monocular Camera Calibration from Single Highway Images
by Wentao Zhang, Wei Jia and Wei Li
Sensors 2025, 25(18), 5815; https://doi.org/10.3390/s25185815 - 18 Sep 2025
Viewed by 914
Abstract
Single-image based camera auto-calibration holds significant value for improving perception efficiency in traffic surveillance systems. However, existing approaches face dual challenges: scarcity of real-world datasets and poor adaptability to multi-view scenarios. This paper presents a systematic solution framework. First, we constructed a large-scale synthetic dataset containing 36 highway scenarios using the CARLA 0.9.15 simulation engine, generating approximately 336,000 virtual frames with precise calibration parameters. The dataset achieves statistical consistency with real-world scenes by incorporating diverse view distributions, complex weather conditions, and varied road geometries. Second, we developed DeepCalib, a deep calibration network that explicitly models perspective projection features through the triplet attention mechanism. This network simultaneously achieves road direction vanishing point localization and camera pose estimation using only a single image. Finally, we adopted a progressive learning paradigm: robust pre-training on synthetic data establishes universal feature representations in the first stage, followed by fine-tuning on real-world datasets in the second stage to enhance practical adaptability. Experimental results indicate that DeepCalib attains an average calibration precision of 89.6%. Compared to conventional multi-stage algorithms, our method achieves a single-frame processing speed of 10 frames per second, showing robust adaptability to dynamic calibration tasks across diverse surveillance views. Full article
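The link between a road-direction vanishing point and camera pose rests on a standard relation: the vanishing point back-projects through the intrinsic matrix to the 3D direction of the road in the camera frame. The snippet below illustrates this with assumed intrinsics and a made-up vanishing point; it is not the DeepCalib network itself.

```python
import numpy as np

fx = fy = 1200.0                 # assumed focal length [px]
cx, cy = 960.0, 540.0            # assumed principal point
u, v = 1100.0, 420.0             # made-up road-direction vanishing point

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# The road direction is proportional to K^-1 [u, v, 1]^T.
d = np.linalg.solve(K, np.array([u, v, 1.0]))
d /= np.linalg.norm(d)

# With the camera looking along +z (x right, y down), the road direction's
# pan and tilt relative to the optical axis follow directly.
pan = np.degrees(np.arctan2(d[0], d[2]))
tilt = np.degrees(np.arctan2(d[1], d[2]))
print(f"pan {pan:.1f} deg, tilt {tilt:.1f} deg")
```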

28 pages, 6018 KB  
Article
Analysis of Factors Influencing Driving Safety at Typical Curve Sections of Tibet Plateau Mountainous Areas Based on Explainability-Oriented Dynamic Ensemble Learning Strategy
by Xinhang Wu, Fei Chen, Wu Bo, Yicheng Shuai, Xue Zhang, Wa Da, Huijing Liu and Junhao Chen
Sustainability 2025, 17(17), 7820; https://doi.org/10.3390/su17177820 - 30 Aug 2025
Cited by 1 | Viewed by 1070
Abstract
The complex topography of China’s Tibetan Plateau mountainous roads, characterized by diverse curve types and frequent traffic accidents, significantly impacts the safety and sustainability of the transportation system. To enhance driving safety on these mountain roads and promote low-carbon, resilient transportation development, this study investigates the mechanisms through which different curve types affect driving safety and proposes optimization strategies based on interpretable machine learning methods. Focusing on three typical curve types in plateau regions, drone high-altitude photography was employed to capture footage of three specific curves along China’s National Highway G318. Oblique photography was utilized to acquire road environment information, from which 11 data indicators were extracted. Subsequently, 8 indicators, including cornering preference and vehicle type, were designated as explanatory variables, the curve type indicator was set as the dependent variable, and the remaining indicators were established as safety assessment indicators. Linear models (logistic regression, ridge regression) and non-linear models (Random Forest, LightGBM, XGBoost) were used to conduct model comparison and factor analysis. Ultimately, three non-linear models were selected, employing an explainability-oriented dynamic ensemble learning strategy (X-DEL) to evaluate the three curve types. The results indicate that non-linear models outperform linear models in terms of accuracy and scene adaptability. The explainability-oriented dynamic ensemble learning strategy (X-DEL) is beneficial for the construction of driving safety models and factor analysis on Tibetan Plateau mountainous roads. Furthermore, the contribution of indicators to driving safety varies across different curve types. This research not only deepens the scientific understanding of safety issues on plateau mountainous roads but, more importantly, its proposed solutions directly contribute to building safer, more efficient, and environmentally friendly transportation systems, thereby providing crucial impetus for sustainable transportation and high-quality regional development in the Tibetan Plateau. Full article

19 pages, 9284 KB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Cited by 1 | Viewed by 2602
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules. It uses a Selective Kernel Network (SKNet) to adjust receptive fields dynamically and a Partial Convolution (PConv) module to improve spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations with U-Net show that UAV-YOLOv12 improves by 7.1% and 9.5%. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that the proposed UAV-YOLOv12 can achieve high accuracy and robustness in diverse road environments and object scales. Full article
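The Selective Kernel idea referenced above lets the network weight branches with different receptive fields per channel; the sketch below is a simplified SKNet-style module for illustration, not the exact block used in the paper.

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Simplified SKNet-style fusion of two receptive fields; an illustration
    of the idea, not the paper's module."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # ~5x5 field
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.attn = nn.Linear(hidden, 2 * channels)      # one weight per branch and channel

    def forward(self, x):
        b, c, _, _ = x.shape
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                   # global context, shape (b, c)
        a = torch.softmax(self.attn(self.squeeze(s)).view(b, 2, c), dim=1)
        return a[:, 0].reshape(b, c, 1, 1) * u3 + a[:, 1].reshape(b, c, 1, 1) * u5

feat = torch.randn(2, 64, 80, 80)                        # dummy feature map
print(SelectiveKernel(64)(feat).shape)                   # torch.Size([2, 64, 80, 80])
```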

22 pages, 7106 KB  
Article
Enhancing Highway Scene Understanding: A Novel Data Augmentation Approach for Vehicle-Mounted LiDAR Point Cloud Segmentation
by Dalong Zhou, Yuanyang Yi, Yu Wang, Zhenfeng Shao, Yanjun Hao, Yuyan Yan, Xiaojin Zhao and Junkai Guo
Remote Sens. 2025, 17(13), 2147; https://doi.org/10.3390/rs17132147 - 23 Jun 2025
Viewed by 963
Abstract
The intelligent extraction of highway assets is pivotal for advancing transportation infrastructure and autonomous systems, yet traditional methods relying on manual inspection or 2D imaging struggle with sparse, occluded environments, and class imbalance. This study proposes an enhanced MinkUNet-based framework to address data scarcity, occlusion, and imbalance in highway point cloud segmentation. A large-scale dataset (PEA-PC Dataset) was constructed, covering six key asset categories, addressing the lack of specialized highway datasets. A hybrid conical masking augmentation strategy was designed to simulate natural occlusions and enhance local feature retention, while semi-supervised learning prioritized foreground differentiation. The experimental results showed that the overall mIoU reached 73.8%, with the IoU of bridge railings and emergency obstacles exceeding 95%. The IoU of columnar assets increased from 2.6% to 29.4% through occlusion perception enhancement, demonstrating the effectiveness of this method in improving object recognition accuracy. The framework balances computational efficiency and robustness, offering a scalable solution for sparse highway scenes. However, challenges remain in segmenting vegetation-occluded pole-like assets due to partial data loss. This work highlights the efficacy of tailored augmentation and semi-supervised strategies in refining 3D segmentation, advancing applications in intelligent transportation and digital infrastructure. Full article
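The abstract does not spell out the conical masking procedure, but one plausible reading, dropping points inside a randomly oriented cone from the sensor origin to mimic occlusion, can be sketched as follows; treat it as an assumption-laden illustration rather than the paper's method.

```python
import numpy as np

def conical_mask(points, max_half_angle_deg=8.0, rng=None):
    """Drop points inside a randomly oriented cone emanating from the sensor
    origin; points is an (N, 3) array of x, y, z coordinates."""
    if rng is None:
        rng = np.random.default_rng()
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                           # random cone axis
    half_angle = np.radians(rng.uniform(2.0, max_half_angle_deg))

    dist = np.linalg.norm(points, axis=1) + 1e-9
    cos_to_axis = (points @ axis) / dist                   # cosine of angle to the axis
    inside = cos_to_axis > np.cos(half_angle)
    return points[~inside]                                 # keep points outside the cone

cloud = np.random.default_rng(1).uniform(-50, 50, size=(100_000, 3))
print(len(conical_mask(cloud)), "of", len(cloud), "points kept")
```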

24 pages, 6003 KB  
Article
ADSAP: An Adaptive Speed-Aware Trajectory Prediction Framework with Adversarial Knowledge Transfer
by Cheng Da, Yongsheng Qian, Junwei Zeng, Xuting Wei and Futao Zhang
Electronics 2025, 14(12), 2448; https://doi.org/10.3390/electronics14122448 - 16 Jun 2025
Viewed by 766
Abstract
Accurate trajectory prediction of surrounding vehicles is a fundamental challenge in autonomous driving, requiring sophisticated modeling of complex vehicle interactions, traffic dynamics, and contextual dependencies. This paper introduces Adaptive Speed-Aware Prediction (ADSAP), a novel trajectory prediction framework that advances the state of the art through innovative mechanisms for adaptive attention modulation and knowledge transfer. At its core, ADSAP employs an adaptive deformable speed-aware pooling mechanism that dynamically adjusts the model’s attention distribution and receptive field based on instantaneous vehicle states and interaction patterns. This adaptive architecture enables fine-grained modeling of diverse traffic scenarios, from sparse highway conditions to dense urban environments. The framework incorporates a sophisticated speed-aware multi-scale feature aggregation module that systematically combines spatial and temporal information across multiple scales, facilitating comprehensive scene understanding and robust trajectory prediction. To bridge the gap between model complexity and computational efficiency, we propose an adversarial knowledge distillation approach that effectively transfers learned representations and decision-making strategies from a high-capacity teacher model to a lightweight student model. This novel distillation mechanism preserves prediction accuracy while significantly reducing computational overhead, making the framework suitable for real-world deployment. Extensive empirical evaluation on the large-scale NGSIM and highD naturalistic driving datasets demonstrates ADSAP’s superior performance. The ADSAP framework achieves an 18.7% reduction in average displacement error and a 22.4% improvement in final displacement error compared to state-of-the-art methods while maintaining consistent performance across varying traffic densities (0.05–0.85 vehicles/meter) and speed ranges (0–35 m/s). Moreover, ADSAP exhibits robust generalization capabilities across different driving scenarios and weather conditions, with the lightweight student model achieving 95% of the teacher model’s accuracy while offering a 3.2× reduction in inference time. Comprehensive experimental results supported by detailed ablation studies and statistical analyses validate ADSAP’s effectiveness in addressing the trajectory prediction challenge. Our framework provides a novel perspective on integrating adaptive attention mechanisms with efficient knowledge transfer, contributing to the development of more reliable and intelligent autonomous driving systems. Significant improvements in prediction accuracy, computational efficiency, and generalization capability demonstrate ADSAP’s potential ability to advance autonomous driving technology. Full article
(This article belongs to the Special Issue Advances in AI Engineering: Exploring Machine Learning Applications)
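The displacement-error percentages above refer to the standard ADE/FDE trajectory metrics, which can be computed as sketched below on made-up trajectories.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement errors for one trajectory.
    pred, gt: (T, 2) arrays of predicted and ground-truth x, y positions."""
    dists = np.linalg.norm(pred - gt, axis=1)   # per-step Euclidean error
    return dists.mean(), dists[-1]              # ADE, FDE

# Made-up 5 s horizon sampled at 0.2 s (25 steps).
t = np.linspace(0.2, 5.0, 25)
gt = np.stack([20.0 * t, 0.05 * t**2], axis=1)
pred = gt + np.random.default_rng(0).normal(0.0, 0.4, gt.shape)
ade, fde = ade_fde(pred, gt)
print(f"ADE = {ade:.2f} m, FDE = {fde:.2f} m")
```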

21 pages, 2352 KB  
Article
Weak-Cue Mixed Similarity Matrix and Boundary Expansion Clustering for Multi-Target Multi-Camera Tracking Systems in Highway Scenarios
by Sixian Chan, Shenghao Ni, Zheng Wang, Yuan Yao, Jie Hu, Xiaoxiang Chen and Suqiang Li
Electronics 2025, 14(9), 1896; https://doi.org/10.3390/electronics14091896 - 7 May 2025
Viewed by 723
Abstract
In highway scenarios, factors such as high-speed vehicle movement, lighting conditions, and positional changes significantly affect the quality of trajectories in multi-object tracking. This, in turn, impacts the trajectory clustering process within the multi-target multi-camera tracking (MTMCT) system. To address this challenge, we present the weak-cue mixed similarity matrix and boundary expansion clustering (WCBE) MTMCT system. First, the weak-cue mixed similarity matrix (WCMSM) enhances the original trajectory features by incorporating weak cues. Then, considering the practical scene and incorporating richer information, the boundary expansion clustering (BEC) algorithm improves trajectory clustering performance by taking the distribution of trajectory observation points into account. Finally, to validate the effectiveness of our proposed method, we conduct experiments on both the Highway Surveillance Traffic (HST) dataset developed by our team and the public CityFlow dataset. The results demonstrate promising outcomes, validating the efficacy of our approach. Full article
(This article belongs to the Special Issue Deep Learning-Based Scene Text Detection)

23 pages, 6015 KB  
Article
FIRE-YOLOv8s: A Lightweight and Efficient Algorithm for Tunnel Fire Detection
by Lingyu Bu, Wenfeng Li, Hongmin Zhang, Hong Wang, Qianqian Tian and Yunteng Zhou
Fire 2025, 8(4), 125; https://doi.org/10.3390/fire8040125 - 24 Mar 2025
Viewed by 1391
Abstract
To address the challenges of high algorithmic complexity and low accuracy in current fire detection algorithms for highway tunnel scenarios, this paper proposes a lightweight tunnel fire detection algorithm, FIRE-YOLOv8s. First, a novel feature extraction module, P-C2f, is designed using partial convolution (PConv). By dynamically determining the convolution kernel’s range of action, the module significantly reduces the model’s computational load and parameter count. Additionally, the ADown module is introduced for downsampling, employing a lightweight and branching design to minimize computational requirements while preserving essential feature information. Secondly, the neck feature fusion network is redesigned using a lightweight CNN-based cross-scale fusion module (CCFF). This module leverages lightweight convolution operations to achieve efficient cross-scale feature fusion, further reducing model complexity and enhancing the fusion efficiency of multi-scale features. Finally, the dynamic head detection head is introduced, incorporating multiple self-attention mechanisms to better capture key information in complex scenes. This improvement enhances the model’s accuracy and robustness in detecting fire targets under challenging conditions. Experimental results on the self-constructed tunnel fire dataset demonstrate that, compared to the baseline model YOLOv8s, FIRE-YOLOv8s reduces the computational load by 47.2%, decreases the number of parameters by 52.2%, and reduces the model size to 50% of the original, while achieving a 4.8% improvement in accuracy and a 1.7% increase in mAP@0.5. Furthermore, deployment experiments on a tunnel emergency firefighting robot platform validate the algorithm’s practical applicability, confirming its effectiveness in real-world scenarios. Full article
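Partial convolution (PConv), the operator P-C2f builds on, convolves only a fraction of the channels and passes the rest through unchanged, which is where the parameter and FLOP savings come from; the sketch below shows the generic operator, not the paper's exact P-C2f block.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution in the FasterNet sense: convolve only a fraction of
    the channels and pass the rest through untouched. Generic sketch."""
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        self.conv_ch = max(int(channels * ratio), 1)       # channels actually convolved
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, 3, padding=1, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.conv_ch, x.shape[1] - self.conv_ch], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)       # convolved part + identity part

x = torch.randn(1, 128, 40, 40)
print(PConv(128)(x).shape)    # torch.Size([1, 128, 40, 40]) at a fraction of the FLOPs
```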

19 pages, 30513 KB  
Article
From Detection to Action: A Multimodal AI Framework for Traffic Incident Response
by Afaq Ahmed, Muhammad Farhan, Hassan Eesaar, Kil To Chong and Hilal Tayara
Drones 2024, 8(12), 741; https://doi.org/10.3390/drones8120741 - 9 Dec 2024
Cited by 15 | Viewed by 5948
Abstract
With the rising incidence of traffic accidents and growing environmental concerns, the demand for advanced systems to ensure traffic and environmental safety has become increasingly urgent. This paper introduces an automated highway safety management framework that integrates computer vision and natural language processing for real-time monitoring, analysis, and reporting of traffic incidents. The system not only identifies accidents but also aids in coordinating emergency responses, such as dispatching ambulances, fire services, and police, while simultaneously managing traffic flow. The approach begins with the creation of a diverse highway accident dataset, combining public datasets with drone and CCTV footage. YOLOv11s is retrained on this dataset to enable real-time detection of critical traffic elements and anomalies, such as collisions and fires. A vision–language model (VLM), Moondream2, is employed to generate detailed scene descriptions, which are further refined by a large language model (LLM), GPT 4-Turbo, to produce concise incident reports and actionable suggestions. These reports are automatically sent to relevant authorities, ensuring prompt and effective response. The system’s effectiveness is validated through the analysis of diverse accident videos and zero-shot simulation testing within the Webots environment. The results highlight the potential of combining drone and CCTV imagery with AI-driven methodologies to improve traffic management and enhance public safety. Future work will include refining detection models, expanding dataset diversity, and deploying the framework in real-world scenarios using live drone and CCTV feeds. This study lays the groundwork for scalable and reliable solutions to address critical traffic safety challenges. Full article

19 pages, 4032 KB  
Article
An Algorithm for Predicting Vehicle Behavior in High-Speed Scenes Using Visual and Dynamic Graphical Neural Network Inference
by Menghao Li, Miao Liu, Weiwei Zhang, Wenfeng Guo, Enqing Chen, Chunguang Hu and Maomao Zhang
Appl. Sci. 2024, 14(19), 8873; https://doi.org/10.3390/app14198873 - 2 Oct 2024
Cited by 4 | Viewed by 1774
Abstract
Accidents caused by vehicles changing lanes occur frequently on highways. Moreover, frequent lane changes can severely impact traffic flow during peak commuting hours and on busy roads. A novel framework based on a multi-relational graph convolutional network (MR-GCN) is herein proposed to address these challenges. First, a dynamic multilevel relational graph was designed to describe interactions between vehicles and road objects at different spatio-temporal granularities, with real-time updates to edge weights to enhance understanding of complex traffic scenarios. Second, an improved spatio-temporal interaction graph generation method was introduced, focusing on spatio-temporal variations and capturing complex interaction patterns to enhance prediction accuracy and adaptability. Finally, by integrating a dynamic multi-relational graph convolutional network (DMR-GCN) with dynamic scene sensing and interaction learning mechanisms, the framework enables real-time updates of complex vehicle relationships, thereby improving behavior prediction’s accuracy and real-time performance. Experimental validation on multiple benchmark datasets, including KITTI, Apollo, and Indian, showed that our algorithmic framework achieves significant performance improvements in vehicle behavior prediction tasks, with mAP, Recall, and F1 scores reaching 90%, 88%, and 89%, respectively, outperforming existing algorithms. Additionally, the model achieved an mAP of 91%, a Recall of 89%, and an F1 score of 90% under congested road conditions in a self-collected high-speed traffic scenario dataset, further demonstrating its robustness and adaptability in high-speed traffic conditions. These results show that the proposed model is highly practical and stable in real-world applications such as traffic control systems and self-driving vehicles, providing strong support for efficient vehicle behavior prediction. Full article

29 pages, 7000 KB  
Article
Research on Vehicle-Road Intelligent Capacity Redistribution and Cost Sharing in the Context of Collaborative Intelligence
by Guangyu Zhu, Fuquan Zhao, Haokun Song, Wang Zhang and Zongwei Liu
Appl. Sci. 2024, 14(16), 7286; https://doi.org/10.3390/app14167286 - 19 Aug 2024
Cited by 2 | Viewed by 2049
Abstract
The vehicle-road collaborative intelligence approach has become an industry consensus. It can efficiently tackle the technical hurdles and reduce the performance requirements and costs of on-board perception and computing devices. There is a need for in-depth quantitative studies to optimize the allocation of vehicle-road intelligent capabilities for collaborative intelligence. However, current research tends to focus more on qualitative analysis, and there is little research on the redistribution of vehicle and roadside intelligent capabilities. In this paper, we present a model for distributing perception and computing capabilities between vehicle-side and roadside, ensuring to meet the needs of various autonomous driving levels. Meanwhile, the collaborative intelligence approach will also introduce the costs of intelligent infrastructure deployment, energy, and maintenance. Different roads have varying scene characteristics and usage intensities. It is necessary to conduct a cost-effectiveness analysis of the intelligent deployment of different road types. A vehicle-road cost allocation model is developed based on the lifecycle traveled distance of vehicles and the lifecycle traffic flow of various roads to evaluate the function-cost effectiveness. Our study presents several vehicle-road intelligent schemes that meet the needs of various autonomous driving levels and selects Beijing for case analysis. The results indicate that primary intelligent infrastructure can reduce the lifecycle cost of the vehicle-side intelligent scheme for intermediate autonomous driving from ¥65,301 to ¥37,703, and advanced intelligent infrastructure can reduce the lifecycle cost for advanced autonomous driving from ¥126,938 to ¥42,180. Considering the distributed cost of vehicle-side and roadside, urban roads in Beijing have higher function-cost effectiveness compared to highways, especially urban expressways, which are expected to generate 43.3 times the vehicle-function-cost benefits after the advanced intelligent upgrades. The corresponding research findings can serve as a reference for city managers to make decisions on intelligent road deployment. Full article
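One plausible reading of the allocation principle described above (spreading a road's lifecycle infrastructure cost over the vehicle-kilometres it carries, then charging each vehicle for its own kilometres) is sketched below with entirely made-up figures; the paper's model is richer than this.

```python
# All figures below are invented purely for illustration.
roads = {
    # road type: (lifecycle infrastructure cost [CNY], lifecycle traffic
    #             flow [vehicles], road length [km])
    "urban_expressway": (8.0e8, 6.0e8, 20.0),
    "highway":          (1.2e9, 2.5e8, 60.0),
}

def allocated_cost(vehicle_km_by_road):
    """Infrastructure cost share of one vehicle, given its lifecycle km per road type."""
    total = 0.0
    for road, veh_km in vehicle_km_by_road.items():
        cost, flow, length = roads[road]
        cost_per_vehicle_km = cost / (flow * length)   # spread cost over all vehicle-km
        total += cost_per_vehicle_km * veh_km
    return total

vehicle = {"urban_expressway": 40_000.0, "highway": 60_000.0}  # lifecycle km per road type
print(f"allocated infrastructure cost: {allocated_cost(vehicle):,.0f} CNY")
```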

18 pages, 4094 KB  
Article
Proposing an Efficient Deep Learning Algorithm Based on Segment Anything Model for Detection and Tracking of Vehicles through Uncalibrated Urban Traffic Surveillance Cameras
by Danesh Shokri, Christian Larouche and Saeid Homayouni
Electronics 2024, 13(14), 2883; https://doi.org/10.3390/electronics13142883 - 22 Jul 2024
Cited by 12 | Viewed by 3162
Abstract
In this study, we present a novel approach leveraging the segment anything model (SAM) for the efficient detection and tracking of vehicles in urban traffic surveillance systems by utilizing uncalibrated low-resolution highway cameras. This research addresses the critical need for accurate vehicle monitoring in intelligent transportation systems (ITS) and smart city infrastructure. Traditional methods often struggle with the variability and complexity of urban environments, leading to suboptimal performance. Our approach harnesses the power of SAM, an advanced deep learning-based image segmentation algorithm, to significantly enhance the detection accuracy and tracking robustness. Through extensive testing and evaluation on two datasets of 511 highway cameras from Quebec, Canada and NVIDIA AI City Challenge Track 1, our algorithm achieved exceptional performance metrics including a precision of 89.68%, a recall of 97.87%, and an F1-score of 93.60%. These results represent a substantial improvement over existing state-of-the-art methods such as the YOLO version 8 algorithm, the single shot detector (SSD), and the region-based convolutional neural network (RCNN). This advancement not only highlights the potential of SAM in real-time vehicle detection and tracking applications, but also underscores its capability to handle the diverse and dynamic conditions of urban traffic scenes. The implementation of this technology can lead to improved traffic management, reduced congestion, and enhanced urban mobility, making it a valuable tool for modern smart cities. The outcomes of this research pave the way for future advancements in remote sensing and photogrammetry, particularly in the realm of urban traffic surveillance and management. Full article
(This article belongs to the Special Issue Vehicle Technologies for Sustainable Smart Cities and Societies)
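The reported precision, recall, and F1-score follow the standard definitions; the small sketch below (with made-up detection counts chosen to roughly reproduce the abstract's figures) shows how F1 is the harmonic mean of the other two.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

# Made-up counts chosen to land near the abstract's reported figures: the
# F1-score is just the harmonic mean of precision and recall.
print(detection_metrics(tp=9200, fp=1059, fn=200))   # ~ (0.897, 0.979, 0.936)
```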

24 pages, 7522 KB  
Article
A Novel Robust H∞ Control Approach Based on Vehicle Lateral Dynamics for Practical Path Tracking Applications
by Jie Wang, Baichao Wang, Congzhi Liu, Litong Zhang and Liang Li
World Electr. Veh. J. 2024, 15(7), 293; https://doi.org/10.3390/wevj15070293 - 30 Jun 2024
Cited by 2 | Viewed by 2452
Abstract
This paper proposes a robust lateral control scheme for the path tracking of autonomous vehicles. Considering the discrepancies between the model parameters and the actual values of the vehicle and the fluctuation of parameters during driving, the norm-bounded uncertainty is utilized to deal with the uncertainty of model parameters. Because some state variables in the model are difficult to measure, an H∞ observer is designed to estimate state variables and provide accurate state information to improve the robustness of path tracking. An H∞ state feedback controller is proposed to suppress system nonlinearity and uncertainty and produce the desired steering wheel angle to solve the path tracking problem. A feedforward control is designed to deal with road curvature and further reduce tracking errors. In summary, a path tracking method with H∞ performance is established based on the linear matrix inequality (LMI) technique, and the gains in observer and controller can be obtained directly. The hardware-in-the-loop (HIL) test is built to validate the real-time processing performance of the proposed method to ensure excellent practical application potential, and the effectiveness of the proposed control method is validated through the utilization of urban road and highway scenes. The experimental results indicate that the suggested control approach can track the desired trajectory more precisely compared with the model predictive control (MPC) method and make tracking errors within a small range in both urban and highway scenarios. Full article
(This article belongs to the Special Issue Dynamics, Control and Simulation of Electrified Vehicles)
