Search Results (619)

Search Parameters:
Keywords = road object detection

30 pages, 7812 KB  
Article
Drone-Based Road Marking Condition Mapping: A Drone Imaging and Geospatial Pipeline for Asset Management
by Minh Dinh Bui, Jubin Lee, Kanghyeok Choi, HyunSoo Kim and Changjae Kim
Drones 2026, 10(2), 77; https://doi.org/10.3390/drones10020077 - 23 Jan 2026
Abstract
This study presents a drone-based method for assessing the condition of road markings from high-resolution imagery acquired by a UAV. A DJI Matrice 300 RTK (Real-Time Kinematic) equipped with a Zenmuse P1 camera (DJI, China) is flown over urban road corridors to capture images with centimeter-level ground sampling distance. In contrast to common approaches that rely on vehicle-mounted or street-view cameras, using a UAV reduces survey time and deployment effort while still providing views that are suitable for marking assessment. The flight altitude, overlap, and corridor pattern are chosen to limit occlusions from traffic and building shadows while preserving the resolution required for condition assessment. From these images, the method locates individual markings, assigns a class to each marking, and estimates its level of deterioration. Candidate markings are first detected with YOLOv9 on the UAV imagery. The detections are cropped and segmented, which refines marking boundaries and thin structures. The condition is then estimated at the pixel level by modeling gray-level statistics with kernel density estimation (KDE) and a two-component Gaussian mixture model (GMM) to separate intact and distressed material. Subsequently, we compute a per-instance damage ratio that summarizes the proportion of degraded pixels within each marking. All results are georeferenced to map coordinates using a 3D reference model, allowing visualization on base maps and integration into road asset inventories. Experiments on unseen urban areas report detection performance (precision, recall, mean average precision) and segmentation performance (intersection over union), and analyze the stability of the damage ratio and processing time. The findings indicate that the drone-based method can identify road markings, estimate their condition, and attach each record to geographic space in a way that is useful for inspection scheduling and maintenance planning.
(This article belongs to the Special Issue Urban Traffic Monitoring and Analysis Using UAVs)
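The per-instance damage ratio described in this abstract lends itself to a compact sketch. The following is a minimal illustration of the idea, not the authors' pipeline: it fits a two-component Gaussian mixture to the gray levels inside a hypothetical marking mask and reports the fraction of pixels assigned to the darker, presumably distressed, component.

```python
# Minimal sketch of a GMM-based damage ratio, assuming `gray` is a 2-D
# grayscale image and `mask` a boolean array covering one detected marking.
# This illustrates the idea, not the authors' implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

def damage_ratio(gray: np.ndarray, mask: np.ndarray) -> float:
    pixels = gray[mask].reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    labels = gmm.predict(pixels)
    # Treat the component with the lower mean gray level as "distressed".
    distressed = int(np.argmin(gmm.means_.ravel()))
    return float(np.mean(labels == distressed))

# Synthetic example: bright intact paint mixed with a darker worn band.
rng = np.random.default_rng(0)
gray = rng.normal(200, 10, (64, 64))       # intact paint
gray[:20] = rng.normal(120, 15, (20, 64))  # worn region (~31% of pixels)
print(f"damage ratio ~ {damage_ratio(gray, np.ones_like(gray, bool)):.2f}")
```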
32 pages, 2129 KB  
Article
Artificial Intelligence-Based Depression Detection
by Gabor Kiss and Patrik Viktor
Sensors 2026, 26(2), 748; https://doi.org/10.3390/s26020748 - 22 Jan 2026
Abstract
Decisions made by pilots and drivers suffering from depression can endanger the lives of hundreds of people, as demonstrated by the tragedies of Germanwings flight 9525 and Air India flight 171. Since the detection of depression is currently based largely on subjective self-reporting, there is an urgent need for fast, objective, and reliable detection methods. In our study, we present an artificial intelligence-based system that combines iris-based identification with the analysis of pupillometric and eye movement biomarkers, enabling the real-time detection of physiological signs of depression before driving or flying. The two-module model was evaluated based on data from 242 participants: the iris identification module operated with an Equal Error Rate of less than 0.5%, while the depression-detecting CNN-LSTM network achieved 89% accuracy and an AUC value of 0.94. Compared to the neutral state, depressed individuals responded to negative news with significantly greater pupil dilation (+27.9% vs. +18.4%), while showing a reduced or minimal response to positive stimuli (−1.3% vs. +6.2%). This was complemented by slower saccadic movement and longer fixation time, which is consistent with the cognitive distortions characteristic of depression. Our results indicate that pupillometric deviations relative to individual baselines can be reliably detected and used with high accuracy for depression screening. The presented system offers a preventive safety solution that could reduce the number of accidents caused by human error related to depression in road and air traffic in the future.
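The pupillometric biomarkers reported above (e.g., +27.9% dilation to negative stimuli) are deviations relative to an individual baseline. A minimal sketch of that normalization, using hypothetical pupil-diameter traces:

```python
# Sketch of baseline-relative pupil response, the normalization behind
# figures such as "+27.9% dilation". The arrays are hypothetical diameter
# traces in millimetres from baseline and stimulus windows.
import numpy as np

def relative_response(baseline_mm: np.ndarray, stimulus_mm: np.ndarray) -> float:
    """Percent change in mean pupil diameter versus the individual baseline."""
    base = baseline_mm.mean()
    return 100.0 * (stimulus_mm.mean() - base) / base

baseline = np.array([3.1, 3.0, 3.2, 3.1])
negative_news = np.array([3.9, 4.0, 3.8, 4.0])
print(f"{relative_response(baseline, negative_news):+.1f}%")  # ~ +26.6%
```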
26 pages, 6864 KB  
Article
OCDBMamba: A Robust and Efficient Road Pothole Detection Framework with Omnidirectional Context and Consensus-Based Boundary Modeling
by Feng Ling, Yunfeng Lin, Weijie Mao and Lixing Tang
Sensors 2026, 26(2), 632; https://doi.org/10.3390/s26020632 - 17 Jan 2026
Abstract
Reliable road pothole detection remains challenging in complex environments, where low contrast, shadows, water films, and strong background textures cause frequent false alarms, missed detections, and boundary instability. Thin rims and adjacent objects further complicate localization, and model robustness often deteriorates across regions and sensor domains. To address these issues, we propose OCDBMamba, a unified and efficient framework that integrates omnidirectional context modeling with consensus-driven boundary selection. Specifically, we introduce the following: (1) an Omnidirectional Channel-Selective Scanning (OCS) mechanism that aggregates long-range structural cues by performing multidirectional scans and channel similarity fusion with cross-directional consistency, capturing comprehensive spatial dependencies at near-linear complexity and (2) a Dual-Branch Consensus Thresholding (DBCT) module that enforces branch-level agreement with sparsity-regulated adaptive thresholds and boundary consistency constraints, effectively preserving true rims while suppressing reflections and redundant responses. Extensive experiments on normal, shadowed, wet, low-contrast, and texture-rich subsets yield 90.7% mAP50, 67.8% mAP50:95, a precision of 0.905, and a recall of 0.812 with 13.1 GFLOPs, outperforming YOLOv11n by 5.4% and 5.6%, respectively. The results demonstrate more stable localization and enhanced robustness under diverse conditions, validating the synergy of OCS and DBCT for practical road inspection and on-vehicle perception scenarios.
(This article belongs to the Section Intelligent Sensors)
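The "omnidirectional" scanning idea (serializing a 2-D feature map along several directions so a sequence model can consume it) can be illustrated independently of the paper's architecture. A hedged numpy sketch of four scan orders only; the OCS module's channel-similarity fusion is not reproduced here.

```python
# Illustration of multidirectional scanning: the same HxW feature map is
# serialized in four orders before sequence modeling. This shows only the
# scan-order idea, not the OCS module itself.
import numpy as np

def four_direction_scans(feat: np.ndarray) -> list[np.ndarray]:
    """Return the feature map flattened along four scan directions."""
    return [
        feat.reshape(-1),           # left-to-right, top-to-bottom
        feat[:, ::-1].reshape(-1),  # right-to-left
        feat.T.reshape(-1),         # top-to-bottom, column-major
        feat.T[:, ::-1].reshape(-1),# bottom-to-top, column-major
    ]

feat = np.arange(12).reshape(3, 4)
for seq in four_direction_scans(feat):
    print(seq)
```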
28 pages, 3390 KB  
Article
SDC-YOLOv8: An Improved Algorithm for Road Defect Detection Through Attention-Enhanced Feature Learning and Adaptive Feature Reconstruction
by Hao Yang, Yulong Song, Yue Liang, Enhao Tang and Danyang Cao
Sensors 2026, 26(2), 609; https://doi.org/10.3390/s26020609 - 16 Jan 2026
Abstract
Road defect detection is essential for timely road damage repair and traffic safety assurance. However, existing object detection algorithms suffer from insufficient accuracy in detecting small road surface defects and are prone to missed detections and false alarms under complex lighting and background conditions. To address these challenges, this study proposes SDC-YOLOv8, an improved YOLOv8-based algorithm for road defect detection that employs attention-enhanced feature learning and adaptive feature reconstruction. The model incorporates three key innovations: (1) an SPPF-LSKA module that integrates Fast Spatial Pyramid Pooling with Large Separable Kernel Attention to enhance multi-scale feature representation and irregular defect modeling capabilities; (2) DySample dynamic upsampling that replaces conventional interpolation methods for adaptive feature reconstruction with reduced computational cost; and (3) a Coordinate Attention module strategically inserted to improve spatial localization accuracy under complex conditions. Comprehensive experiments on a public pothole dataset demonstrate that SDC-YOLOv8 achieves 78.0% mAP@0.5, 81.0% Precision, and 70.7% Recall while maintaining real-time performance at 85 FPS. Compared to the baseline YOLOv8n model, the proposed method improves mAP@0.5 by 2.0 percentage points, Precision by 3.3 percentage points, and Recall by 1.8 percentage points, yielding an F1 score of 75.5%. These results demonstrate that SDC-YOLOv8 effectively enhances small-target detection accuracy while preserving real-time processing capability, offering a practical and efficient solution for intelligent road defect detection applications.
(This article belongs to the Section Fault Diagnosis & Sensors)
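The reported F1 score follows directly from the quoted precision and recall; a quick check of the harmonic mean reproduces the paper's 75.5%:

```python
# Verifying the reported F1 from the quoted precision and recall:
# F1 is the harmonic mean, 2PR / (P + R).
precision, recall = 0.810, 0.707
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {100 * f1:.1f}%")  # F1 = 75.5%
```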
24 pages, 28157 KB  
Article
YOLO-ERCD: An Upgraded YOLO Framework for Efficient Road Crack Detection
by Xiao Li, Ying Chu, Thorsten Chan, Wai Lun Lo and Hong Fu
Sensors 2026, 26(2), 564; https://doi.org/10.3390/s26020564 - 14 Jan 2026
Abstract
Efficient and reliable road damage detection is a critical component of intelligent transportation and infrastructure control systems that rely on visual sensing technologies. Existing road damage detection models face challenges such as missed detection of fine cracks, poor adaptability to lighting changes, and false positives under complex backgrounds. In this study, we propose an enhanced YOLO-based framework, YOLO-ERCD, designed to improve the accuracy and robustness of road crack detection from sensor-acquired image data. The datasets used in this work were collected from vehicle-mounted and traffic surveillance camera sensors, representing typical visual sensing systems in automated road inspection. The proposed architecture integrates three key components: (1) a residual convolutional block attention module, which preserves original feature information through residual connections while strengthening spatial and channel feature representation; (2) a channel-wise adaptive gamma correction module that models the nonlinear response of the human visual system to light intensity, adaptively enhancing brightness details for improved robustness under diverse lighting conditions; and (3) a visual focus noise modulation module that reduces background interference by selectively introducing noise, emphasizing damage-specific features. These three modules are specifically designed to address the limitations of YOLOv10 in feature representation, lighting adaptation, and background interference suppression, working synergistically to enhance the model’s detection accuracy and robustness, and closely aligning with the practical needs of road monitoring applications. Experimental results on both proprietary and public datasets demonstrate that YOLO-ERCD outperforms recent road damage detection models in accuracy and computational efficiency. The lightweight design also supports real-time deployment on edge sensing and control devices. These findings highlight the potential of integrating AI-based visual sensing and intelligent control, contributing to the development of robust, efficient, and perception-aware road monitoring systems.
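Adaptive gamma correction itself is a standard operation. The sketch below shows a common per-channel variant, where gamma is chosen so each channel's mean maps toward mid-gray; it is a generic stand-in for the paper's channel-wise module, not its implementation.

```python
# A common adaptive gamma heuristic, applied per channel: pick gamma so the
# channel's mean brightness maps toward mid-gray (0.5). Generic stand-in,
# not the paper's channel-wise adaptive gamma correction module.
import numpy as np

def adaptive_gamma(img: np.ndarray) -> np.ndarray:
    """img: float image in [0, 1], shape (H, W, C)."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        mean = np.clip(img[..., c].mean(), 1e-3, 1 - 1e-3)
        gamma = np.log(0.5) / np.log(mean)   # so that mean**gamma == 0.5
        out[..., c] = img[..., c] ** gamma
    return out

dark = np.full((4, 4, 3), 0.2)       # underexposed image
print(adaptive_gamma(dark)[0, 0])    # each channel lifted to ~0.5
```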
23 pages, 2965 KB  
Article
YOLO-LIO: A Real-Time Enhanced Detection and Integrated Traffic Monitoring System for Road Vehicles
by Rachmat Muwardi, Haiyang Zhang, Hongmin Gao, Mirna Yunita, Rizky Rahmatullah, Ahmad Musyafa, Galang Persada Nurani Hakim and Dedik Romahadi
Algorithms 2026, 19(1), 42; https://doi.org/10.3390/a19010042 - 4 Jan 2026
Abstract
Traffic violations and road accidents remain significant challenges in developing safe and efficient transportation systems. Despite technological advancements, improving vehicle detection accuracy and enabling real-time traffic management remain critical research priorities. This study proposes YOLO-LIO, an enhanced vehicle detection framework designed to address these challenges by improving small-object detection and optimizing real-time deployment. The system introduces multi-scale detection, virtual zone filtering, and efficient preprocessing techniques, including grayscale transformation, Laplacian variance calculation, and median filtering, to reduce computational complexity while maintaining high performance. YOLO-LIO was rigorously evaluated on five datasets: GRAM Road-Traffic Monitoring (99.55% accuracy), MAVD-Traffic (99.02%), UA-DETRAC (65.14%), KITTI (94.21%), and an Author Dataset (99.45%), consistently demonstrating superior detection capabilities across diverse traffic scenarios. Additional system features include vehicle counting using a dual-line detection strategy within a virtual zone and speed detection based on frame displacement and camera calibration. These enhancements enable the system to monitor traffic flow and vehicle speeds with high accuracy. YOLO-LIO was successfully deployed on Jetson Nano, a compact, energy-efficient hardware platform, proving its suitability for real-time, low-power embedded applications. The proposed system offers an accurate, scalable, and computationally efficient solution, advancing intelligent transportation systems and improving traffic safety management.
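The preprocessing steps named above are standard OpenCV operations. A minimal sketch of grayscale conversion, median filtering, and the variance-of-Laplacian sharpness score often used to skip blurred frames; the threshold value is illustrative, not the paper's setting.

```python
# Sketch of the named preprocessing steps using standard OpenCV calls:
# grayscale transform, median filter, and the variance-of-Laplacian score
# commonly used as a blur/sharpness gate. Threshold is illustrative.
import cv2

def frame_is_sharp(frame_bgr, threshold: float = 100.0) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 3)  # suppress salt-and-pepper noise
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness > threshold

# Usage: run detection only on frames that pass the sharpness gate.
# if frame_is_sharp(frame): detections = model(frame)
```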
23 pages, 32193 KB  
Article
Object Detection on Road: Vehicle’s Detection Based on Re-Training Models on NVIDIA-Jetson Platform
by Sleiter Ramos-Sanchez, Jinmi Lezama, Ricardo Yauri and Joyce Zevallos
J. Imaging 2026, 12(1), 20; https://doi.org/10.3390/jimaging12010020 - 1 Jan 2026
Abstract
The increasing use of artificial intelligence (AI) and deep learning (DL) techniques has driven advances in vehicle classification and detection applications for embedded devices with deployment constraints due to computational cost and response time. In the case of urban environments with high traffic congestion, such as the city of Lima, it is important to determine the trade-off between model accuracy, type of embedded system, and the dataset used. This study was developed using a methodology adapted from the CRISP-DM approach, which included the acquisition of traffic videos in the city of Lima, their segmentation, and manual labeling. Subsequently, three SSD-based detection models (MobileNetV1-SSD, MobileNetV2-SSD-Lite, and VGG16-SSD) were trained on the NVIDIA Jetson Orin NX 16 GB platform. The results show that the VGG16-SSD model achieved the highest average precision (mAP 90.7%), with a longer training time, while the MobileNetV1-SSD (512×512) model achieved comparable performance (mAP 90.4%) with a shorter time. Additionally, data augmentation through contrast adjustment improved the detection of minority classes such as Tuk-tuk and Motorcycle. The results indicate that, among the evaluated models, MobileNetV1-SSD (512×512) achieved the best balance between accuracy and computational load for its implementation in ADAS embedded systems in congested urban environments.
(This article belongs to the Special Issue Advances in Machine Learning for Computer Vision Applications)
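Contrast-based augmentation of the kind credited above with improving minority classes can be sketched in a few lines; the scaling factors here are illustrative, not the study's settings.

```python
# Sketch of contrast-adjustment augmentation (factors are illustrative,
# not the study's settings): scale pixel values about the image mean.
import numpy as np

def adjust_contrast(img: np.ndarray, factor: float) -> np.ndarray:
    """img: uint8 array; factor > 1 increases contrast, < 1 reduces it."""
    mean = img.mean()
    out = (img.astype(float) - mean) * factor + mean
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
augmented = [adjust_contrast(img, f) for f in (0.8, 1.0, 1.2)]
```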
23 pages, 1919 KB  
Article
Machine Learning Assessment of Crash Severity in ADS and ADAS-L2 Involved Crashes with NHTSA Data
by Nasim Samadi, Ramina Javid, Sanam Ziaei Ansaroudi, Neda Dehestanimonfared, Mojtaba Naseri and Mansoureh Jeihani
Safety 2026, 12(1), 2; https://doi.org/10.3390/safety12010002 - 23 Dec 2025
Abstract
As the deployment of Automated Driving Systems (ADS) and Advanced Driver Assistance Systems (ADAS-L2) expands, understanding their real-world safety performance becomes essential. This study examines the severity and contributing factors of crashes involving vehicles equipped with ADS and ADAS-L2 technologies using NHTSA data. Using machine learning models on crash datasets from 2021 to 2024, this research identifies patterns and risk factors influencing injury outcomes. After data preprocessing and handling missing values for severity classification, four models were trained: logistic regression, random forest, SVM, and XGBoost. XGBoost outperformed the others for both ADS and ADAS-L2, achieving the highest accuracy and recall. Variable importance analysis showed that for ADS crashes, interactions with other road users and poor lighting were the strongest predictors of injury severity, while for ADAS-L2 crashes, fixed object collisions and low light conditions were most influential. From a policy and engineering perspective, this study highlights the need for standardized crash reporting and improved ADS object detection and pedestrian response. It also emphasizes effective human–machine interface design and driver training for partial automation. Unlike previous research, this study conducts comparative model-based evaluations of both ADS and ADAS-L2 using recent crash reports to inform safety standards and policy frameworks.
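A hedged sketch of the modeling step: training an XGBoost classifier on tabular crash features and reading off feature importances. The column names and labels below are hypothetical placeholders, not actual NHTSA fields or results.

```python
# Sketch of an XGBoost severity model on tabular crash data. Feature names
# and labels are hypothetical placeholders, not actual NHTSA fields.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "lighting_dark": rng.integers(0, 2, 500),
    "fixed_object": rng.integers(0, 2, 500),
    "speed_limit": rng.integers(25, 75, 500),
})
y = rng.integers(0, 2, 500)  # 1 = injury crash (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
print(dict(zip(X.columns, model.feature_importances_)))  # variable importance
```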
21 pages, 1360 KB  
Article
A Real-Time Consensus-Free Accident Detection Framework for Internet of Vehicles Using Vision Transformer and EfficientNet
by Zineb Seghir, Lyamine Guezouli, Kamel Barka, Djallel Eddine Boubiche, Homero Toral-Cruz and Rafael Martínez-Peláez
AI 2026, 7(1), 4; https://doi.org/10.3390/ai7010004 - 22 Dec 2025
Abstract
Objectives: Traffic accidents cause severe social and economic impacts, demanding fast and reliable detection to minimize secondary collisions and improve emergency response. However, existing cloud-dependent detection systems often suffer from high latency and limited scalability, motivating the need for an edge-centric and consensus-free accident detection framework in IoV environments. Methods: This study presents a real-time accident detection framework tailored for Internet of Vehicles (IoV) environments. The proposed system forms an integrated IoV architecture combining on-vehicle inference, RSU-based validation, and asynchronous cloud reporting. The system integrates a lightweight ensemble of Vision Transformer (ViT) and EfficientNet models deployed on vehicle nodes to classify video frames. Accident alerts are generated only when both models agree (vehicle-level ensemble consensus), ensuring high precision. These alerts are transmitted to nearby Road Side Units (RSUs), which validate the events and broadcast safety messages without requiring inter-vehicle or inter-RSU consensus. Structured reports are also forwarded asynchronously to the cloud for long-term model retraining and risk analysis. Results: Evaluated on the CarCrash and CADP datasets, the framework achieves an F1-score of 0.96 with average decision latency below 60 ms, corresponding to an overall accuracy of 98.65% and demonstrating measurable improvement over single-model baselines. Conclusions: By combining on-vehicle inference, edge-based validation, and optional cloud integration, the proposed architecture offers both immediate responsiveness and adaptability, contrasting with traditional cloud-dependent approaches.
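The vehicle-level "ensemble consensus" rule (raise an alert only when both models agree) is easy to state as code. A sketch with placeholder model callables; the real system uses ViT and EfficientNet frame classifiers.

```python
# Sketch of the vehicle-level ensemble-consensus rule: an accident alert
# fires only when both frame classifiers agree. `vit` and `effnet` are
# stand-ins for the actual ViT / EfficientNet models.
from typing import Callable

def consensus_alert(frame, vit: Callable, effnet: Callable,
                    threshold: float = 0.5) -> bool:
    """Both models must score the frame above threshold to raise an alert."""
    return vit(frame) > threshold and effnet(frame) > threshold

# Usage with toy stand-ins:
print(consensus_alert(None, lambda f: 0.9, lambda f: 0.8))  # True: alert RSU
print(consensus_alert(None, lambda f: 0.9, lambda f: 0.2))  # False: suppress
```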
28 pages, 2342 KB  
Article
Federated Learning-Based Road Defect Detection with Transformer Models for Real-Time Monitoring
by Bushra Abro, Sahil Jatoi, Muhammad Zakir Shaikh, Enrique Nava Baro, Mariofanna Milanova and Bhawani Shankar Chowdhry
Computers 2026, 15(1), 6; https://doi.org/10.3390/computers15010006 - 22 Dec 2025
Abstract
This research article presents a novel road defect detection methodology that integrates deep learning techniques and a federated learning approach. Existing road defect detection systems heavily rely on manual inspection and sensor-based techniques, which are prone to errors. To overcome these limitations, a data-acquisition system utilizing a GoPro HERO 9 camera was used to capture high-quality videos and images of road surfaces. A comprehensive dataset consisting of multiple road defects, such as cracks, potholes, and uneven surfaces, was pre-processed and augmented for effective model training. A Real-Time Detection Transformer-based architecture was used, achieving mAP50 of 99.60% and mAP50-95 of 99.55% in cross-validation on road defect detection and object detection tasks. Federated learning enabled the model to be trained in a decentralized manner, enhancing data protection and scalability. The proposed system achieves higher detection accuracy for road defects by increasing speed and efficiency while enhancing scalability, which makes it a potential asset for real-time monitoring.
(This article belongs to the Section AI-Driven Innovations)
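The federated element typically reduces to FedAvg-style aggregation: clients train locally and a server averages their parameters. A minimal sketch under that assumption (the abstract does not specify the exact aggregation scheme), using plain numpy weight lists:

```python
# Minimal FedAvg-style aggregation sketch: the server averages client model
# weights, weighted by local dataset size. The abstract does not specify
# the exact scheme; this is the standard baseline.
import numpy as np

def fed_avg(client_weights: list[list[np.ndarray]],
            client_sizes: list[int]) -> list[np.ndarray]:
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two clients, one-layer "model" for illustration:
w1, w2 = [np.array([1.0, 2.0])], [np.array([3.0, 4.0])]
print(fed_avg([w1, w2], client_sizes=[100, 300]))  # [array([2.5, 3.5])]
```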
27 pages, 5395 KB  
Article
Unraveling the Impact Mechanisms of Built Environment on Urban Vitality: Integrating Scale, Heterogeneity, and Interaction Effects
by Xiji Jiang, Jialin Tian, Jiaqi Li, Dan Ye, Wenlong Lan, Dandan Wu, Naiji Tian and Jie Yin
Buildings 2026, 16(1), 29; https://doi.org/10.3390/buildings16010029 - 21 Dec 2025
Abstract
The impact of the built environment on urban vitality is multifaceted, yet a holistic understanding that simultaneously considers its scale dependence, spatial heterogeneity, and interactive mechanisms remains limited. To unravel these multi-scalar mechanisms, this study develops an integrated analytical framework. Taking Xi’an, China, as a case study, we first construct a multidimensional built environment indicator system grounded in Jane Jacobs’ theory of vitality. Empirically, we employ the Optimal Parameters-based GeoDetector (OPGD) to objectively identify the optimal spatial scale and detect non-linear and interaction effects. Meanwhile, the Multiscale Geographically Weighted Regression (MGWR) model is used to delineate spatial heterogeneity. Our findings systematically unravel the complex mechanisms: (1) The optimal analysis scale is identified as a 2 km grid; (2) All elements significantly influence vitality, but through distinct linear or non-linear pathways; (3) The effects of attraction density, road network structure, and bus stop density exhibit significant spatial heterogeneity; and (4) Third place density and population density act as key catalysts, non-linearly enhancing the effects of other elements. This research presents a synthesized perspective and nuanced evidence for precision urban regeneration, demonstrating the necessity of integrating scale, heterogeneity, and interaction to understand the drivers of urban vitality.
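The GeoDetector factor detector at the core of OPGD reduces to the q statistic, q = 1 − Σ_h N_h σ_h² / (N σ²): the share of variance in vitality explained by stratifying on a built-environment factor. A small numpy sketch with synthetic values:

```python
# The GeoDetector factor-detector q statistic:
# q = 1 - sum_h(N_h * var_h) / (N * var),
# i.e. the share of variance explained by a stratification.
import numpy as np

def q_statistic(y: np.ndarray, strata: np.ndarray) -> float:
    within = sum(
        (strata == h).sum() * y[strata == h].var()
        for h in np.unique(strata)
    )
    return 1.0 - within / (len(y) * y.var())

y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])  # synthetic vitality values
strata = np.array([0, 0, 0, 1, 1, 1])         # factor classes
print(f"q = {q_statistic(y, strata):.3f}")    # close to 1: strong factor
```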
23 pages, 2909 KB  
Article
A Symmetry-Aware Hierarchical Graph-Mamba Network for Spatio-Temporal Road Damage Detection
by Zichun Tian, Xiaokang Shao, Yuqi Bai, Qianyun Zhang, Zhuxuanzi Wang and Yingrui Ji
Symmetry 2025, 17(12), 2173; https://doi.org/10.3390/sym17122173 - 17 Dec 2025
Abstract
The prompt and precise detection of road damage is vital for effective infrastructure management, forming the foundation for intelligent transportation systems and cost-effective pavement maintenance. While current convolutional neural network (CNN)-based methodologies have made progress, they are fundamentally limited by treating damages as independent, isolated entities, thereby ignoring the intrinsic spatial symmetry and topological organization inherent in complex damage patterns like alligator cracking. This conceptual asymmetry in modeling leads to two major deficiencies: “context blindness,” which overlooks essential structural interrelations, and “temporal inconsistency” in video analysis, resulting in unstable, flickering predictions. To address this, we propose a Spatio-Temporal Graph Mamba You-Only-Look-Once (STG-Mamba-YOLO) network, a novel architecture that introduces a symmetry-informed, hierarchical reasoning process. Our approach explicitly models and integrates contextual dependencies across three levels to restore a holistic and consistent structural representation. First, at the pixel level, a Mamba state-space model within the YOLO backbone enhances the modeling of long-range spatial dependencies, capturing the elongated symmetry of linear cracks. Second, at the object level, an intra-frame damage Graph Network enables explicit reasoning over the topological symmetry among damage candidates, effectively reducing false positives by leveraging their relational structure. Third, at the sequence level, a Temporal Graph Mamba module tracks the evolution of this damage graph, enforcing temporal symmetry across frames to ensure stable, non-flickering results in video streams. Comprehensive evaluations on multiple public benchmarks demonstrate that our method outperforms existing state-of-the-art approaches. STG-Mamba-YOLO shows significant advantages in identifying intricate damage topologies while ensuring robust temporal stability, thereby validating the effectiveness of our symmetry-guided, multi-level contextual fusion paradigm for structural health monitoring.
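The object-level graph is built from per-frame detections. A hedged sketch of one common construction (nodes are detection boxes, edges connect boxes whose centers are closer than a radius); the paper's actual edge rule is not given in the abstract.

```python
# Sketch of an intra-frame detection graph: nodes are damage detections,
# edges connect detections whose box centers fall within a radius. The
# paper's actual edge rule is not stated in the abstract.
import numpy as np

def detection_graph(boxes: np.ndarray, radius: float) -> np.ndarray:
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns (N, N) adjacency."""
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    adj = (dists < radius) & ~np.eye(len(boxes), dtype=bool)
    return adj.astype(int)

boxes = np.array([[0, 0, 10, 10], [5, 5, 15, 15], [100, 100, 110, 110]])
print(detection_graph(boxes, radius=20.0))
# Detections 0 and 1 are linked (e.g., parts of one alligator-crack region).
```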
33 pages, 5657 KB  
Article
LiDAR-Based Urban Traffic Flow and Safety Assessment Using AI-Driven Surrogate Indicators
by Dohun Kim, Hongjin Kim and Wonjong Kim
Remote Sens. 2025, 17(24), 3989; https://doi.org/10.3390/rs17243989 - 10 Dec 2025
Abstract
Urban mobility systems increasingly depend on remote sensing and artificial intelligence to enhance traffic monitoring and safety management. This study presents a LiDAR-based framework for urban road condition analysis and risk evaluation using vehicle-mounted sensors as dynamic remote sensing platforms. The framework integrates deep learning based object detection with mathematically defined surrogate safety indicators to quantify collision risk and evaluate evasive maneuverability in real traffic environments. Two indicators, Hazardous Modified Time to Collision (HMTTC) and Searching for Safety Space (SSS), are introduced to assess lane-level safety and spatial availability of avoidance zones. LiDAR point cloud data are processed using a Voxel RCNN architecture and converted into parameters such as density, speed, and spacing. Field experiments conducted on highways and urban corridors in South Korea reveal strong correlations between HMTTC occurrences, congestion, and geometric road features. The results demonstrate that AI-driven analysis of LiDAR data enables continuous, infrastructure-independent urban traffic safety monitoring, thereby supporting data-driven, resilient transportation systems.
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)
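HMTTC is the authors' modification of the classical time-to-collision surrogate; the baseline TTC itself is simply gap over closing speed. A sketch of the standard definition, not the HMTTC variant:

```python
# The classical time-to-collision (TTC) surrogate that HMTTC modifies:
# TTC = gap / closing speed, defined only when the follower is closing in.
# This sketches the standard definition, not the authors' HMTTC variant.
def time_to_collision(gap_m: float, v_follower: float, v_leader: float) -> float:
    """Returns TTC in seconds, or inf if the gap is not closing."""
    closing = v_follower - v_leader
    return gap_m / closing if closing > 0 else float("inf")

print(time_to_collision(gap_m=30.0, v_follower=25.0, v_leader=20.0))  # 6.0 s
print(time_to_collision(gap_m=30.0, v_follower=18.0, v_leader=20.0))  # inf
```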
21 pages, 7741 KB  
Article
Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network
by Renhao Rao, Changcai Cui, Liang Chen, Zhizhao Ouyang and Shuang Chen
Photonics 2025, 12(12), 1206; https://doi.org/10.3390/photonics12121206 - 8 Dec 2025
Abstract
The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, which is captured using a four-angle polarization camera under real highway tunnel conditions, covering the entire process of entering tunnels, inside tunnels, and exiting tunnels, systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and richer texture information, it adaptively performs dark region enhancement and glare suppression according to scene brightness and dark region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of physical priors, coupled with detection-driven perception-oriented loss and a beyond mechanism to explicitly fuse physics and deep semantics to surpass physical limitations. Experimental results show that compared to original images, Polar-PENet (beyond-fusion network) achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, on image quality metrics, surpassing the performance of PPEAM (polarimetric physics-enhanced algorithm) which scores 18.89 and 0.5257. In terms of downstream object detection performance, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, representing a 99.7% improvement over original images and a 12.1% performance boost over PPEAM’s 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than the physics-enhanced algorithm PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the effective paradigm of using polarimetric physics as a prior and surpassing physics through learning methods.
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
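The Stokes-parameter constraints used by PPEAM follow directly from the four-angle camera: with intensities I0, I45, I90, and I135, the linear Stokes components and the derived DoLP and AoLP are computed as below. These are the standard textbook relations, not the paper's code.

```python
# Standard Stokes / DoLP / AoLP relations for a four-angle polarization
# camera (I0, I45, I90, I135). These are the textbook formulas the paper's
# PPEAM constraints build on, not its implementation.
import numpy as np

def polarization_maps(i0, i45, i90, i135):
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90                        # 0/90 degree linear component
    s2 = i45 - i135                      # 45/135 degree linear component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization, rad
    return s0, dolp, aolp

# Fully polarized horizontal light: I0 = 1, I90 = 0, I45 = I135 = 0.5
s0, dolp, aolp = polarization_maps(*map(np.float64, (1.0, 0.5, 0.0, 0.5)))
print(s0, dolp, aolp)  # 1.0, 1.0, 0.0
```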
19 pages, 4054 KB  
Article
DSGF-YOLO: A Lightweight Deep Neural Network for Traffic Accident Detection and Severity Classifications
by Weijun Li, Huawei Xie and Peiteng Lin
Vehicles 2025, 7(4), 153; https://doi.org/10.3390/vehicles7040153 - 5 Dec 2025
Abstract
Traffic accidents pose unpredictable and severe social and economic challenges. Rapid and accurate accident detection, along with reliable severity classification, is essential for timely emergency response and improved road safety. This study proposes DSGF-YOLO, an enhanced deep learning framework based on the YOLOv13 architecture, developed for automated road accident detection and severity classification. The proposed methodology integrates two novel components: the DS-C3K2-FasterNet-Block module, which enhances local feature extraction and computational efficiency, and the Grouped Channel-Wise Self-Attention (G-CSA) module, which strengthens global context modeling and small-object perception. Comprehensive experiments on a diverse traffic accident dataset validate the effectiveness of the proposed framework. The results show that DSGF-YOLO achieves higher precision, recall, and mean average precision than state-of-the-art models such as Faster R-CNN, DETR, and other YOLO variants, while maintaining real-time performance. These findings highlight its potential for intelligent transportation systems and real-world accident monitoring applications.
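Grouped channel-wise self-attention in general splits channels into groups and attends across channels within each group. The sketch below is a generic illustration of that idea under those assumptions, not the paper's G-CSA module.

```python
# Generic grouped channel-wise self-attention sketch: channels are split
# into groups and attention is computed across channels within each group.
# An illustration of the idea only, not the paper's G-CSA module.
import torch
import torch.nn as nn

class GroupedChannelAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g = self.groups
        # (b, g, c//g, h*w): each channel's flattened map acts as a token
        t = x.view(b, g, c // g, h * w)
        # channel-channel affinity within each group, scaled softmax
        attn = torch.softmax(t @ t.transpose(-1, -2) / (h * w) ** 0.5, dim=-1)
        out = attn @ t                      # re-weight channels by affinity
        return out.view(b, c, h, w) + x     # residual connection

x = torch.randn(2, 64, 32, 32)
print(GroupedChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```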