Sensors and Sensing Technologies for Traffic, Driving and Transportation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 10 June 2025 | Viewed by 4763

Special Issue Editors


Guest Editor: Dr. Duo Lu
Department of Computer Science and Physics, Rider University, Lawrenceville, NJ 08648, USA
Interests: computer vision; deep learning; robotics perception; tracking; intelligent transportation systems (ITSs); sensors and embedded systems

Guest Editor: Dr. Bharatesh Chakravarthi
Computer Science and Engineering, School of Computing and Augmented Intelligence, Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ 85281, USA
Interests: event-based vision (neuromorphic vision); artificial neural networks; sensors and multimodal sensing; intelligent transportation systems

Special Issue Information

Dear Colleagues,

You are invited to submit to this Special Issue of Sensors, entitled “Sensors and Sensing Technologies for Traffic, Driving and Transportation”.

The advancement of sensors and sensing technologies for traffic, driving, and transportation has transformed the landscape of intelligent transportation systems (ITSs) and intelligent vehicles. With rapid developments in sensor technologies such as LiDAR, radar, cameras, and V2X communication, modern vehicles and infrastructure are becoming increasingly connected and autonomous. These innovations have enabled safer, more efficient, and smarter transportation systems, where real-time traffic monitoring, autonomous driving, and advanced driver-assistance systems (ADASs) are now integral components. As a result, sensors are playing a crucial role in enhancing road safety, reducing congestion, and improving the overall transportation experience.

This Special Issue aims to highlight innovative research on novel sensors, computer vision, artificial intelligence, sensor data processing, sensor data visualization, and the application of sensing technologies in intelligent transportation systems (ITSs) and intelligent vehicles (IVs). We welcome contributions from all fields related to sensors and sensing technologies applied in traffic, driving, and transportation, including, but not limited to, the following:

  • Novel sensor modalities (LiDAR, radar, cameras, etc.);
  • Data fusion for multiple sensors or multiple sensing modalities;
  • Collaborative sensing and V2X communication;
  • Sensing technologies for autonomous driving systems;
  • Sensing technologies for advanced driver-assistance systems (ADASs);
  • Environmental sensing for intelligent transportation;
  • Smart infrastructure with sensing capabilities;
  • Safety and security of sensing technologies;
  • Artificial intelligence in sensing technologies;
  • Datasets and data collection methods;
  • Applications of sensing technologies in ITSs and IVs. 

Dr. Duo Lu
Dr. Bharatesh Chakravarthi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • sensing technologies
  • sensor fusion
  • intelligent transportation systems (ITSs)
  • intelligent vehicles (IVs)
  • artificial intelligence (AI)
  • datasets

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (5 papers)


Research


16 pages, 2523 KiB  
Article
On-Road Evaluation of an Unobtrusive In-Vehicle Pressure-Based Driver Respiration Monitoring System
by Sparsh Jain and Miguel A. Perez
Sensors 2025, 25(9), 2739; https://doi.org/10.3390/s25092739 - 26 Apr 2025
Viewed by 168
Abstract
In-vehicle physiological sensing is emerging as a vital approach to enhancing driver monitoring and overall automotive safety. This pilot study explores the feasibility of a pressure-based system, repurposing commonplace occupant classification electronics to capture respiration signals during real-world driving. Data were collected from a driver-seat-embedded, fluid-filled pressure bladder sensor during normal on-road driving. The sensor output was processed using simple filtering techniques to isolate low-amplitude respiratory signals from substantial background noise and motion artifacts. The experimental results indicate that the system reliably detects the respiration rate despite the dynamic environment, achieving a mean absolute error of 1.5 breaths per minute with a standard deviation of 1.87 breaths per minute (9.2% of the mean true respiration rate), thereby bridging the gap between controlled laboratory tests and real-world automotive deployment. These findings support the potential integration of unobtrusive physiological monitoring into driver state monitoring systems, which can aid in the early detection of fatigue and impairment, enhance post-crash triage through timely vital sign transmission, and extend to monitoring other vehicle occupants. This study contributes to the development of robust and cost-effective in-cabin sensor systems that have the potential to improve road safety and health monitoring in automotive settings.
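The general pipeline the abstract describes (band-pass filtering a noisy seat-pressure trace, then counting breaths) can be sketched as follows. This is not the authors' implementation; it uses a synthetic signal and assumed filter settings (a 0.1–0.5 Hz Butterworth band-pass, spanning typical resting respiration rates) purely to illustrate the idea.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Synthetic 60 s seat-pressure trace sampled at 50 Hz:
# a 0.25 Hz respiration component (15 breaths/min) buried in noise.
fs, duration = 50, 60
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(0)
pressure = np.sin(2 * np.pi * 0.25 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass filter around plausible respiration frequencies (0.1-0.5 Hz),
# applied forward and backward (filtfilt) to avoid phase distortion.
b, a = butter(2, [0.1, 0.5], btype="band", fs=fs)
resp = filtfilt(b, a, pressure)

# Count breaths as positive peaks spaced at least 1 s apart.
peaks, _ = find_peaks(resp, height=0, distance=fs)
bpm_estimate = len(peaks) * 60 / duration
print(f"estimated respiration rate: {bpm_estimate:.1f} breaths/min")
```

On this synthetic trace the estimate lands near the true 15 breaths/min; a real seat-bladder signal would of course also carry motion artifacts that this toy example does not model.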

22 pages, 7037 KiB  
Article
Research on Comprehensive Vehicle Information Detection Technology Based on Single-Point Laser Ranging
by Haiyu Chen, Xin Wen, Yunbo Liu and Hui Zhang
Sensors 2025, 25(5), 1303; https://doi.org/10.3390/s25051303 - 20 Feb 2025
Viewed by 433
Abstract
In response to the limitations of existing vehicle detection technologies when applied to distributed sensor networks for road traffic holographic perception, this paper proposes a vehicle information detection technology based on single-point laser ranging. The system uses two single-point laser radars with fixed angles, combined with an adaptive threshold state machine and waveform segmentation fusion, to accurately detect vehicle speed, lane position, and other parameters. Compared with traditional methods, this technology offers advantages such as richer detection dimensions, low cost, and ease of installation and maintenance, making it suitable for large-scale traffic monitoring on secondary roads, highways, and suburban roads. Experimental results show that the system achieves high accuracy and reliability in low-to-medium-traffic flow scenarios, demonstrating its potential for intelligent road traffic applications.
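Setting aside the paper's adaptive-threshold state machine and waveform-fusion details, the core timing principle behind two-beam speed measurement is simple: a vehicle interrupts the two laser beams at different times, and speed follows from the known beam spacing divided by the time difference; vehicle length then follows from how long one beam stays blocked. The functions below are a hypothetical illustration with assumed geometry, not the authors' code.

```python
def speed_from_two_beams(spacing_m: float, t_a: float, t_b: float) -> float:
    """Speed (m/s) of a vehicle that triggered beam A at t_a and
    beam B at t_b, with the beams spacing_m apart along the lane."""
    dt = t_b - t_a
    if dt <= 0:
        raise ValueError("beam B must be triggered after beam A")
    return spacing_m / dt

def vehicle_length(speed_mps: float, dwell_s: float) -> float:
    """Approximate vehicle length from the time one beam stays blocked."""
    return speed_mps * dwell_s

# Beams 2.0 m apart, triggered 0.12 s apart; one beam blocked for 0.27 s.
v = speed_from_two_beams(2.0, 10.00, 10.12)
length = vehicle_length(v, 0.27)
print(f"{v * 3.6:.1f} km/h, ~{length:.1f} m long")
```

With these assumed numbers the vehicle is travelling at about 60 km/h and is roughly 4.5 m long; the real system additionally infers lane position from the fixed beam angles.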

21 pages, 29624 KiB  
Article
Object Detection and Classification Framework for Analysis of Video Data Acquired from Indian Roads
by Aayushi Padia, Aryan T. N., Sharan Thummagunti, Vivaan Sharma, Manjunath K. Vanahalli, Prabhu Prasad B. M., Girish G. N., Yong-Guk Kim and Pavan Kumar B. N.
Sensors 2024, 24(19), 6319; https://doi.org/10.3390/s24196319 - 29 Sep 2024
Viewed by 2029
Abstract
Object detection and classification in autonomous vehicles are crucial for ensuring safe and efficient navigation through complex environments. This paper addresses the need for robust detection and classification algorithms tailored specifically for Indian roads, which present unique challenges such as diverse traffic patterns, erratic driving behaviors, and varied weather conditions. Despite significant progress in object detection and classification for autonomous vehicles, existing methods often struggle to generalize effectively to the conditions encountered on Indian roads. This paper proposes a novel approach utilizing the YOLOv8 deep learning model, designed to be lightweight, scalable, and efficient for real-time implementation using onboard cameras. Experimental evaluations were conducted using real-life scenarios encompassing diverse weather and traffic conditions. Videos captured in various environments were utilized to assess the model’s performance, with particular emphasis on its accuracy and precision across 35 distinct object classes. The experiments demonstrate a precision of 0.65 for the detection of multiple classes, indicating the model’s efficacy in handling a wide range of objects. Moreover, real-time testing revealed an average accuracy exceeding 70% across all scenarios, with a peak accuracy of 95% achieved in optimal conditions. The parameters considered in the evaluation process encompassed not only traditional metrics but also factors pertinent to Indian road conditions, such as low lighting, occlusions, and unpredictable traffic patterns. The proposed method exhibits superiority over existing approaches by offering a balanced trade-off between model complexity and performance. By leveraging the YOLOv8 architecture, this solution achieved high accuracy while minimizing computational resources, making it well suited for deployment in autonomous vehicles operating on Indian roads.
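One-stage detectors such as YOLOv8 emit many overlapping candidate boxes per object, which are typically reduced in post-processing by non-maximum suppression (NMS): keep the highest-scoring box, discard boxes that overlap it too much, and repeat. The sketch below shows plain NMS with intersection-over-union (IoU); it is a generic illustration of that standard step, not code from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Indices of boxes kept after greedy non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)           # highest-scoring remaining box
        keep.append(i)
        # Drop boxes that overlap the kept box too strongly.
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

# Two near-duplicate boxes on one object plus one distant box:
kept = nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
           [0.9, 0.8, 0.7])
```

Here the second box is suppressed (IoU ≈ 0.68 with the first), while the distant box survives, leaving one detection per object.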

Review


38 pages, 3079 KiB  
Review
Building the Future of Transportation: A Comprehensive Survey on AV Perception, Localization, and Mapping
by Ashok Kumar Patil, Bhargav Punugupati, Himanshi Gupta, Niranjan S. Mayur, Srivatsa Ramesh and Prasad B. Honnavalli
Sensors 2025, 25(7), 2004; https://doi.org/10.3390/s25072004 - 23 Mar 2025
Viewed by 557
Abstract
Autonomous vehicles (AVs) depend on perception, localization, and mapping to interpret their surroundings and navigate safely. This paper reviews existing methodologies and best practices in these domains, focusing on object detection, object tracking, localization techniques, and environmental mapping strategies. In the perception module, we analyze state-of-the-art object detection frameworks, such as You Only Look Once version 8 (YOLOv8), and object tracking algorithms like ByteTrack and BoT-SORT (Boosted SORT). We assess their real-time performance, robustness to occlusions, and suitability for complex urban environments. We examine different approaches for localization, including Light Detection and Ranging (LiDAR)-based localization, camera-based localization, and sensor fusion techniques. These methods enhance positional accuracy, particularly in scenarios where Global Positioning System (GPS) signals are unreliable or unavailable. The mapping section explores Simultaneous Localization and Mapping (SLAM) techniques and high-definition (HD) maps, discussing their role in creating detailed, real-time environmental representations that enable autonomous navigation. Additionally, we present insights from our testing, evaluating the effectiveness of different perception, localization, and mapping methods in real-world conditions. By summarizing key advancements, challenges, and practical considerations, this paper provides a reference for researchers and developers working on autonomous vehicle perception, localization, and mapping.

Other


53 pages, 2552 KiB  
Systematic Review
Understanding Cyclists’ Visual Behavior Using Eye-Tracking Technology: A Systematic Review
by Fatima Kchour, Salvatore Cafiso and Giuseppina Pappalardo
Sensors 2025, 25(1), 22; https://doi.org/10.3390/s25010022 - 24 Dec 2024
Viewed by 1097
Abstract
Eye-tracking technologies are emerging in research aiming to understand the visual behavior of cyclists to improve their safety. These technologies gather real-time information to reveal what the cyclists look at and how they respond at a specific location and time. This systematic review investigates the use of eye-tracking systems to improve cyclist safety. An extensive search of the SCOPUS and WoS databases, following the PRISMA 2020 guidelines, found 610 studies published between 2010 and 2024. After filtering these studies according to predefined inclusion and exclusion criteria, 25 were selected for final review. The included studies were conducted in real traffic or virtual environments aiming to assess visual attention, workload, or hazard perception. Studies focusing on other types of road users or participants not involved in active cycling were excluded. Results reveal the important impact of road elements’ design, traffic density, and weather conditions on cyclists’ gaze patterns. Significant visual workload is imposed mainly by intersections. Along with the valuable insights into cyclist safety, potential biases related to small sample sizes and technological limitations were identified. Recommendations for future research are discussed to address these challenges through more diverse samples, advanced technologies, and a greater focus on peripheral vision.
