
Advances in Intelligent Transportation Systems Based on Sensor Fusion

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: 30 May 2024 | Viewed by 27417

Special Issue Editors


Guest Editor
Roy M. Huffington Department of Earth Sciences, Southern Methodist University, Dallas, TX 75275, USA
Interests: computer vision; machine learning; sensor fusion; intelligent transportation systems; autonomous vehicles

Guest Editor
Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Interests: multi-modal sensor fusion; natural language processing

Guest Editor
School of Qilu Transportation, Shandong University, Jinan 250061, China
Interests: intersection of computer vision and artificial intelligence in transportation infrastructure; image processing; non-destructive testing

Guest Editor
College of Computer Science, Chongqing University, Chongqing 400044, China
Interests: internet of vehicles; big data; pervasive computing

Special Issue Information

Dear Colleagues,

Research on sensor fusion has recently received significant attention as a component of intelligent transportation systems (ITS). Modern ITS aim to improve the effectiveness, efficiency, reliability, and safety of road, rail, and other modes of transport, as well as traffic management and mobility, and to increase transportation capacity while reducing commute times. Sensor fusion combines techniques and knowledge from multiple sensing sources (image sensors, vision/camera-based sensors, acoustic sensors, physical sensors, and other sensing devices) as mutual supplements to facilitate decision-making in ITS. The heterogeneous sensor data from these multiple sources provide comprehensive insights for constructing next-generation ITS; however, they also introduce additional challenges.
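The fusion principle described above can be illustrated with the textbook inverse-variance weighting of two noisy measurements of the same quantity. This is a generic sketch, not tied to any paper in this issue; the camera/radar variances below are made-up numbers:

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of scalar measurements.

    Each sensor is weighted by its reliability (1/variance); the
    fused variance is never larger than the smallest input variance,
    which is the basic payoff of combining complementary sensors.
    """
    measurements = np.asarray(measurements, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(weights * measurements) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Camera says 10.0 m (variance 4.0), radar says 10.6 m (variance 1.0):
value, var = fuse([10.0, 10.6], [4.0, 1.0])
```

The fused estimate sits closer to the radar reading because the radar is the more reliable sensor here, and the fused variance (0.8) is below either input variance.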

This Special Issue is therefore devoted to recent advances in all aspects of sensor fusion techniques for ITS, in the form of state-of-the-art reviews, theoretical contributions, and practical industrial applications. Topics should relate to the fusion of multiple sensors for ITS and other modes of transport, and include, but are not limited to:

  • Autonomous vehicles;
  • Driving assistance;
  • Surveillance infrastructure;
  • Traffic flow characteristics;
  • Vehicular communication (vehicle-to-everything);
  • Electric vehicles;
  • Vehicle robotics and control systems;
  • Transportation;
  • Multi-modal signal processing and analysis.

Dr. Xinxiang Zhang
Dr. Ye Wang
Dr. Feng Guo
Prof. Dr. Kai Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous vehicles
  • driving assistance
  • surveillance infrastructure
  • traffic flow characteristics
  • vehicular communication (vehicle-to-everything)
  • electric vehicles
  • vehicle robotics and control systems
  • transportation
  • multi-modal signal processing and analysis

Published Papers (10 papers)


Research

Jump to: Review

24 pages, 2966 KiB  
Article
RTAIAED: A Real-Time Ambulance in an Emergency Detector with a Pyramidal Part-Based Model Composed of MFCCs and YOLOv8
by Alessandro Mecocci and Claudio Grassi
Sensors 2024, 24(7), 2321; https://doi.org/10.3390/s24072321 - 05 Apr 2024
Viewed by 423
Abstract
In emergency situations, every second counts for an ambulance navigating through traffic. Efficient use of traffic light systems can play a crucial role in minimizing response time. This paper introduces a novel automated Real-Time Ambulance in an Emergency Detector (RTAIAED). The proposed system uses special Lookout Stations (LSs), suitably positioned at a certain distance from each involved traffic light (TL), to obtain timely and safe transitions to green lights as the Ambulance in an Emergency (AIAE) approaches. The foundation of the proposed system is the simultaneous processing of video and audio data. The video analysis is inspired by Part-Based Model theory and integrates tailored video detectors that leverage a custom YOLOv8 model for enhanced precision. Concurrently, the audio analysis component employs a neural network designed to analyze Mel Frequency Cepstral Coefficients (MFCCs), providing an accurate classification of auditory information. This dual-faceted approach facilitates a cohesive and synergistic analysis of sensory inputs. It incorporates a logic-based component to integrate and interpret the detections from each sensory channel, thereby ensuring the precise identification of an AIAE as it approaches a traffic light. Extensive experiments confirm the robustness of the approach and its reliable application in real-world scenarios thanks to its real-time predictions (reaching 11.8 fps on a Jetson Nano with a response time of at most 0.25 s), showcasing the ability to detect AIAEs even in challenging conditions such as noisy environments, nighttime, or adverse weather, provided a suitable-quality camera is appropriately positioned.
The RTAIAED is particularly effective on one-way roads, addressing the challenge of regulating the sequence of traffic light signals so as to ensure a green signal for the AIAE when it arrives in front of the TL, despite the presence of "double red" periods in which the one-way traffic is cleared of vehicles coming from one direction before allowing those coming from the other side. It is also suitable for managing temporary situations, such as roadworks.
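The logic-based fusion of the two sensory channels described in this abstract can be caricatured in a few lines. The sketch below is a hypothetical reconstruction, not the authors' code: an emergency vehicle is confirmed only when both the siren classifier and the visual detector have fired repeatedly within the same sliding window.

```python
from collections import deque

class DualChannelGate:
    """Toy logic-based fusion: confirm an event only when audio and
    video detections co-occur several times within the last
    `window` frames, suppressing one-frame spurious detections."""

    def __init__(self, window=10, min_hits=3):
        self.min_hits = min_hits
        self.audio_hits = deque(maxlen=window)
        self.video_hits = deque(maxlen=window)

    def update(self, audio_detected, video_detected):
        self.audio_hits.append(bool(audio_detected))
        self.video_hits.append(bool(video_detected))
        # Require repeated agreement in BOTH channels before alarming.
        return (sum(self.audio_hits) >= self.min_hits
                and sum(self.video_hits) >= self.min_hits)

gate = DualChannelGate(window=10, min_hits=3)
# (audio, video) detections over five consecutive frames:
alarms = [gate.update(a, v) for a, v in
          [(1, 0), (1, 1), (0, 1), (1, 1), (1, 1)]]
```

With these inputs the gate stays silent for the first three frames and alarms from the fourth frame on, once each channel has accumulated three hits.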
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems Based on Sensor Fusion)

15 pages, 4510 KiB  
Article
Knowledge Distillation for Traversable Region Detection of LiDAR Scan in Off-Road Environments
by Nahyeong Kim and Jhonghyun An
Sensors 2024, 24(1), 79; https://doi.org/10.3390/s24010079 - 22 Dec 2023
Viewed by 731
Abstract
In this study, we propose a knowledge distillation (KD) method for segmenting off-road environment range images. Unlike urban environments, off-road terrains are irregular and pose a higher risk to hardware. Therefore, off-road self-driving systems are required to be computationally efficient. We used LiDAR point cloud range images to address this challenge. The three-dimensional (3D) point cloud data, which are rich in detail, require substantial computational resources. To mitigate this problem, we employ a projection method to convert the point cloud into a two-dimensional (2D) image format using depth information. Our soft label-based knowledge distillation (SLKD) effectively transfers knowledge from a large teacher network to a lightweight student network. We evaluated SLKD on the RELLIS-3D off-road environment dataset, measuring performance in terms of mean intersection over union (mIoU) and GPU floating-point operations per second (GFLOPS). The experimental results demonstrate that SLKD achieves a favorable trade-off between mIoU and GFLOPS when comparing teacher and student networks. This approach shows promise for enabling efficient off-road autonomous systems with reduced computational costs.
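The soft-label distillation summarized above rests on a standard mechanism: the student is trained against the teacher's temperature-softened class probabilities rather than hard labels. Below is a minimal numpy sketch of a generic KD loss of that kind; it is not the authors' exact objective, and the temperature value is illustrative:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature exposes the teacher's soft labels: the
    relative probabilities it assigns to the non-argmax classes.
    The T^2 factor keeps gradient magnitudes comparable across
    temperatures (as in standard KD formulations).
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1)

teacher = np.array([4.0, 1.0, 0.5])
perfect = kd_loss(teacher, teacher)                 # identical logits
off = kd_loss(np.array([1.0, 4.0, 0.5]), teacher)   # disagreeing student
```

A student that reproduces the teacher's logits incurs zero loss; any disagreement yields a positive penalty.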

17 pages, 3035 KiB  
Article
ADSTGCN: A Dynamic Adaptive Deeper Spatio-Temporal Graph Convolutional Network for Multi-Step Traffic Forecasting
by Zhengyan Cui, Junjun Zhang, Giseop Noh and Hyun Jun Park
Sensors 2023, 23(15), 6950; https://doi.org/10.3390/s23156950 - 04 Aug 2023
Cited by 1 | Viewed by 934
Abstract
Multi-step traffic forecasting has always been extremely challenging due to constantly changing traffic conditions. Advanced Graph Convolutional Networks (GCNs) are widely used to extract spatial information from traffic networks. Existing GCNs for traffic forecasting are usually shallow networks that only aggregate two- or three-order node neighbor information. When deeper neighborhood information is aggregated, an over-smoothing phenomenon occurs, leading to degraded forecast performance. In addition, most existing traffic forecasting graph networks are based on fixed nodes and therefore lack flexibility. To address these problems, we propose the Dynamic Adaptive Deeper Spatio-Temporal Graph Convolutional Network (ADSTGCN), a new traffic forecasting model. The model addresses the over-smoothing caused by network deepening by using dynamic hidden layer connections and adaptively adjusting the hidden layer weights to reduce model degradation. Furthermore, the model can adaptively learn the spatial dependencies in the traffic graph by building a parameter-sharing adaptive matrix, and it can also adaptively adjust the network structure to discover unknown dynamic changes in the traffic network. We evaluated ADSTGCN on real-world traffic data from highway and urban road networks, and it shows good performance.
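The "parameter-sharing adaptive matrix" mentioned above follows a pattern common in adaptive spatio-temporal GCNs: learn node embeddings whose similarity defines the graph, instead of fixing the adjacency by hand. The numpy sketch below shows that general pattern; the embedding shapes and the ReLU/softmax composition are assumptions modelled on related work, not this paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, emb_dim = 5, 8

# Learnable node embeddings (trained by backprop in a real model;
# random here just to make the sketch runnable)
E1 = rng.normal(size=(num_nodes, emb_dim))
E2 = rng.normal(size=(num_nodes, emb_dim))

def adaptive_adjacency(E1, E2):
    """Dense, learnable adjacency: similarity of node embeddings,
    rectified, then row-normalized with a softmax so each row is a
    distribution over neighbors."""
    scores = np.maximum(E1 @ E2.T, 0.0)          # ReLU
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # rows sum to 1

A = adaptive_adjacency(E1, E2)

# One graph-convolution step: aggregate node features through A
X = rng.normal(size=(num_nodes, 3))
H = A @ X
```

Because the adjacency is a function of trainable embeddings, gradient descent can discover spatial dependencies that are not present in the physical road graph.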

21 pages, 5602 KiB  
Article
Accuracy Improvement of Braking Force via Deceleration Feedback Functions Applied to Braking Systems
by Yuzhu Wang, Xiyuan Wen, Hongfang Meng, Xiang Zhang, Ruizhe Li and Roger Serra
Sensors 2023, 23(13), 5975; https://doi.org/10.3390/s23135975 - 27 Jun 2023
Cited by 1 | Viewed by 1126
Abstract
Currently, braking control systems used in regional railways, such as metros and tramways, are open-loop systems. Given that braking performance can be influenced by issues such as wheel sliding or the properties of the friction components in brake systems, our study puts forward a novel closed-loop mechanism to autonomously stabilize braking performance. It is able to keep train deceleration close to the target values required by the braking control unit (BCU), especially during the electric-pneumatic braking transition process. This method fully considers the friction efficiency characteristics of brake pads and encompasses running tests using rolling stock. The test results show that the technique is able to keep the actual deceleration closer to the target deceleration than before and to avoid wheel sliding protection (WSP) action, especially during low-speed periods.
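The closed-loop idea in this abstract, continuously correcting the brake command so that the achieved deceleration tracks the BCU target despite imperfect pad friction, can be sketched as a discrete PI loop. This is an illustrative controller with made-up gains and a deliberately crude plant model, not the authors' mechanism:

```python
def simulate_braking(target_decel=1.2, steps=500, dt=0.05,
                     kp=0.8, ki=0.5, efficiency=0.7):
    """Toy closed loop: the brake command is trimmed by a PI term so
    the achieved deceleration (command * pad efficiency) converges to
    the target even though pad friction is below nominal (70% here).

    An open-loop system would settle at 1.2 * 0.7 = 0.84 m/s^2 and
    stay there; the integral term removes that steady-state error.
    """
    command, integral = target_decel, 0.0
    actual = 0.0
    for _ in range(steps):
        actual = command * efficiency        # toy plant: friction losses
        error = target_decel - actual
        integral += error * dt
        command = target_decel + kp * error + ki * integral
    return actual

final = simulate_braking()   # converges close to the 1.2 m/s^2 target
```

The same structure extends naturally to gain-scheduled or speed-dependent friction models; the point is only that feedback on measured deceleration compensates for unknown pad efficiency.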

15 pages, 4845 KiB  
Article
Sensor Fusion-Based Vehicle Detection and Tracking Using a Single Camera and Radar at a Traffic Intersection
by Shenglin Li and Hwan-Sik Yoon
Sensors 2023, 23(10), 4888; https://doi.org/10.3390/s23104888 - 19 May 2023
Cited by 4 | Viewed by 4411
Abstract
Recent advancements in sensor technologies, in conjunction with signal processing and machine learning, have enabled real-time traffic control systems to adapt to varying traffic conditions. This paper introduces a new sensor fusion approach that combines data from a single camera and radar to achieve cost-effective and efficient vehicle detection and tracking. Initially, vehicles are independently detected and classified using the camera and radar. Then, the constant-velocity model within a Kalman filter is employed to predict vehicle locations, while the Hungarian algorithm is used to associate these predictions with sensor measurements. Finally, vehicle tracking is accomplished by merging kinematic information from predictions and measurements through the Kalman filter. A case study conducted at an intersection demonstrates the effectiveness of the proposed sensor fusion method for traffic detection and tracking, including performance comparisons with individual sensors.
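The pipeline this abstract describes, constant-velocity Kalman prediction followed by Hungarian assignment of detections to tracks, is a classical construction. A condensed sketch of the predict-and-associate steps follows (state layout, time step, and track values are illustrative, not the paper's calibration; the assignment uses SciPy's `linear_sum_assignment`):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

dt = 0.1
# Constant-velocity model, state = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

def predict(states):
    """Kalman time-update (means only; covariances omitted for brevity)."""
    return states @ F.T

def associate(predicted, detections):
    """Hungarian assignment on Euclidean distance between predicted
    track positions and detected positions."""
    cost = np.linalg.norm(predicted[:, None, :2] - detections[None, :, :],
                          axis=2)
    track_idx, det_idx = linear_sum_assignment(cost)
    return [(int(t), int(d)) for t, d in zip(track_idx, det_idx)]

tracks = np.array([[0.0, 0.0, 1.0, 0.0],    # moving right
                   [5.0, 5.0, 0.0, -1.0]])  # moving down
dets = np.array([[5.0, 4.9], [0.1, 0.0]])   # detections arrive unordered
pairs = associate(predict(tracks), dets)
```

Each matched pair would then feed the Kalman measurement update, fusing the prediction's kinematics with the detection's position, which is the merging step the abstract refers to.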

23 pages, 12903 KiB  
Article
Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking
by Muhammad Hasanujjaman, Mostafa Zaman Chowdhury and Yeong Min Jang
Sensors 2023, 23(6), 3335; https://doi.org/10.3390/s23063335 - 22 Mar 2023
Cited by 9 | Viewed by 6779
Abstract
Complete autonomous systems, such as self-driving cars, need the most efficient combination of four-dimensional (4D) detection, exact localization, and artificial intelligence (AI) networking to ensure high reliability and human safety in a fully automated smart transportation system. At present, multiple integrated sensors such as light detection and ranging (LiDAR), radio detection and ranging (RADAR), and car cameras are frequently used for object detection and localization in conventional autonomous transportation systems, and the global positioning system (GPS) is used for positioning autonomous vehicles (AVs). However, the detection, localization, and positioning efficiency of these individual systems is insufficient for AV systems, and they lack a reliable networking system for self-driving cars carrying passengers and goods on the road. Although sensor fusion of car sensors achieves good detection and localization efficiency, the proposed convolutional neural networking approach assists in achieving higher accuracy in 4D detection, precise localization, and real-time positioning. Moreover, this work establishes a strong AI network for AV remote monitoring and data transmission. The proposed networking system maintains its efficiency on open highways as well as in tunnels, where GPS does not work properly. For the first time, this conceptual paper exploits modified traffic surveillance cameras as an external image source for AVs and as anchor sensing nodes to complete an AI-networked transportation system. The work proposes a model that solves the fundamental detection, localization, positioning, and networking challenges of AVs with advanced image processing, sensor fusion, feature matching, and AI networking technology. It also provides an "experienced AI driver" concept for a smart transportation system based on deep learning.

20 pages, 5744 KiB  
Article
Future Technologies for Train Communication: The Role of LEO HTS Satellites in the Adaptable Communication System
by Alessandro Vizzarri, Franco Mazzenga and Romeo Giuliano
Sensors 2023, 23(1), 68; https://doi.org/10.3390/s23010068 - 21 Dec 2022
Cited by 4 | Viewed by 1755
Abstract
The railway sector has been characterized by important innovations in digital technologies for train-to-ground communications. The current GSM-R system is considered obsolescent and is expected to be phased out by 2030. Future communication systems in the rail sector, such as the Adaptable Communication System (ACS) and the Future Railway Mobile Communication System (FRMCS), can manage different bearers, including 4G/5G terrestrial technologies and satellites. In this environment, the new High Throughput Satellite (HTS) Low Earth Orbit (LEO) constellations promise very attractive performance in terms of data rate and coverage. This paper analyzes the LEO constellations of Starlink and OneWeb using public data, with the Rome-Florence railway line considered for simulations. The results show that LEO satellites can provide good performance in terms of visibility, service connectivity, and traffic capacity (up to 1 Gbps). This enables LEO constellations to handle large amounts of data, which will be especially important in the railway scenarios of the coming years, when video data applications will be more common.
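To give a feel for capacity figures of the order quoted above, a back-of-the-envelope Shannon-capacity calculation for a satellite channel is sketched below. The bandwidth and SNR values are illustrative assumptions only, not Starlink or OneWeb specifications:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + SNR), the ceiling on error-free
    throughput for a channel of the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 250 MHz channel at 10 dB SNR:
capacity = shannon_capacity_bps(250e6, 10.0)
```

This lands in the high hundreds of Mbps, the same order of magnitude as the "up to 1 Gbps" figure; real link budgets would additionally account for elevation-dependent path loss, rain fade, and beam sharing.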

22 pages, 5795 KiB  
Article
Parallax Inference for Robust Temporal Monocular Depth Estimation in Unstructured Environments
by Michaël Fonder, Damien Ernst and Marc Van Droogenbroeck
Sensors 2022, 22(23), 9374; https://doi.org/10.3390/s22239374 - 01 Dec 2022
Cited by 2 | Viewed by 1970
Abstract
Estimating the distance to objects is crucial for autonomous vehicles, but cost, weight or power constraints sometimes prevent the use of dedicated depth sensors. In this case, the distance has to be estimated from on-board mounted RGB cameras, which is a complex task especially for environments such as natural outdoor landscapes. In this paper, we present a new depth estimation method suitable for use in such landscapes. First, we establish a bijective relationship between depth and the visual parallax of two consecutive frames and show how to exploit it to perform motion-invariant pixel-wise depth estimation. Then, we detail our architecture which is based on a pyramidal convolutional neural network where each level refines an input parallax map estimate by using two customized cost volumes. We use these cost volumes to leverage the visual spatio-temporal constraints imposed by motion and make the network robust for varied scenes. We benchmarked our approach both in test and generalization modes on public datasets featuring synthetic camera trajectories recorded in a wide variety of outdoor scenes. Results show that our network outperforms the state of the art on these datasets, while also performing well on a standard depth estimation benchmark.
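The depth-parallax bijection at the heart of this paper generalizes the classical triangulation relation: for a camera translating sideways by a baseline b between two frames, depth is inversely proportional to the observed parallax d, scaled by the focal length f. The toy illustration below shows only that classical special case (the paper's actual formulation handles general camera motion), and all numbers are made up:

```python
import numpy as np

def depth_from_parallax(disparity_px, focal_px, baseline_m):
    """Classical triangulation: z = f * b / d.
    Larger parallax means a closer object, and the mapping is
    bijective for d > 0, which is what allows a network to regress
    either quantity interchangeably."""
    return focal_px * baseline_m / disparity_px

f, b = 700.0, 0.5                          # pixels, meters (illustrative)
disparities = np.array([70.0, 7.0, 0.7])   # shrinking parallax
depths = depth_from_parallax(disparities, f, b)   # 5 m, 50 m, 500 m
```

The tenfold drop in parallax at each step maps to a tenfold increase in depth, which is why distant terrain with near-zero parallax is the hard regime for monocular methods.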

37 pages, 50339 KiB  
Article
Development of an Autonomous Driving Vehicle for Garbage Collection in Residential Areas
by Jeong-Won Pyo, Sang-Hyeon Bae, Sung-Hyeon Joo, Mun-Kyu Lee, Arpan Ghosh and Tae-Yong Kuc
Sensors 2022, 22(23), 9094; https://doi.org/10.3390/s22239094 - 23 Nov 2022
Cited by 3 | Viewed by 2683
Abstract
Autonomous driving and its real-world implementation have been among the most actively studied topics in the past few years. This growth has been accelerated by the development of advanced deep learning-based data processing technologies, and large automakers now manufacture vehicles capable of partially or fully autonomous driving on real roads. However, self-driving cars are limited to areas with multi-lane roads, such as highways, and self-driving cars for urban areas or residential complexes are still under development. Among autonomous vehicles for various purposes, this paper focuses on the development of an autonomous vehicle for garbage collection in residential areas. Since the target environment of the vehicle is a residential complex, it differs from the target environment of a general autonomous vehicle. Therefore, in this paper, we defined an operational design domain (ODD), including vehicle length, speed, and driving conditions, for the developed vehicle to drive in a residential area. To recognize the vehicle's surroundings and respond to various situations, it is equipped with various sensors and additional devices that can signal the vehicle's state to the outside or operate it in an emergency. An autonomous driving system capable of object recognition, lane recognition, route planning, vehicle manipulation, and abnormal situation detection was then configured to suit this vehicle hardware and driving environment. Finally, by performing autonomous driving in an actual experimental section with the developed vehicle, we confirmed that the autonomous driving functions work appropriately in a residential area, and a work-efficiency experiment confirmed that the vehicle can support garbage collection work.

Review

Jump to: Research

30 pages, 2729 KiB  
Review
A Survey and Tutorial on Network Optimization for Intelligent Transport System Using the Internet of Vehicles
by Saroj Kumar Panigrahy and Harika Emany
Sensors 2023, 23(1), 555; https://doi.org/10.3390/s23010555 - 03 Jan 2023
Cited by 13 | Viewed by 5273
Abstract
The Internet of Things (IoT) has risen from ubiquitous computing to the Internet itself. The Internet of Vehicles (IoV) is the next emerging trend in IoT, and intelligent transportation systems (ITS) can be built using IoV. However, overheads are imposed on the IoV network due to the massive quantity of information transferred among the devices connected in IoV. One such overhead is the network connection between the units of an IoV. To build an efficient ITS using IoV, optimization of network connectivity is required. This study presents a survey on network optimization in IoT and IoV. It also covers the background of IoT and IoV, including applications such as ITS, comparisons of different advancements, network optimization approaches, and a categorization of algorithms. Some simulation tools are also explained, which will help the research community use them for pursuing research in IoV.
