
Special Issue "Intelligent Sensing Systems for Vehicle"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 April 2021.

Special Issue Editors

Dr. Humberto Martinez Barbera
Guest Editor
Information and Communication Engineering, School of Computer Science, University of Murcia, 30100 Murcia, Spain
Interests: computational methods; distributed control architectures and software
Dr. David Herrero Pérez
Guest Editor
Computational Mechanics and Scientific Computing Group, Technical University of Cartagena, 30202 Cartagena, Spain
Interests: applied autonomous sensor systems; real-time systems; high-performance computing; computational methods; uncertainty quantification

Special Issue Information

Dear Colleagues,

Nowadays, intelligent sensor systems (ISS) play a significant role in a wide spectrum of applications requiring integrated computation and communication capabilities. Such systems can exploit advanced signal processing, data fusion, intelligent algorithms, and artificial intelligence concepts, which are of paramount importance in vehicles, whether autonomous or human-operated, to provide the information the application requires. The proper design of modern ISS calls for efficient algorithms, software architectures and, sometimes, application-specific hardware architectures to better interpret sensor data, so that the different sources of information can be efficiently integrated for better feature extraction.

We are proud to announce this Special Issue, entitled “Intelligent Sensing Systems for Vehicle”. It aims to gather the most relevant work on state-of-the-art ISS, highlighting the higher level of integration that enables complex applications to satisfy real-time constraints in many challenging settings. Original papers are solicited on subjects including, but not limited to, the following:

+ Feature extraction and sensor fusion;

+ Autonomous vehicle technologies using sensors;

+ Distributed software architectures for ISS;

+ Application-specific hardware architectures for ISS;

+ Application of ISS for aerial, ground, and underwater vehicles.

Dr. Humberto Martinez Barbera
Dr. David Herrero Pérez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multisensory tracking
  • multisensor fusion
  • situation assessment
  • perception
  • navigation
  • computer vision-inspired solutions
  • deep learning for vehicle environment understanding
  • aerial, ground, and underwater vehicles
  • autonomous vehicles

Published Papers (6 papers)

Research

Open Access Article
Sensor Modeling for Underwater Localization Using a Particle Filter
Sensors 2021, 21(4), 1549; https://doi.org/10.3390/s21041549 - 23 Feb 2021
Abstract
This paper presents a framework for processing, modeling, and fusing underwater sensor signals to provide a reliable perception for underwater localization in structured environments. Submerged sensory information is often affected by diverse sources of uncertainty that can deteriorate positioning and tracking. By adopting uncertainty modeling and multi-sensor fusion techniques, the framework can maintain a coherent representation of the environment, filtering out outliers, inconsistencies in sequential observations, and information that is useless for positioning purposes. We evaluate the framework using cameras and range sensors for modeling uncertain features that represent the environment around the vehicle. We locate the underwater vehicle using a Sequential Monte Carlo (SMC) method initialized from the GPS location obtained on the surface. The experimental results show that the framework provides the localization system with a reliable environment representation during underwater navigation in real-world scenarios. In addition, they show an improvement in localization compared to the position estimates obtained with reliable dead-reckoning systems.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
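The Sequential Monte Carlo localization used in this paper can be illustrated with a minimal bootstrap particle filter. The sketch below is not the authors' implementation: the motion noise, the single range-to-landmark measurement model, and all numeric values are hypothetical placeholders, but the predict/weight/resample loop and the initialization from a surface GPS fix follow the idea described in the abstract.

```python
import numpy as np

def particle_filter_step(particles, weights, control, range_meas, landmark, sigma_range=0.5):
    """One bootstrap-filter iteration: predict, weight, resample.

    particles : (N, 2) array of candidate vehicle positions (x, y)
    control   : (dx, dy) dead-reckoned displacement since the last step
    range_meas: measured distance to a known landmark
    landmark  : (x, y) position of that landmark
    """
    n = len(particles)
    # Predict: propagate each particle with the dead-reckoned motion plus noise
    particles = particles + control + np.random.normal(0.0, 0.2, particles.shape)
    # Update: weight particles by how well they explain the range measurement
    expected = np.linalg.norm(particles - landmark, axis=1)
    weights = weights * np.exp(-0.5 * ((range_meas - expected) / sigma_range) ** 2)
    weights += 1e-300                      # avoid an all-zero weight vector
    weights /= weights.sum()
    # Resample (systematic) when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n / 2:
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0               # guard against floating-point round-off
        positions = (np.arange(n) + np.random.uniform()) / n
        idx = np.searchsorted(cumulative, positions)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Initialize the particle cloud from the GPS fix obtained at the surface
gps_fix = np.array([12.0, -3.0])
particles = gps_fix + np.random.normal(0.0, 1.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(
    particles, weights, control=np.array([0.1, 0.0]),
    range_meas=4.2, landmark=np.array([15.0, 0.0]))
print("estimate:", np.average(particles, axis=0, weights=weights))
```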

Open Access Article
A Sensor Fusion Based Nonholonomic Wheeled Mobile Robot for Tracking Control
Sensors 2020, 20(24), 7055; https://doi.org/10.3390/s20247055 - 09 Dec 2020
Abstract
In this paper, a detailed design procedure for real-time trajectory tracking of a nonholonomic wheeled mobile robot (NWMR) is proposed. A 9-axis micro-electro-mechanical systems (MEMS) inertial measurement unit (IMU) is used to measure the posture of the NWMR, while the positions of the NWMR and a hand-held device are acquired by a global positioning system (GPS) and then transmitted via a radio frequency (RF) module. In addition, in order to avoid the gimbal lock produced by computing the posture from Euler angles, quaternions are used to compute the posture of the NWMR. Furthermore, a Kalman filter is used to filter out the readout noise of the GPS and estimate the position of the NWMR, which then tracks the object. The simulation results show that the posture error between the NWMR and the hand-held device converges to zero after 3.928 s of dynamic tracking. Lastly, the experimental results demonstrate the validity and feasibility of the proposed approach.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
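Two ingredients of this abstract, quaternion-based posture computation (which avoids gimbal lock) and Kalman filtering of noisy GPS readings, can be sketched as follows. This is a generic illustration, not the paper's design: the noise values, the constant-position GPS model, and the sample readings are assumptions.

```python
import numpy as np

def quat_integrate(q, gyro, dt):
    """First-order integration of body angular rates (rad/s) into a unit
    quaternion q = [w, x, y, z], avoiding Euler-angle gimbal lock."""
    wx, wy, wz = gyro
    omega = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])
    q = q + 0.5 * dt * omega @ q
    return q / np.linalg.norm(q)

def kalman_1d(x, p, z, q_proc=0.01, r_meas=4.0):
    """Scalar Kalman update for one GPS coordinate: predict with a
    constant-position model, then correct with the noisy reading z."""
    p = p + q_proc                 # predict (state unchanged, variance grows)
    k = p / (p + r_meas)           # Kalman gain
    x = x + k * (z - x)            # correct
    p = (1.0 - k) * p
    return x, p

q = np.array([1.0, 0.0, 0.0, 0.0])           # identity attitude
q = quat_integrate(q, gyro=(0.0, 0.0, 0.1), dt=0.01)

x, p = 0.0, 10.0                              # initial position and variance
for z in [0.8, 1.1, 0.9, 1.3]:                # simulated noisy GPS readings
    x, p = kalman_1d(x, p, z)
print("attitude quaternion:", q, "filtered position:", round(x, 3))
```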

Open Access Article
Think Aloud Protocol Applied in Naturalistic Driving for Driving Rules Generation
Sensors 2020, 20(23), 6907; https://doi.org/10.3390/s20236907 - 03 Dec 2020
Abstract
Understanding naturalistic driving in complex scenarios is an important step towards autonomous driving, and several approaches have been adopted for modeling drivers' behavior. This paper presents the methodology known as the “Think Aloud Protocol” for modeling driving. It is a data-gathering technique in which drivers are asked to verbalize their thoughts while driving; the verbalizations are recorded, and the ensuing analysis of the audio and video makes it possible to derive driving rules. The goal of this paper is to show how the think-aloud methodology is applied to naturalistic driving and to demonstrate the validity of the proposed approach for deriving driving rules. The paper first presents the background of the think-aloud methodology and then its application to driving in roundabouts. The general deployment of this methodology consists of several stages: driver preparation, data collection, audio and video processing, generation of coded transcript files, and generation of driving rules. The main finding of this study is that the think-aloud protocol can be applied to naturalistic driving and, despite some potential limitations discussed in the paper, the presented methodology is a relatively easy approach to deriving driving rules.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
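The final stage, turning coded transcript files into driving rules, can be illustrated with a toy sketch that mines frequent condition-action pairs from coded segments. The coding scheme, thresholds, and rule format below are hypothetical and are not the authors' protocol.

```python
from collections import Counter

# Hypothetical coded transcript: (situation code, verbalized action) pairs
# extracted from synchronized audio/video of roundabout approaches.
segments = [
    ("vehicle_inside_roundabout", "yield"),
    ("vehicle_inside_roundabout", "yield"),
    ("roundabout_clear", "enter"),
    ("vehicle_inside_roundabout", "slow_down"),
    ("roundabout_clear", "enter"),
]

def derive_rules(segments, min_support=2, min_confidence=0.6):
    """Keep condition -> action pairs that occur often enough and account
    for most of the actions observed under that condition."""
    pair_counts = Counter(segments)
    cond_counts = Counter(cond for cond, _ in segments)
    rules = []
    for (cond, action), n in pair_counts.items():
        confidence = n / cond_counts[cond]
        if n >= min_support and confidence >= min_confidence:
            rules.append((cond, action, confidence))
    return rules

for cond, action, conf in derive_rules(segments):
    print(f"IF {cond} THEN {action}  (confidence {conf:.2f})")
```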

Open Access Article
A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection
Sensors 2020, 20(18), 5139; https://doi.org/10.3390/s20185139 - 09 Sep 2020
Cited by 2
Abstract
One major concern in the development of intelligent vehicles is improving driving safety. It is also an essential issue for future autonomous driving and intelligent transportation. In this paper, we present a vision-based system for driving assistance. A front and a rear on-board camera are adopted for visual sensing and environment perception. The purpose is to avoid potential traffic accidents due to forward collision and vehicle overtaking, and to assist drivers or self-driving cars in performing safe lane-change operations. The proposed techniques consist of lane change detection, forward collision warning, and overtaking vehicle identification. A new cumulative density function (CDF)-based symmetry verification method is proposed for the detection of front vehicles. The motion cue obtained from optical flow is used for overtaking detection. It is further combined with a convolutional neural network to remove repetitive patterns for more accurate overtaking vehicle identification. Our approach is able to adapt to a variety of highway and urban scenarios under different illumination conditions. The experiments and performance evaluation carried out on real scene images have demonstrated the effectiveness of the proposed techniques.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
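The motion cue from optical flow mentioned in the abstract can be sketched with OpenCV's dense Farnebäck flow on two consecutive rear-camera frames. The region split and threshold below are hypothetical simplifications, not the paper's pipeline, and the frame file names are placeholders.

```python
import cv2
import numpy as np

def overtaking_cue(prev_gray, curr_gray, flow_threshold=2.0):
    """Dense optical flow between two consecutive rear-camera frames.
    Strong average flow concentrated in the left or right third of the
    frame is taken as a crude cue that a vehicle is overtaking there."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    h, w = magnitude.shape
    left_motion = magnitude[:, : w // 3].mean()
    right_motion = magnitude[:, 2 * w // 3 :].mean()
    return {"left": left_motion > flow_threshold,
            "right": right_motion > flow_threshold}

# Usage with two consecutive grayscale frames of the same size (placeholder paths)
prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
print(overtaking_cue(prev_gray, curr_gray))
```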

Open Access Article
Road-Aware Trajectory Prediction for Autonomous Driving on Highways
Sensors 2020, 20(17), 4703; https://doi.org/10.3390/s20174703 - 20 Aug 2020
Abstract
For driving safely and comfortably, long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey the road geometry, i.e., it should run within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method that leverages high-definition maps together with a deep learning network. We developed a data-efficient learning framework for the trajectory prediction network based on the curvilinear coordinate system of the road and a lane assignment for the surrounding vehicles. We then propose a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they are not only used as an input to the trajectory prediction network, but are also included in the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict a feasible and realistic driver intention and trajectory. Our method has been evaluated on a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
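The curvilinear (road-aligned) coordinate system referred to in the abstract can be illustrated by projecting a vehicle position onto a discretized lane centerline taken from an HD map. The centerline points and query position below are hypothetical; real HD-map centerlines are denser and also carry heading and curvature.

```python
import numpy as np

def to_curvilinear(position, centerline):
    """Project a Cartesian position onto a polyline lane centerline and
    return (s, d): arc length along the lane and signed lateral offset.
    In this road-aligned frame, a feasible trajectory simply keeps |d|
    within the lane half-width."""
    position = np.asarray(position, dtype=float)
    centerline = np.asarray(centerline, dtype=float)
    seg_vec = np.diff(centerline, axis=0)                   # segment vectors
    seg_len = np.linalg.norm(seg_vec, axis=1)
    cum_s = np.concatenate(([0.0], np.cumsum(seg_len)))     # arc length at nodes
    best = (np.inf, 0.0, 0.0)                                # (distance, s, d)
    for i, (p0, v, length) in enumerate(zip(centerline[:-1], seg_vec, seg_len)):
        t = np.clip(np.dot(position - p0, v) / (length * length), 0.0, 1.0)
        foot = p0 + t * v                                    # closest point on segment
        d_vec = position - foot
        dist = np.linalg.norm(d_vec)
        if dist < best[0]:
            sign = np.sign(v[0] * d_vec[1] - v[1] * d_vec[0])  # left of lane = positive
            best = (dist, cum_s[i] + t * length, sign * dist)
    _, s, d = best
    return s, d

# Hypothetical straight-then-curving centerline sampled from an HD map
centerline = [(0, 0), (10, 0), (20, 1), (30, 3)]
print(to_curvilinear((12.0, 1.5), centerline))   # (arc length, lateral offset)
```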

Review

Open Access Review
Driver Fatigue Detection Systems Using Multi-Sensors, Smartphone, and Cloud-Based Computing Platforms: A Comparative Analysis
Sensors 2021, 21(1), 56; https://doi.org/10.3390/s21010056 - 24 Dec 2020
Abstract
Internet of Things (IoT) cloud-based applications deliver advanced solutions for smart cities to decrease traffic accidents caused by driver fatigue on the road. Environmental conditions or driver behavior can ultimately lead to serious roadside accidents. In recent years, many low-cost, computerized driver fatigue detection (DFD) systems have been developed to help drivers, using multi-sensor, smartphone, and cloud-based computing architectures. In this paper, we review state-of-the-art approaches for predicting unsafe driving styles using these three common IoT-based architectures. The novelty of this article is to show the major differences among multi-sensor, smartphone-based, and cloud-based architectures in multimodal feature processing. We discuss the problems that machine learning techniques, particularly deep learning (DL) models, have faced in recent years in predicting driver hypovigilance with each of these three IoT-based architectures. Moreover, we perform state-of-the-art comparisons using driving simulators that incorporate multimodal driver features, and we point to publicly available multimodal datasets for training and testing network architectures in the DFD field. These comparisons can help other authors continue research in this domain. To evaluate performance, we identify the major problems in these three architectures so that researchers can choose the best IoT-based architecture for detecting driver fatigue in a real-time environment. Moreover, the important factors of Multi-Access Edge Computing (MEC) and fifth-generation (5G) networks are analyzed in the context of deep learning architectures to improve the response time of DFD systems. Lastly, it is concluded that there is a research gap in implementing DFD systems on MEC and 5G technologies using multimodal features and DL architectures.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
