
Special Issue "Sensors Applications in Intelligent Vehicle"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 December 2018).

Special Issue Editor

Prof. Dr. Byoung Chul Ko
Guest Editor
Dept. of Computer Engineering, Keimyung University, Shindang-Dong, Dalseo-Gu, Daegu 704-701, Korea
Interests: fire and smoke detection; advanced driver assistant system; human detection and tracking; analysis of remote sensing images; human action recognition; medical image processing

Special Issue Information

Dear Colleagues,

An intelligent vehicle (IV) is an autonomous driving vehicle that is capable of sensing its environment and navigating without the driver’s input. Most IV technologies depend on multiple input values, such as those generated by camera sensors, infrared sensors, LiDAR, radar, and ultrasonic sensors. Sensors keep evolving, becoming smaller and less expensive, while sensor configuration architectures are becoming ever more complicated. Therefore, more comprehensive and detailed research is needed to increase the abilities of IVs. The purpose of this Special Issue is to introduce current developments in sensors related to IV technologies, as well as innovative sensor fusion techniques combined with computer vision, sensor networks, machine learning, and artificial intelligence, including deep learning. You are invited to submit contributions of original research, advancements, developments, and experiments pertaining to IVs combined with sensors. The Special Issue therefore welcomes newly developed methods and ideas combining the data obtained from various sensors in the following fields (but not limited to these):

  • Autonomous vehicle technologies using sensors
  • Sensor fusion techniques for IVs
  • Wireless sensor networks and communication in intelligent transportation systems (ITS) and connected vehicles
  • Interaction of autonomous systems and drivers
  • State-of-the-art reviews of sensors for autonomous vehicles and IVs
  • Decision algorithms for autonomous driving
  • Driver state monitoring
  • Pedestrian detection and tracking
  • Advanced driver assistant systems (ADAS)
Prof. Dr. Byoung Chul Ko
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent vehicles
  • Sensors
  • Sensor networks
  • Sensor fusion
  • Advanced driver assistant systems
  • Intelligent transportation systems
  • Decision algorithms

Published Papers (16 papers)


Research

Open Access Article
Track-Before-Detect Framework-Based Vehicle Monocular Vision Sensors
Sensors 2019, 19(3), 560; https://doi.org/10.3390/s19030560 - 29 Jan 2019
Cited by 2
Abstract
This paper proposes a Track-before-Detect framework for multibody motion segmentation (named TbD-SfM). Our contribution relies on a tightly coupled track-before-detect strategy intended to reduce the complexity of existing Multibody Structure from Motion approaches. Effort was directed towards an algorithm variant suited to a future embedded implementation for dynamic scene analysis, while improving processing-time performance. This generic motion segmentation approach can be transposed to several transportation sensor systems, since no constraints are imposed on the segmented motions (6-DOF model). The tracking scheme is analyzed and its performance is evaluated under thorough experimental conditions, including full-scale driving scenarios from known and available datasets. Results on challenging scenarios, including the presence of multiple simultaneous moving objects observed from a moving camera, are reported and discussed. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
Generalized Parking Occupancy Analysis Based on Dilated Convolutional Neural Network
Sensors 2019, 19(2), 277; https://doi.org/10.3390/s19020277 - 11 Jan 2019
Cited by 6
Abstract
The importance of vacant parking space detection systems is increasing dramatically, as the avoidance of traffic congestion and the time-consuming process of searching for an empty parking space are crucial problems for drivers in urban centers. However, the existing parking space occupancy detection systems are either hardware-expensive or not well generalized for varying images captured from different camera views. As a solution, we take advantage of an affordable visual detection method that is made possible by the fact that camera monitoring is already available in the majority of parking areas. However, this is a challenging vision task because of outdoor lighting variation, perspective distortion, occlusions, different camera viewpoints, and changes due to the various seasons of the year. To overcome these obstacles, we propose an approach based on a Dilated Convolutional Neural Network specifically designed for detecting parking space occupancy in a parking lot, given only an image of a single parking spot as input. To evaluate our method and allow its comparison with previous strategies, we trained and tested it on the well-known publicly available datasets PKLot and CNRPark+EXT. In these datasets, the parking lot images are already labeled, and therefore we did not need to label them manually. The proposed method shows more reliability than prior works, especially when tested on a completely different subset of images. Since previous studies compared their methods with the well-known AlexNet architecture, which achieves highly promising results, we also assessed our model in comparison with AlexNet. Our investigations showed that, in comparison with previous approaches, for the task of classifying given parking spaces as vacant or occupied, the proposed approach is more robust, stable, and better generalized for unseen images captured from completely different camera viewpoints, which strongly indicates that it would generalize effectively to other parking lots. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
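As an editorial illustration of the paper's central building block (not the authors' network or code), the sketch below shows in NumPy how a dilated convolution covers a wider receptive field with the same number of taps; the function name and toy signal are purely illustrative:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D convolution (correlation) with a dilation factor:
    the taps of w are spaced `dilation` samples apart in x."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, dilation=1))  # 3 adjacent samples per output
print(dilated_conv1d(x, w, dilation=2))  # same 3 taps, spanning 5 samples
```

Stacking such layers lets a small classifier see an entire parking-spot crop without pooling away spatial detail, which is the motivation for dilation here.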

Open Access Article
Visual Semantic Landmark-Based Robust Mapping and Localization for Autonomous Indoor Parking
Sensors 2019, 19(1), 161; https://doi.org/10.3390/s19010161 - 04 Jan 2019
Cited by 5
Abstract
Autonomous parking in an indoor parking lot without human intervention is one of the most demanded and challenging tasks of autonomous driving systems. The key to this task is precise real-time indoor localization. However, state-of-the-art low-level visual feature-based simultaneous localization and mapping (VSLAM) systems suffer in monotonous or texture-less scenes and under poor illumination or dynamic conditions. Additionally, low-level feature-based mapping results are hard for human beings to use directly. In this paper, we propose a semantic landmark-based robust VSLAM for real-time localization of autonomous vehicles in indoor parking lots. The parking slots are extracted as meaningful landmarks and enriched with confidence levels. We then propose a robust optimization framework to solve the aliasing problem of semantic landmarks by dynamically eliminating suboptimal constraints in the pose graph and correcting erroneous parking slot associations. As a result, a semantic map of the parking lot, which can be used by both autonomous driving systems and human beings, is established automatically and robustly. We evaluated the real-time localization performance using multiple autonomous vehicles, and a track-tracing repeatability of 0.3 m was achieved at 10 kph of autonomous driving. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
Intention Estimation Using Set of Reference Trajectories as Behaviour Model
Sensors 2018, 18(12), 4423; https://doi.org/10.3390/s18124423 - 14 Dec 2018
Cited by 2
Abstract
Autonomous robotic systems operating in the vicinity of other agents, such as humans, manually driven vehicles and other robots, can model the behaviour and estimate intentions of the other agents to enhance efficiency of their operation, while preserving safety. We propose a data-driven approach to model the behaviour of other agents, which is based on a set of trajectories navigated by other agents. Then, to evaluate the proposed behaviour modelling approach, we propose and compare two methods for agent intention estimation based on: (i) particle filtering; and (ii) decision trees. The proposed methods were validated using three datasets that consist of real-world bicycle and car trajectories in two different scenarios, at a roundabout and at a t-junction with a pedestrian crossing. The results validate the utility of the data-driven behaviour model, and show that decision-tree based intention estimation works better on a binary-class problem, whereas the particle-filter based technique performs better on a multi-class problem, such as the roundabout, where the method yielded an average gain of 14.88 m for correct intention estimation locations compared to the decision-tree based method. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
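A deliberately minimal sketch of the idea of scoring intentions against reference trajectories follows; the trajectories, noise model, and function names are invented for illustration, and the paper's actual particle filter is considerably richer (it filters over continuous state, with resampling):

```python
import math, random

# Hypothetical reference trajectories for two intentions at a junction:
# "straight" keeps y = 0, "turn" curves away. Each is a list of (x, y).
REFS = {
    "straight": [(float(t), 0.0) for t in range(10)],
    "turn":     [(float(t), 0.05 * t * t) for t in range(10)],
}

def likelihood(obs, ref_pt, sigma=0.5):
    """Gaussian likelihood of an observed position given a reference point."""
    dx, dy = obs[0] - ref_pt[0], obs[1] - ref_pt[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def estimate_intention(observed):
    """Weight each intention by how well its reference trajectory explains
    the observed positions (a degenerate filter: one hypothesis per
    intention, no resampling)."""
    w = {k: 1.0 for k in REFS}
    for i, obs in enumerate(observed):
        for k, ref in REFS.items():
            w[k] *= likelihood(obs, ref[i])
    total = sum(w.values()) or 1.0
    return {k: v / total for k, v in w.items()}

random.seed(0)
obs = [(float(t), 0.05 * t * t + random.gauss(0, 0.05)) for t in range(6)]
posterior = estimate_intention(obs)
```

With noisy observations drawn near the "turn" trajectory, the posterior mass should concentrate on "turn" as the trajectories diverge.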

Open Access Article
Connection of the SUMO Microscopic Traffic Simulator and the Unity 3D Game Engine to Evaluate V2X Communication-Based Systems
Sensors 2018, 18(12), 4399; https://doi.org/10.3390/s18124399 - 12 Dec 2018
Cited by 9
Abstract
In-vehicle applications that are based on Vehicle-to-Everything (V2X) communication technologies need to be evaluated under lab-controlled conditions before performing field tests. The need for a tailored platform to perform specific research on cooperative Advanced Driving Assistance Systems (ADAS) and to assess the effect on driver behavior and driving performance motivated the development of a driver-centric traffic simulator built over a 3D graphics engine. The engine creates a driving situation as it communicates with a traffic simulator, as a means to simulate real-life traffic scenarios. The TraCI as a Service (TraaS) library was implemented to perform the interaction between the driver-controlled vehicle and the Simulation of Urban MObility (SUMO). As an extension of a previous version, this work improves simulation performance and realism by reducing computational demand and integrating a tailored scenario with the ADAS to be tested. The usability of the implemented simulation platform was evaluated by means of an experiment on the efficiency of a Traffic Light Assistant (TLA); the analysis of the responses showed that 80% of the participants were satisfied with the simulator and the TLA system implemented. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
Lane Endpoint Detection and Position Accuracy Evaluation for Sensor Fusion-Based Vehicle Localization on Highways
Sensors 2018, 18(12), 4389; https://doi.org/10.3390/s18124389 - 11 Dec 2018
Cited by 5
Abstract
Landmark-based vehicle localization is a key component of both autonomous driving and advanced driver assistance systems (ADAS). Previously used landmarks in highways such as lane markings lack information on longitudinal positions. To address this problem, lane endpoints can be used as landmarks. This paper proposes two essential components when using lane endpoints as landmarks: lane endpoint detection and its accuracy evaluation. First, it proposes a method to efficiently detect lane endpoints using a monocular forward-looking camera, which is the most widely installed perception sensor. Lane endpoints are detected with a small amount of computation based on the following steps: lane detection, lane endpoint candidate generation, and lane endpoint candidate verification. Second, it proposes a method to reliably measure the position accuracy of the lane endpoints detected from images taken while the camera is moving at high speed. A camera is installed with a mobile mapping system (MMS) in a vehicle, and the position accuracy of the lane endpoints detected by the camera is measured by comparing their positions with ground truths obtained by the MMS. In the experiment, the proposed methods were evaluated and compared with previous methods based on a dataset acquired while driving on 80 km of highway in both daytime and nighttime. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
Driver’s Facial Expression Recognition in Real-Time for Safe Driving
Sensors 2018, 18(12), 4270; https://doi.org/10.3390/s18124270 - 04 Dec 2018
Cited by 12
Abstract
In recent years, researchers of deep neural network (DNN)-based facial expression recognition (FER) have reported results showing that these approaches overcome the limitations of conventional machine learning-based FER approaches. However, as DNN-based FER approaches require an excessive amount of memory and incur high processing costs, their application in various fields is very limited and depends on the hardware specifications. In this paper, we propose a fast FER algorithm for monitoring a driver’s emotions that is capable of operating in low-specification devices installed in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier that is trained based on the similarity of sample data, in order to improve its accuracy, is employed. In the first step, facial landmarks are detected from input images and geometric features are extracted, considering the spatial position between landmarks. These feature vectors are then fed into the proposed hierarchical WRF classifier to classify facial expressions. Our method was evaluated experimentally using three databases, the extended Cohn-Kanade database (CK+), MMI, and the Keimyung University Facial Expression of Drivers (KMU-FED) database, and its performance was compared with that of state-of-the-art methods. The results show that our proposed method yields performance similar to that of deep-learning FER methods, at 92.6% for CK+ and 76.7% for MMI, with a significantly reduced processing cost, approximately 3731 times less than that of the DNN method. These results confirm that the proposed method is optimized for real-time embedded applications having limited computing resources. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
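As a toy illustration of landmark-based geometric features (the paper's actual landmark set and feature definition may differ), normalizing pairwise landmark distances by the inter-ocular distance yields a scale-invariant descriptor suitable for a tree-based classifier; the landmark layout below is an assumption:

```python
import math
from itertools import combinations

def geometric_features(landmarks):
    """Pairwise distances between facial landmarks, normalized by the
    inter-ocular distance (assumed: landmarks[0] and landmarks[1] are
    the two eye centres). Returns a scale-invariant feature vector."""
    iod = math.dist(landmarks[0], landmarks[1]) or 1.0
    return [math.dist(a, b) / iod for a, b in combinations(landmarks, 2)]

pts = [(0, 0), (2, 0), (1, 2), (1, 3)]   # toy eye/eye/nose/mouth positions
feats = geometric_features(pts)
```

Because the vector depends only on distance ratios, the same face at a different distance from the in-cabin camera maps to the same features, which keeps a lightweight classifier viable on embedded hardware.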

Open Access Article
Cloud Update of Tiled Evidential Occupancy Grid Maps for the Multi-Vehicle Mapping
Sensors 2018, 18(12), 4119; https://doi.org/10.3390/s18124119 - 23 Nov 2018
Cited by 3
Abstract
Nowadays, many intelligent vehicles are equipped with various sensors to recognize their surrounding environment and to measure the motion or position of the vehicle. In addition, the number of intelligent vehicles equipped with a mobile Internet modem is increasing. Based on the sensors and Internet connection, intelligent vehicles are able to share sensor information with other vehicles via a cloud service. Sensor information sharing via the cloud service promises to improve the safe and efficient operation of multiple intelligent vehicles. This paper presents a cloud update framework of occupancy grid maps for multiple intelligent vehicles in a large-scale environment. Evidential theory is applied to create the occupancy grid maps to address sensor disturbances such as measurement noise, occlusion, and dynamic objects. Multiple vehicles equipped with LiDARs, motion sensors, and a low-cost GPS receiver create the evidential occupancy grid map (EOGM) for their passing trajectory based on GraphSLAM. A geodetic quad-tree tile system is applied to manage the EOGM, which provides a common tiling format to cover the large-scale environment. The created EOGM tiles are uploaded to the EOGM cloud and merged with old EOGM tiles in the cloud using the Dempster combination rule of evidential theory. Experiments were performed to evaluate the multi-vehicle EOGM mapping and the cloud update framework for a large-scale road environment. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
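The cloud-side tile merge relies on Dempster's rule of combination; a minimal sketch over the frame {free, occupied} with explicit ignorance mass follows (dictionary keys and example masses are illustrative, not the paper's data format):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for one grid cell over the frame
    {F (free), O (occupied)}, with 'U' holding the mass assigned to the
    whole frame (ignorance). Conflicting mass is renormalized away."""
    conflict = m1["F"] * m2["O"] + m1["O"] * m2["F"]
    k = 1.0 - conflict
    if k <= 0:
        raise ValueError("totally conflicting evidence")
    return {
        "F": (m1["F"] * m2["F"] + m1["F"] * m2["U"] + m1["U"] * m2["F"]) / k,
        "O": (m1["O"] * m2["O"] + m1["O"] * m2["U"] + m1["U"] * m2["O"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }

old = {"F": 0.6, "O": 0.1, "U": 0.3}   # cell from the stored cloud tile
new = {"F": 0.7, "O": 0.0, "U": 0.3}   # cell from a freshly uploaded tile
merged = dempster_combine(old, new)
```

Two agreeing "free" observations reinforce each other and shrink the ignorance mass, which is why the evidential representation copes better with occlusion and dynamic objects than a plain occupancy probability.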

Open Access Article
Real-Life Implementation of a GPS-Based Path-Following System for an Autonomous Vehicle
Sensors 2018, 18(11), 3940; https://doi.org/10.3390/s18113940 - 14 Nov 2018
Cited by 3
Abstract
This work is meant to report on activities at TU Delft on the design and implementation of a path-following system for an autonomous Toyota Prius. The design encompasses: finding the vehicle parameters for the actual vehicle to be used for control design; lateral and longitudinal controllers for steering and acceleration, respectively. The implementation covers the real-time aspects via LabVIEW from National Instruments and the real-life tests. The deployment of the system was enabled by a Spatial Dual Global Positioning System (GPS) system providing more accuracy than the regular GPS. The results discussed in this work represent the first autonomous tests on the Toyota Prius at TU Delft, and we expect the proposed system to be a benchmark against which to test more advanced solutions. The tests show that the system is able to perform in real-time while satisfying comfort and trajectory tracking requirements: in particular, the tracking error was within 16 cm, which is compatible with the 13 cm precision of the Spatial Dual GPS, whereas the longitudinal and lateral acceleration are within comfort levels as defined by available experimental studies. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
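The paper does not publish its controller code; as a hedged sketch of a common lateral path-following baseline (not the TU Delft design), here is a bicycle-model pure-pursuit steering law, with lookahead distance, wheelbase, and names chosen for illustration:

```python
import math

def pure_pursuit_steer(pose, path, lookahead=5.0, wheelbase=2.7):
    """Steering angle toward the first path point at least `lookahead`
    metres from the rear axle (bicycle-model pure pursuit).
    pose = (x, y, yaw); path = list of (x, y) waypoints."""
    x, y, yaw = pose
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            break
    # heading error to the target point, in the vehicle frame
    alpha = math.atan2(py - y, px - x) - yaw
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

path = [(float(i), 0.0) for i in range(20)]        # straight road along x
on_path = pure_pursuit_steer((0.0, 0.0, 0.0), path)   # expect ~0 steering
offset = pure_pursuit_steer((0.0, 1.0, 0.0), path)    # offset left: steer back
```

A GPS fix of the quality the paper reports (≈13 cm) is what makes such waypoint-relative geometry usable directly as the controller's position input.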

Open Access Article
Performance Analysis of NDT-based Graph SLAM for Autonomous Vehicle in Diverse Typical Driving Scenarios of Hong Kong
Sensors 2018, 18(11), 3928; https://doi.org/10.3390/s18113928 - 14 Nov 2018
Cited by 10
Abstract
Robust and lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, Light detection and ranging (LiDAR) can provide continuous and high-frequency pose estimation by means of mapping, on condition that enough environment features are available. The error of mapping can accumulate over time. Therefore, LiDAR is usually integrated with other sensors. In diverse urban scenarios, the environment feature availability relies heavily on the traffic (moving and static objects) and the degree of urbanization. Common LiDAR-based simultaneous localization and mapping (SLAM) demonstrations tend to be studied in light traffic and less urbanized area. However, its performance can be severely challenged in deep urbanized cities, such as Hong Kong, Tokyo, and New York with dense traffic and tall buildings. This paper proposes to analyze the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios to further evaluate the relationship between the performance of LiDAR-based SLAM and scenario conditions. The normal distribution transform (NDT) is employed to calculate the transformation between frames of point clouds. Then, the LiDAR odometry is performed based on the calculated continuous transformation. The state-of-the-art graph-based optimization is used to integrate the LiDAR odometry measurements to implement optimization. The 3D building models are generated and the definition of the degree of urbanization based on Skyplot is proposed. Experiments are implemented in different scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of the LiDAR-based SLAM using NDT is strongly related to the traffic condition and degree of urbanization. 
The best performance is achieved in the sparse area with normal traffic, and the worst performance is obtained in the dense urban area, with 3D positioning error (summation of horizontal and vertical) gradients of 0.024 m/s and 0.189 m/s, respectively. The analyzed results can serve as a comprehensive benchmark for evaluating the performance of standalone NDT-based graph SLAM in diverse scenarios, which is significant for multi-sensor fusion in autonomous vehicles. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
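The NDT representation this paper builds on models each voxel of the point cloud with a Gaussian; a minimal NumPy sketch of the per-cell statistics and the likelihood score that NDT registration maximizes follows (function names, toy 2-D points, and the regularization term are illustrative):

```python
import numpy as np

def ndt_cell(points, eps=1e-3):
    """Mean and (regularized) covariance of the points in one NDT voxel;
    together they define the cell's normal distribution."""
    pts = np.asarray(points, dtype=float)
    mu = pts.mean(axis=0)
    d = pts - mu
    cov = d.T @ d / max(len(pts) - 1, 1) + eps * np.eye(pts.shape[1])
    return mu, cov

def ndt_score(p, mu, cov):
    """Unnormalized likelihood of a transformed scan point under the
    cell's Gaussian; NDT sums this over all points and optimizes the
    pose that maximizes the total."""
    d = np.asarray(p, dtype=float) - mu
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

cell_pts = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)]
mu, cov = ndt_cell(cell_pts)
```

In a feature-poor or highly dynamic urban canyon, many cells have degenerate or unstable Gaussians, which is one mechanism behind the scenario-dependent performance the paper quantifies.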

Open Access Article
Road Scene Simulation Based on Vehicle Sensors: An Intelligent Framework Using Random Walk Detection and Scene Stage Reconstruction
Sensors 2018, 18(11), 3782; https://doi.org/10.3390/s18113782 - 05 Nov 2018
Abstract
Road scene model construction is an important aspect of intelligent transportation system research. This paper proposes an intelligent framework that can automatically construct road scene models from image sequences. The road and foreground regions are detected at superpixel level via a new kind of random walk algorithm. The seeds for different regions are initialized by trapezoids that are propagated from adjacent frames using optical flow information. The superpixel level region detection is implemented by the random walk algorithm, which is then refined by a fast two-cycle level set method. After this, scene stages can be specified according to a graph model of traffic elements. These then form the basis of 3D road scene models. Each technical component of the framework was evaluated and the results confirmed the effectiveness of the proposed approach. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
FAST Pre-Filtering-Based Real Time Road Sign Detection for Low-Cost Vehicle Localization
Sensors 2018, 18(10), 3590; https://doi.org/10.3390/s18103590 - 22 Oct 2018
Cited by 4
Abstract
In order to overcome the limitations of GNSS/INS and to keep the cost affordable for mass-produced vehicles, a precise localization system fusing the estimated vehicle positions from low-cost GNSS/INS and low-cost perception sensors is being developed. For vehicle position estimation, a perception sensor detects a road facility and uses it as a landmark. For this localization system, this paper proposes a method to detect a road sign as a landmark using a monocular camera whose cost is relatively low compared to other perception sensors. Since the inside pattern and aspect ratio of a road sign are various, the proposed method is based on the part-based approach that detects corners and combines them to detect a road sign. While the recall, precision, and processing time of the state of the art detector based on a convolutional neural network are 99.63%, 98.16%, and 4802 ms respectively, the recall, precision, and processing time of the proposed method are 97.48%, 98.78%, and 66.7 ms, respectively. The detection performance of the proposed method is as good as that of the state of the art detector and its processing time is drastically reduced to be applicable for an embedded system. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Open Access Article
Dual-Bayes Localization Filter Extension for Safeguarding in the Case of Uncertain Direction Signals
Sensors 2018, 18(10), 3539; https://doi.org/10.3390/s18103539 - 19 Oct 2018
Cited by 1
Abstract
In order to run a localization filter for parking systems in real time, the directional information must be directly available when a distance measurement of the wheel speed sensor is detected. When the vehicle is launching, the wheel speed sensors may already detect distance measurements in the form of Delta-Wheel-Pulse-Counts (DWPCs) without having defined a rolling direction. This phenomenon is particularly problematic during parking maneuvers, where many small correction strokes are made. If a localization filter is used for positioning, the restrained DWPCs cannot be processed in real time. Without directional information in the form of a rolling-direction signal, the filter has to ignore the DWPCs or artificially stop until a rolling-direction signal is present. For this reason, methods for earlier estimation of the rolling direction, based on the pattern of the incoming DWPCs and on the force equilibrium, have been presented. Since these methods still have their weaknesses and a wrong estimation of the rolling direction can occur, an extension of a so-called Dual-Localization filter approach is presented. The Dual-Localization filter uses two localization filters and an intelligent initialization logic that ensures that both filters move in opposite directions at launch. The primary localization filter uses the estimated direction and the secondary one the opposite direction. As soon as a valid rolling-direction signal is present, an initialization logic decides which localization filter has previously moved in the true direction. The localization filter that has moved in the wrong direction is initialized with the states and covariances of the other localization filter. This extension allows fast, real-time capability to be achieved, and the accumulated velocity error can be dramatically reduced. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
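The keep-the-right-filter idea described above can be sketched in a few lines; this toy 1-D version (invented class name and signal convention, no covariances) only illustrates the selection-and-reinitialization logic, not the paper's full Bayes filters:

```python
class DualLocalization:
    """Integrate wheel pulses under both direction hypotheses until the
    sensor reports a valid rolling direction, then keep the filter that
    moved the right way and overwrite the other with its state."""
    def __init__(self, x0=0.0):
        self.forward = x0   # primary filter: assumes rolling forward
        self.reverse = x0   # secondary filter: assumes rolling backward

    def on_pulse(self, distance):
        """A DWPC arrives before the rolling direction is known."""
        self.forward += distance
        self.reverse -= distance

    def on_direction(self, direction):
        """direction: +1 forward, -1 reverse. Reinitialize the filter
        that moved the wrong way and return the selected position."""
        if direction > 0:
            self.reverse = self.forward
        else:
            self.forward = self.reverse
        return self.forward

loc = DualLocalization()
for _ in range(4):
    loc.on_pulse(0.05)        # four 5 cm pulses before direction is known
x = loc.on_direction(-1)      # sensor: the car was actually reversing
```

Because both hypotheses were integrated all along, no pulses are discarded and no artificial stop is needed when the direction signal finally arrives.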

Open Access Article
IMU-Based Virtual Road Profile Sensor for Vehicle Localization
Sensors 2018, 18(10), 3344; https://doi.org/10.3390/s18103344 - 07 Oct 2018
Cited by 3
Abstract
A road profile can be a good reference feature for vehicle localization when a Global Positioning System signal is unavailable. However, cost effective and compact devices measuring road profiles are not available for production vehicles. This paper presents a longitudinal road profile estimation method as a virtual sensor for vehicle localization without using bulky and expensive sensor systems. An inertial measurement unit installed in the vehicle provides filtered signals of the vehicle’s responses to the longitudinal road profile. A disturbance observer was designed to extract the characteristic features of the road profile from the signals measured by the inertial measurement unit. Design synthesis based on a Kalman filter was used for the observer design. A nonlinear damper is explicitly considered to improve the estimation accuracy. Virtual measurement signals are introduced for observability. The suggested methodology estimates the road profile that is sufficiently accurate for localization. Based on the estimated longitudinal road profile, we generated spectrogram plots as the features for localization. The localization is realized by matching the spectrogram plot with pre-indexed plots. The localization using the estimated road profile shows a few meters accuracy, suggesting a possible road profile estimation method as an alternative sensor for vehicle localization. Full article
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
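The matching step this abstract describes — comparing a spectrogram of the estimated road profile against pre-indexed plots — can be sketched roughly as follows. This is a toy illustration, not the authors' implementation; the window parameters, the Euclidean distance metric, and the function names are all assumptions:

```python
import numpy as np

def spectrogram(profile, win=64, hop=32):
    """Magnitude spectrogram of a 1-D road-profile signal via a sliding FFT."""
    frames = [profile[i:i + win] * np.hanning(win)
              for i in range(0, len(profile) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

def localize(estimated, indexed):
    """Return the index of the pre-indexed road segment whose spectrogram
    is closest (in Euclidean distance) to that of the estimated profile."""
    s = spectrogram(estimated)
    dists = [np.linalg.norm(s - spectrogram(ref)) for ref in indexed]
    return int(np.argmin(dists))

# Toy example: three stored road segments; the query is a noisy
# re-measurement of segment 1, so matching should recover index 1.
rng = np.random.default_rng(0)
segments = [rng.standard_normal(512) for _ in range(3)]
query = segments[1] + 0.05 * rng.standard_normal(512)
print(localize(query, segments))  # → 1
```

In practice the pre-indexed plots would be keyed to map positions, so the matched index translates directly into a location estimate.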
Open Access Article
Modeling and Performance of the IEEE 802.11p Broadcasting for Intra-Platoon Communication
Sensors 2018, 18(9), 2971; https://doi.org/10.3390/s18092971 - 06 Sep 2018
Cited by 3
Abstract
Road capacity, traffic safety, and energy efficiency can be greatly improved by forming platoons with small inter-vehicle spacing. Automated controllers obtain vehicle speed, acceleration, and position through vehicular ad hoc networks (VANETs), so the performance of platoon communication has a significant impact on platoon stability. To the best of our knowledge, little research addresses the packet delay and packet dropping rate of platoon communication based on IEEE 802.11p broadcasting. In this paper, we introduce a platoon structure model, a vehicle control model, and a communication model for a single-platoon scenario. Using a Markov process and M/G/1/K queuing theory, we put forward an analytical model to assess the performance of intra-platoon communication. The analytical model is validated by simulations, and the influence of communication parameters on intra-platoon communication performance is discussed. In addition, the experimental results demonstrate that IEEE 802.11p-based intra-platoon communication guarantees the stability of the platoon.
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
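The paper's queuing analysis is analytical, but an M/G/1/K queue of the kind it builds on can also be simulated directly to estimate the two metrics the abstract highlights, packet dropping rate and packet delay. This is a hypothetical companion sketch, not the paper's model; the arrival rate, service-time distribution, and buffer size below are assumed values chosen only to be plausible for periodic 802.11p beaconing:

```python
import random

def mg1k_sim(lam, service, K, n_arrivals=20000, seed=1):
    """Simulate an M/G/1/K queue (Poisson arrivals at rate lam, general
    service times drawn by `service`, at most K packets in the system).
    Returns (packet dropping rate, mean delay of accepted packets)."""
    rng = random.Random(seed)
    t = 0.0          # arrival clock
    depart = []      # departure times of packets still in the system
    dropped, delays = 0, []
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)               # next Poisson arrival
        depart = [d for d in depart if d > t]   # purge finished packets
        if len(depart) >= K:
            dropped += 1                        # buffer full: packet lost
            continue
        start = max(t, depart[-1]) if depart else t
        d = start + service(rng)                # queueing + transmission
        depart.append(d)
        delays.append(d - t)
    return dropped / n_arrivals, sum(delays) / len(delays)

# Assumed numbers: 500 pkt/s offered load, 1-2 ms service time, K = 10.
drop, delay = mg1k_sim(lam=500.0, service=lambda r: r.uniform(0.001, 0.002), K=10)
print(f"drop rate {drop:.3f}, mean delay {delay * 1e3:.2f} ms")
```

Sweeping `lam`, the service-time distribution, or `K` shows how the two metrics trade off against each other, which is the kind of parameter study the analytical model makes cheap.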
Open Access Article
An Extended Kalman Filter and Back Propagation Neural Network Algorithm Positioning Method Based on Anti-lock Brake Sensor and Global Navigation Satellite System Information
Sensors 2018, 18(9), 2753; https://doi.org/10.3390/s18092753 - 21 Aug 2018
Cited by 4
Abstract
Telematics box (T-Box) chip-level Global Navigation Satellite System (GNSS) receiver modules usually suffer from GNSS signal failure or noise in urban environments. To resolve this issue, this paper presents a real-time positioning method that combines Extended Kalman Filter (EKF) and Back Propagation Neural Network (BPNN) algorithms with Anti-lock Braking System (ABS) sensor and GNSS information. Experiments were performed using a vehicle equipped with a T-Box. The T-Box first uses a kinematics-based Pre-EKF to fuse the four wheel speeds, the yaw rate, and the steering wheel angle from the ABS sensors, yielding a more accurate vehicle speed and heading angular velocity. To reduce the noise of the GNSS information, an After-EKF then fuses the vehicle speed, heading angular velocity, and GNSS data to obtain low-noise positioning data. The heading angular velocity error is extracted as the training target, and part of the low-noise positioning data is used as input for training a BPNN model. When GNSS positioning is invalid, the trained BPNN outputs a corrected heading angular velocity; together with the vehicle speed, this is integrated into a relative displacement that is added to the previous absolute position to obtain a new position. With data from high-precision real-time kinematic differential positioning equipment as the reference, the dual EKF reduces the noise range of the GNSS information and keeps good positioning fixes within 5 m of the road (i.e., the positioning status is valid). When the GNSS information was shielded (making the positioning status invalid) and the preceding data were used as training samples, the vehicle maintained its position estimate for 15 min without GNSS information on the recycling line. The results indicate that this positioning method reduces vehicle positioning noise when GNSS information is valid and can determine the position during long periods of invalid GNSS information.
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)
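The fusion loop this abstract describes — dead reckoning from ABS-derived speed and yaw rate, corrected by a GNSS position fix whenever one is valid — can be illustrated with a minimal single-stage EKF. This is a hedged sketch, not the paper's dual-EKF implementation; the unicycle motion model and the noise parameters `q` and `r` are assumptions:

```python
import numpy as np

def ekf_step(x, P, v, omega, dt, z=None, q=0.01, r=4.0):
    """One EKF cycle for state x = [x_pos, y_pos, heading]:
    predict from speed v and yaw rate omega (unicycle model),
    then optionally update with a GNSS position fix z = [x, y].
    q and r are assumed process/measurement noise variances."""
    px, py, th = x
    # Predict: integrate the unicycle motion model over dt.
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # motion-model Jacobian
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P = F @ P @ F.T + q * np.eye(3)
    if z is None:                        # GNSS invalid: dead reckoning only
        return x_pred, P
    H = np.array([[1.0, 0.0, 0.0],       # GNSS observes position only
                  [0.0, 1.0, 0.0]])
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (np.asarray(z) - H @ x_pred)
    return x_new, (np.eye(3) - K @ H) @ P

# Drive east at 10 m/s for 0.1 s; a noisy GNSS fix nudges the estimate.
x, P = np.array([0.0, 0.0, 0.0]), np.eye(3)
x, P = ekf_step(x, P, v=10.0, omega=0.0, dt=0.1, z=[1.2, -0.1])
print(np.round(x, 3))
```

Calling `ekf_step` with `z=None` reproduces the GNSS-outage case: the filter keeps predicting from the wheel-speed and yaw-rate inputs alone, which is where the paper substitutes the BPNN-corrected heading angular velocity.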