
Special Issue "Sensors and Sensor's Fusion in Autonomous Vehicles"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 31 December 2020.

Special Issue Editors

Prof. Dr. Andrzej Stateczny
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk Technical University, Narutowicza St. 11/12, Gdansk, Poland
Interests: radar navigation; comparative (terrain reference) navigation; multi-sensor data fusion; automotive navigation; radar and sonar target detection and tracking; sonar imaging and understanding; MBES bathymetry; autonomous navigation; artificial intelligence for navigation; deep learning; geoinformatics; underwater navigation
Dr. Marta Wlodarczyk-Sielicka
Guest Editor
Department of Navigation, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
Interests: spatial big data; spatial analysis; artificial neural networks; deep learning; data fusion; processing of bathymetric data; sea bottom modeling; data reduction
Dr. Pawel Burdziakowski
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk Technical University, Narutowicza St. 11/12, Gdansk, Poland
Interests: unmanned aerial vehicles indoor navigation; autonomous navigation and algorithms; non-GNSS navigation; photogrammetry; real-time photogrammetry; UAV technology; computer vision

Special Issue Information

Dear Colleagues,

This Special Issue seeks the submission of review and original research articles related to sensors and sensor fusion in autonomous vehicles. Autonomous vehicle navigation has been at the centre of several major developments, in both civilian and defence applications. New technologies such as multi-sensor data fusion, big data processing, and deep learning are improving the sensors and systems used in these applications, while new sensing concepts such as 3D radar, 3D sonar, and LiDAR are driving the revolutionary development of autonomous vehicles.

The Special Issue is open to contributions on many aspects of autonomous vehicle sensors and their fusion, including autonomous navigation, multi-sensor fusion, big data processing for autonomous vehicle navigation, sensor-related science and research, algorithm and technical development, analysis tools, synergy among navigation sensors, and artificial intelligence methods for autonomous vehicle navigation.

Prof. Dr. Andrzej Stateczny
Dr. Marta Wlodarczyk-Sielicka
Dr. Pawel Burdziakowski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor fusion;
  • Sensors for autonomous navigation;
  • Comparative (terrain reference) navigation;
  • 3D radar and 3D sonar;
  • Gravity and geomagnetic sensors;
  • LiDAR;
  • Artificial intelligence in autonomous vehicles;
  • Big data processing;
  • Close-range photogrammetry and computer vision methods;
  • Deep learning algorithms;
  • Fusion of spatial data;
  • Processing of sensor data.

Published Papers (7 papers)


Research

Open Access Article
Simultaneous Estimation of Vehicle Roll and Sideslip Angles through a Deep Learning Approach
Sensors 2020, 20(13), 3679; https://doi.org/10.3390/s20133679 - 30 Jun 2020
Abstract
Presently, autonomous vehicles are on the rise and are expected to be on the roads in the coming years. It is therefore necessary to have adequate knowledge of their states in order to design controllers capable of providing adequate performance in all driving scenarios. Sideslip and roll angles are critical parameters in vehicular lateral stability. The latter has a high impact on vehicles with an elevated center of gravity, such as trucks, buses, and industrial vehicles, as they are prone to rollover. Due to the high cost of the current sensors used to measure these angles directly, much of the research is focused on estimating them. One of the drawbacks is that vehicles are strongly non-linear systems that require specific methods able to tackle this feature. The evolution of Artificial Intelligence models, such as the complex Artificial Neural Network architectures that compose the Deep Learning paradigm, has been shown to provide excellent performance for complex and non-linear control problems. In this paper, the authors propose an inexpensive but powerful model based on Deep Learning to estimate the roll and sideslip angles simultaneously in mass-production vehicles. The model uses input signals that can be obtained directly from onboard vehicle sensors, such as the longitudinal and lateral accelerations, steering angle, and roll and yaw rates. The model was trained using hundreds of thousands of data points provided by TruckSim® and validated using data captured from real driving maneuvers with a calibrated ground-truth device, the VBOX3i dual-antenna GPS from Racelogic®. Both the TruckSim® software and the VBOX measuring equipment are recognized and widely used in the automotive sector, providing robust data for the research shown in this article.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
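The interface of such an estimator can be sketched as follows. This is our own illustrative stand-in with random placeholder weights and a hypothetical layer size, not the trained network from the paper; it only shows the mapping from five onboard signals to the two estimated angles:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyEstimator:
    """Toy feed-forward network: five onboard signals in, two angles out."""

    def __init__(self, n_in=5, n_hidden=16, n_out=2):
        # Random placeholder weights; the paper trains on TruckSim data.
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict(self, x):
        h = np.tanh(x @ self.W1 + self.b1)  # hidden layer
        return h @ self.W2 + self.b2        # [roll_angle, sideslip_angle]

est = TinyEstimator()
# ax, ay, steering angle, roll rate, yaw rate (illustrative values)
signals = np.array([0.2, 1.5, 0.05, 0.01, 0.12])
out = est.predict(signals)
```

In a real deployment the weights would come from supervised training against ground-truth roll and sideslip measurements, as the abstract describes.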

Open Access Article
Correcting Decalibration of Stereo Cameras in Self-Driving Vehicles
Sensors 2020, 20(11), 3241; https://doi.org/10.3390/s20113241 - 07 Jun 2020
Abstract
Camera systems in autonomous vehicles are subject to various sources of anticipated and unanticipated mechanical stress (vibration, rough handling, collisions) in real-world conditions. Even moderate changes in camera geometry due to mechanical stress decalibrate multi-camera systems and corrupt downstream applications like depth perception. We propose an on-the-fly stereo recalibration method applicable in real-world autonomous vehicles. The method comprises two parts. First, in the optimization step, the external camera parameters are optimized with the goal of maximising the number of recovered depth pixels. In the second step, an external sensor is used to adjust the scaling of the optimized camera model. The method is lightweight and fast enough to run in parallel with stereo estimation, thus allowing on-the-fly recalibration. Our extensive experimental analysis shows that the method achieves stereo reconstruction better than or on par with manual calibration. If the method is used on a sequence of images, the quality of calibration can be improved even further.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
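The optimization step can be illustrated with a toy one-parameter search. The `valid_depth_pixels` function below is a stand-in for running the stereo matcher (its peaked shape is assumed, not taken from the paper); the sketch only shows the idea of choosing the extrinsic correction that maximises recovered depth:

```python
import numpy as np

def valid_depth_pixels(pitch_offset_deg, true_offset_deg=0.4):
    # Toy stand-in for a stereo matcher: the fraction of pixels with
    # recoverable depth peaks when the assumed extrinsic correction
    # matches the true decalibration.
    return np.exp(-((pitch_offset_deg - true_offset_deg) ** 2) / 0.01)

# Grid search over candidate pitch corrections (a real system would use
# a proper optimizer over all external parameters).
candidates = np.linspace(-1.0, 1.0, 201)
scores = [valid_depth_pixels(c) for c in candidates]
best = candidates[int(np.argmax(scores))]
```

The second step of the paper's method, rescaling the optimized model with an external sensor, is not shown here.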

Open Access Article
Shoreline Detection and Land Segmentation for Autonomous Surface Vehicle Navigation with the Use of an Optical System
Sensors 2020, 20(10), 2799; https://doi.org/10.3390/s20102799 - 14 May 2020
Abstract
Autonomous surface vehicles (ASVs) are a critical part of recent progressive marine technologies. Their development demands the capability of optical systems to understand and interpret the surrounding landscape. This capability plays an important role in the navigation of coastal areas at a safe distance from land, which demands sophisticated image segmentation algorithms. For this purpose, solutions based on traditional image processing and on neural networks have been introduced. However, traditional image processing methods require a set of parameters before execution, while a neural network demands a large database of labelled images. Our new solution, which avoids these drawbacks, is based on adaptive filtering and progressive segmentation. The adaptive filtering is deployed to suppress weak edges in the image, which is convenient for shoreline detection. Progressive segmentation is devoted to distinguishing the sky and land areas, using a probabilistic clustering model to improve performance. To verify the effectiveness of the proposed method, a set of images acquired from the vehicle's operative camera was utilised. The results demonstrate that the proposed method performs with high accuracy regardless of the distance from land or the weather conditions.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
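The core intuition, suppressing weak edges so that the strong shoreline edge dominates, can be sketched on a single image column. This is our simplified illustration (a moving-average filter and a synthetic brightness profile), not the paper's adaptive filter or clustering model:

```python
import numpy as np

def shoreline_row(column, kernel=5):
    # Smooth the brightness profile to suppress weak (noise) edges,
    # then take the strongest remaining vertical gradient as the shoreline.
    smoothed = np.convolve(column, np.ones(kernel) / kernel, mode="valid")
    gradient = np.abs(np.diff(smoothed))
    return int(np.argmax(gradient)) + kernel // 2

# Synthetic column: bright sky (~200) above darker water (~60), mild noise.
rng = np.random.default_rng(1)
col = np.concatenate([np.full(40, 200.0), np.full(60, 60.0)])
col += rng.normal(0.0, 3.0, col.size)
row = shoreline_row(col)  # should land near the 40th pixel
```

A full implementation would repeat this per column and then fit or cluster the detected points into a shoreline, as the abstract's progressive segmentation does for sky/land regions.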

Open Access Article
Research on Target Detection Based on Distributed Track Fusion for Intelligent Vehicles
Sensors 2020, 20(1), 56; https://doi.org/10.3390/s20010056 - 20 Dec 2019
Cited by 1
Abstract
Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection each have defects at the perception level, which can be compensated for by sensor fusion technology. In this paper, the application of sensor fusion technology to intelligent vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. A target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurement information output by the two sensors enters the tracking processing modules and, after processing by a multi-target tracking algorithm, the local tracks are generated and transmitted to the fusion center module. In the fusion center module, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors' local tracks is completed, and a non-reset federated filter is used to estimate the state of the fused tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and that the fused track state estimation method has excellent performance.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
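A minimal sketch of fusing two associated local tracks, our illustration of covariance-weighted fusion rather than the paper's non-reset federated filter, looks like this; the numeric covariances reflect the common assumption that radar is better in range and the camera better laterally:

```python
import numpy as np

def fuse(x_a, P_a, x_b, P_b):
    # Covariance-weighted (information-form) fusion of two track estimates:
    # each sensor's estimate is weighted by its inverse covariance.
    I_a, I_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P_fused = np.linalg.inv(I_a + I_b)
    x_fused = P_fused @ (I_a @ x_a + I_b @ x_b)
    return x_fused, P_fused

# Local tracks for one target: [range, lateral offset] in metres.
x_r = np.array([10.2, 3.1]); P_r = np.diag([0.5, 0.5])  # MMW radar: good range
x_c = np.array([10.0, 3.0]); P_c = np.diag([2.0, 0.1])  # camera: good lateral
x_f, P_f = fuse(x_r, P_r, x_c, P_c)
```

The fused covariance is smaller than either sensor's alone in both dimensions, which is the point of combining the two local tracks.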

Open Access Article
Multi-Level Features Extraction for Discontinuous Target Tracking in Remote Sensing Image Monitoring
Sensors 2019, 19(22), 4855; https://doi.org/10.3390/s19224855 - 07 Nov 2019
Cited by 4
Abstract
Many techniques have been developed for computer vision in recent years. Feature extraction and matching are the basis of many high-level applications. In this paper, we propose multi-level feature extraction for discontinuous target tracking in remote sensing image monitoring. The features of the reference image are pre-extracted at different levels. The first-level features are used to roughly check the candidate targets, and the other levels are used for refined matching. With a Gaussian weight function introduced, the support of matching features is accumulated to make a final decision. Adaptive neighborhoods and principal component analysis are used to improve the description of the features. Experimental results verify the efficiency and accuracy of the proposed method.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
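The coarse-then-fine structure can be sketched with two made-up levels: a cheap first-level feature (mean intensity) that prunes candidate windows, followed by a costlier normalized correlation on the survivors. This is our simplification, not the paper's feature hierarchy:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def match(reference, candidates, level1_tol=10.0):
    # Level 1: cheap rough check (mean intensity) prunes candidates.
    ref_mean = reference.mean()
    survivors = [c for c in candidates if abs(c.mean() - ref_mean) < level1_tol]
    # Level 2: refined matching only on the survivors.
    return max(survivors, key=lambda c: ncc(reference, c)) if survivors else None

rng = np.random.default_rng(2)
ref = rng.normal(100.0, 20.0, (8, 8))
c_good = ref + rng.normal(0.0, 1.0, (8, 8))   # true target, slightly noisy
c_far = ref + 50.0                            # rejected at level 1
c_flat = np.full((8, 8), ref.mean())          # passes level 1, fails level 2
best = match(ref, [c_far, c_flat, c_good])
```

Pruning with the cheap feature first is what makes the multi-level scheme efficient: the expensive comparison runs on only a fraction of the candidates.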

Open Access Article
Research on Longitudinal Active Collision Avoidance of Autonomous Emergency Braking Pedestrian System (AEB-P)
Sensors 2019, 19(21), 4671; https://doi.org/10.3390/s19214671 - 28 Oct 2019
Cited by 1
Abstract
The AEB-P (Autonomous Emergency Braking Pedestrian) system has the functional requirements of avoiding pedestrian collisions and ensuring pedestrian safety. By studying relevant theoretical systems, such as TTC (time to collision) and braking safety distance, an AEB-P warning model was established, and the traffic safety level and work area of the AEB-P warning system were defined. The upper-layer fuzzy neural network controller of the AEB-P system was designed, and the BP (backpropagation) neural network was trained on pedestrian longitudinal anti-collision braking operation data collected from experienced drivers. The fuzzy neural network model was also optimized by introducing a genetic algorithm. The lower-layer controller of the AEB-P system was designed based on PID (proportional-integral-derivative) control theory, which converts the expected speed reduction into the pressure of the vehicle braking pipeline. Relevant pedestrian test scenarios were set up based on the C-NCAP (China New Car Assessment Program) test standards. A CarSim and Simulink co-simulation model of the AEB-P system was established, and a multi-condition simulation analysis was performed. The results showed that the proposed control strategy is credible and reliable and can flexibly allocate warning and braking times according to changes in actual working conditions, reducing the occurrence of pedestrian collision accidents.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
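The TTC quantity the warning model is built on is simply the current gap divided by the closing speed. The sketch below shows that computation with illustrative warn/brake thresholds (the threshold values are our assumption, not the paper's calibrated work areas):

```python
def time_to_collision(gap_m, closing_speed_mps):
    # TTC = gap / closing speed; a non-positive closing speed means the
    # gap is opening, so there is no collision course.
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_decision(ttc_s, warn_ttc=2.6, brake_ttc=1.6):
    # Illustrative staged response: warn first, brake autonomously
    # only when TTC drops below the tighter threshold.
    if ttc_s <= brake_ttc:
        return "brake"
    if ttc_s <= warn_ttc:
        return "warn"
    return "monitor"

ttc = time_to_collision(gap_m=20.0, closing_speed_mps=10.0)  # 2.0 s
action = aeb_decision(ttc)
```

The paper's system replaces these fixed thresholds with a fuzzy neural network tuned on experienced drivers' braking data, which is what lets it shift the warning and braking times with working conditions.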

Open Access Article
Low-Cost Sensors State Estimation Algorithm for a Small Hand-Launched Solar-Powered UAV
Sensors 2019, 19(21), 4627; https://doi.org/10.3390/s19214627 - 24 Oct 2019
Cited by 2
Abstract
In order to reduce the cost of the flight controller and improve the control accuracy of a solar-powered unmanned aerial vehicle (UAV), three state estimation algorithms based on the extended Kalman filter (EKF) with different structures are proposed: three-stage series, full-state direct, and full-state indirect state estimation algorithms. A small hand-launched solar-powered UAV without ailerons is used as the object with which to compare the algorithms' structure, estimation accuracy, platform requirements, and application. The three-stage estimation algorithm has a position accuracy of 6 m and is suitable for low-cost, small UAVs with low control precision. The precision of the full-state direct algorithm is 3.4 m, which is suitable for low-cost platforms with high trajectory-tracking accuracy. The precision of the full-state indirect method is similar to that of the direct method, but it is more stable for state switching and overall parameter estimation, and can be applied to large platforms. A full-scale electric hand-launched UAV loaded with the three-stage series algorithm was used for a field test. The results verified the feasibility of the estimation algorithm, which obtained a position estimation accuracy of 23 m.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
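The estimation machinery behind all three variants is the Kalman predict/update cycle. A minimal scalar version (our illustration with made-up noise parameters, far simpler than the paper's multi-state EKFs) shows how noisy low-cost sensor readings are fused into a smoothed estimate:

```python
def kalman_step(x, P, z, q=0.01, r=4.0):
    # Predict with a random-walk model (process noise q),
    # then update with measurement z (measurement noise r).
    P = P + q
    K = P / (P + r)            # Kalman gain: trust in the new measurement
    x = x + K * (z - x)        # corrected state estimate
    P = (1 - K) * P            # reduced estimate uncertainty
    return x, P

# Start with a vague prior, then feed in noisy position readings around 1.0.
x, P = 0.0, 100.0
for z in [1.2, 0.8, 1.1, 0.9, 1.0]:
    x, P = kalman_step(x, P, z)
```

An EKF generalizes this to vector states and non-linear flight dynamics by linearizing the model at each step; the three proposed algorithms differ in how they structure that state vector, not in this core cycle.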
