Special Issue "Sensors and Sensor's Fusion in Autonomous Vehicles"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editors

Prof. Dr. Andrzej Stateczny
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk Technical University, Narutowicza St. 11/12, Gdansk, Poland
Interests: radar navigation; comparative (terrain reference) navigation; multi-sensor data fusion; automotive navigation; radar and sonar target detection and tracking; sonar imaging and understanding; MBES bathymetry; autonomous navigation; artificial intelligence for navigation; deep learning; geoinformatics; underwater navigation
Dr. Marta Wlodarczyk-Sielicka
Guest Editor
Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
Interests: geodata processing; 3D modelling; hydrography; bathymetric data; mapping; data reduction; spatial big data; geoinformation; spatial analysis; navigation; geo-processing; clustering; artificial neural networks; remote sensing; algorithms; deep learning; data fusion; sea bottom modelling; classification; data acquisition; geospatial science; geomatics
Dr. Pawel Burdziakowski
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk Technical University, Narutowicza St. 11/12, Gdansk, Poland
Interests: unmanned aerial vehicle indoor navigation; autonomous navigation and algorithms; non-GNSS navigation; photogrammetry; real-time photogrammetry; UAV technology; computer vision

Special Issue Information

Dear Colleagues,

This Special Issue seeks the submission of review and original research articles related to sensors and sensor fusion in autonomous vehicles. Autonomous vehicle navigation has been at the centre of several major developments, in both civilian and defence applications. New technologies such as multisensor data fusion, big data processing, and deep learning are improving the sensors and systems used and raising the quality of their applications. In turn, new sensing concepts such as 3D radar, 3D sonar, and LiDAR are emerging from the rapid development of autonomous vehicles.

The Special Issue is open to contributions dealing with many aspects of autonomous vehicle sensors and their fusion, including autonomous navigation, multi-sensor fusion, big data processing for autonomous vehicle navigation, sensor-related science and research, algorithm and technical development, analysis tools, synergy with sensors in navigation, and artificial intelligence methods for autonomous vehicle navigation.

Prof. Dr. Andrzej Stateczny
Dr. Marta Wlodarczyk-Sielicka
Dr. Pawel Burdziakowski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor fusion;
  • Sensors for autonomous navigation;
  • Comparative (terrain reference) navigation;
  • 3D radar and 3D sonar;
  • Gravity and geomagnetic sensors;
  • LiDAR;
  • Artificial intelligence in autonomous vehicles;
  • Big data processing;
  • Close-range photogrammetry and computer vision methods;
  • Deep learning algorithms;
  • Fusion of spatial data;
  • Processing of sensor data.

Published Papers (16 papers)


Research

Jump to: Review

Article
Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area
Sensors 2020, 20(24), 7145; https://doi.org/10.3390/s20247145 - 13 Dec 2020
Cited by 1 | Viewed by 724
Abstract
Vision-based sensors such as LiDAR (light detection and ranging) are adopted in SLAM (simultaneous localization and mapping) systems. In a 16-beam LiDAR-aided SLAM system, the sparsity of the laser data makes object detection difficult, and neither grid-based nor feature-point-based solutions can avoid interference from moving objects. In an urban environment, pole-like objects are common, invariant, and distinctive, so they are well suited to providing robust and reliable positioning as auxiliary information during vehicle positioning and navigation. In this work, we propose a SLAM scheme using a GNSS (global navigation satellite system), an IMU (inertial measurement unit), and a LiDAR sensor, with the positions of pole-like objects as SLAM features. The scheme combines a traditional preprocessing method with a small-scale artificial neural network to extract the pole-like objects in the environment. First, a threshold-based method extracts pole-like object candidates from the point cloud; then, the neural network is applied for training and inference to obtain the pole-like objects. The results show that the accuracy and recall rate are sufficient to provide stable observations for the subsequent SLAM process. After the poles are extracted from the LiDAR point cloud, their coordinates are added to the feature map, the nonlinear optimization of the front end is carried out using the distance constraints corresponding to the pole coordinates, and the heading angle and horizontal translation are estimated. Ground feature points are used to enhance the elevation, pitch, and roll accuracy. The performance of the proposed navigation system is evaluated in field experiments by checking the position drift and attitude errors during multiple simulated two-minute GNSS outages, without additional IMU motion constraints such as the NHC (nonholonomic constraint). The experimental results show that the proposed scheme outperforms conventional feature-point grid-based SLAM with the same back end, especially at congested crossroads where slow-moving vehicles surround the platform and pole-like objects are abundant. The mean plane position error during the two-minute GNSS outages was reduced by 38.5%, and the root mean square error was reduced by 35.3%. The proposed pole-like-feature-based GNSS/IMU/LiDAR SLAM system can therefore fuse condensed information from these sensors effectively to mitigate positioning and orientation errors, even during short GNSS-denied periods.
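The threshold-based candidate-extraction step described above can be pictured with a minimal sketch: LiDAR points are grouped into a horizontal grid, and clusters that are tall and narrow are kept as pole candidates. The grid size and thresholds (`cell`, `min_height`, `max_radius`) are assumptions for illustration, not values from the paper:

```python
import numpy as np

def pole_candidates(points, cell=0.5, min_height=2.0, max_radius=0.3):
    """Keep grid cells whose points form a tall, narrow cluster,
    as a pole-like object (lamp post, sign pole, trunk) would."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append((x, y, z))
    candidates = []
    for pts in cells.values():
        pts = np.asarray(pts)
        height = pts[:, 2].max() - pts[:, 2].min()   # vertical extent
        centre = pts[:, :2].mean(axis=0)
        radius = np.linalg.norm(pts[:, :2] - centre, axis=1).max()
        if height >= min_height and radius <= max_radius:
            candidates.append(tuple(centre))          # (x, y) of a candidate
    return candidates
```

Candidates found this way would then be confirmed or rejected by the small neural network and, once accepted, added to the feature map as point landmarks.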
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

Article
Processing of Bathymetric Data: The Fusion of New Reduction Methods for Spatial Big Data
Sensors 2020, 20(21), 6207; https://doi.org/10.3390/s20216207 - 30 Oct 2020
Viewed by 481
Abstract
Floating autonomous vehicles are very often equipped with modern systems that collect information about the situation under the water surface, e.g., the depth, the type of bottom, and obstructions on the seafloor. One such system is the multibeam echosounder (MBES), which collects very large sets of bathymetric data. The development and analysis of such large sets are laborious and expensive. Reduction of the spatial data obtained from bathymetric and other spatial data collection systems is therefore currently widely used. Commercial programs for processing data from hydrographic systems very frequently rely on interpolation to a specific grid size. The authors of this article previously proposed the original true bathymetric data reduction (TBDRed) and optimum dataset (OptD) reduction methods, which maintain the actual position and depth of each measured point without interpolation. The effectiveness of the proposed methods has already been presented in previous articles. This article proposes the fusion of the original reduction methods, a new and innovative approach to the problem of bathymetric data reduction. The article describes the methods used and the methodology of developing bathymetric data. The proposed fusion of reduction methods allows the generation of numerical models that can be a safe, reliable source of information and a basis for design. Numerical models can also be used in comparative navigation and in the creation of electronic navigation charts and other hydrographic products.

Article
Clothoid: An Integrated Hierarchical Framework for Autonomous Driving in a Dynamic Urban Environment
Sensors 2020, 20(18), 5053; https://doi.org/10.3390/s20185053 - 05 Sep 2020
Cited by 1 | Viewed by 803
Abstract
In recent years, research and development of autonomous driving technology have gained much interest. Many autonomous driving frameworks have been developed in the past; however, building a safely operating, fully functional autonomous driving framework is still a challenge, and several accidents have occurred with autonomous vehicles, including the Tesla and Volvo XC90, resulting in serious personal injuries and death. Meanwhile, urbanization and mobility demands continue to increase, and autonomous vehicles are expected to increase road safety by reducing road accidents that occur due to human error. Accurate sensing of the environment and safe driving under various scenarios must be ensured to achieve the highest level of autonomy. This research presents Clothoid, a unified framework for fully autonomous vehicles that integrates HD mapping, localization, environmental perception, path planning, and control modules while considering safety, comfort, and scalability in real traffic environments. The proposed framework enables obstacle avoidance, pedestrian safety, object detection, road blockage avoidance, path planning for single-lane and multi-lane routes, and safe driving throughout the journey. The performance of each module was validated in K-City under multiple scenarios in which Clothoid was driven safely from the starting point to the goal point. The vehicle was one of the top five to successfully finish the autonomous vehicle challenge (AVC) in the Hyundai AVC.

Article
GNSS/IMU/ODO/LiDAR-SLAM Integrated Navigation System Using IMU/ODO Pre-Integration
Sensors 2020, 20(17), 4702; https://doi.org/10.3390/s20174702 - 20 Aug 2020
Cited by 7 | Viewed by 1084
Abstract
In this paper, we propose a multi-sensor integrated navigation system composed of a GNSS (global navigation satellite system), an IMU (inertial measurement unit), an odometer (ODO), and LiDAR (light detection and ranging) SLAM (simultaneous localization and mapping). Dead reckoning results are obtained using the IMU/ODO in the front end. In the back end, graph optimization is used to fuse the GNSS position, the IMU/ODO pre-integration results, and the relative position and attitude from LiDAR-SLAM to obtain the final navigation results. Odometer information is introduced into the pre-integration algorithm to mitigate the large drift rate of the IMU, and a sliding window is adopted to keep the number of parameters in the graph optimization from growing. Land vehicle tests were conducted in both open-sky areas and tunnels. The tests showed that the proposed navigation system can effectively improve the accuracy and robustness of navigation. In the drift evaluation over simulated two-minute GNSS outages, compared to conventional GNSS/INS (inertial navigation system)/ODO integration, the root mean square (RMS) of the maximum position drift errors during outages was reduced by 62.8%, 72.3%, and 52.1% along the north, east, and height directions, respectively, and the yaw error was reduced by 62.1%. Furthermore, compared to the GNSS/IMU/LiDAR-SLAM integrated navigation system, the assistance of the odometer and the non-holonomic constraint reduced the vertical error by 72.3%. The test in a real tunnel shows that in areas with weak environmental features, where LiDAR-SLAM can barely work, the assistance of the odometer in the pre-integration is critical and can effectively reduce the positioning drift along the forward direction and maintain the SLAM in the short term. The proposed GNSS/IMU/ODO/LiDAR-SLAM integrated navigation system can therefore fuse information from multiple sources to maintain the SLAM process and significantly mitigate navigation error, especially in harsh areas where the GNSS signal is severely degraded and environmental features are insufficient for LiDAR-SLAM.
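The role of the odometer in the front-end dead reckoning can be pictured with a minimal 2D sketch (a planar simplification for illustration, not the paper's full pre-integration): the gyro yaw rate supplies the heading change and the wheel odometer supplies the distance travelled.

```python
import math

def dead_reckon(pose, yaw_rates, speeds, dt):
    """Propagate a 2D pose (x, y, yaw) between GNSS updates by
    integrating gyro yaw rate and odometer speed at interval dt."""
    x, y, yaw = pose
    for w, v in zip(yaw_rates, speeds):
        yaw += w * dt                 # heading change from the gyro
        x += v * dt * math.cos(yaw)   # odometer distance along heading
        y += v * dt * math.sin(yaw)
    return x, y, yaw
```

In the actual system, such increments are pre-integrated between keyframes so that the back-end graph optimization can re-use them cheaply inside the sliding window.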

Article
Extended Kalman Filter (EKF) Design for Vehicle Position Tracking Using Reliability Function of Radar and Lidar
Sensors 2020, 20(15), 4126; https://doi.org/10.3390/s20154126 - 24 Jul 2020
Cited by 9 | Viewed by 1682
Abstract
Detection and distance measurement using sensors are not always accurate. Sensor fusion makes up for this shortcoming by reducing inaccuracies. This study therefore proposes an extended Kalman filter (EKF) that reflects the distance characteristics of lidar and radar sensors. The characteristics of the lidar and radar over distance were analyzed, and a reliability function was designed to extend the Kalman filter to reflect them. The accuracy of position estimation was improved by identifying the sensor errors according to distance. Experiments were conducted using real vehicles, and a comparative experiment was carried out against sensor fusion using a fuzzy adaptive measurement-noise Kalman filter. The experimental results showed that the proposed method produces accurate distance estimates.
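The idea of a distance-dependent reliability function can be sketched as inverse-variance weighting of the two range measurements. The noise-versus-distance models below are invented for illustration, not the paper's calibrated functions, and the full method embeds such weights in an EKF rather than a one-shot fusion:

```python
def fuse_range(z_lidar, z_radar, distance):
    """Fuse lidar and radar range readings with weights from assumed
    distance-dependent noise models (lidar degrades faster with range)."""
    sigma_lidar = 0.02 + 0.010 * distance   # hypothetical lidar sigma [m]
    sigma_radar = 0.15 + 0.001 * distance   # hypothetical radar sigma [m]
    w_lidar = 1.0 / sigma_lidar ** 2        # inverse-variance weights
    w_radar = 1.0 / sigma_radar ** 2
    return (w_lidar * z_lidar + w_radar * z_radar) / (w_lidar + w_radar)
```

Under these assumed models, the lidar reading dominates at short range, while the weights shift toward the radar as distance grows.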

Article
Beam Search Algorithm for Anti-Collision Trajectory Planning for Many-to-Many Encounter Situations with Autonomous Surface Vehicles
Sensors 2020, 20(15), 4115; https://doi.org/10.3390/s20154115 - 24 Jul 2020
Viewed by 714
Abstract
A single anti-collision trajectory generation problem for an “own” vessel only is significantly different from the challenge of generating a whole set of safe trajectories for multi-surface-vehicle encounter situations in the open sea. Effective solutions for such problems are needed these days, as we are entering the era of autonomous ships. The article specifies the problem of anti-collision trajectory planning in many-to-many encounter situations involving moving autonomous surface vehicles, excluding the Collision Regulations (COLREGs) and vehicle dynamics. The proposed original multi-surface-vehicle beam search algorithm (MBSA), based on the beam search strategy, solves this problem. The general idea of the MBSA is to apply a solution for one-to-many encounter situations (the beam search algorithm, BSA), which was tested on real automatic radar plotting aid (ARPA) and automatic identification system (AIS) data. The MBSA itself was tested on simulated data, which are discussed in the final part of the article.
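A beam search of the kind the BSA/MBSA builds on can be sketched generically: at every planning step only the `width` best partial trajectories survive. The expansion and scoring functions below are placeholders; in the real algorithm they would generate course and speed alternatives and penalise collision risk and route deviation:

```python
def beam_search(start, expand, score, depth, width):
    """Keep the `width` lowest-score partial trajectories at each step
    and return the best trajectory after `depth` extensions."""
    beam = [[start]]
    for _ in range(depth):
        # Extend every kept trajectory by all of its successors...
        pool = [traj + [nxt] for traj in beam for nxt in expand(traj)]
        # ...then prune the pool back down to the beam width.
        pool.sort(key=score)
        beam = pool[:width]
    return beam[0]
```

With `expand` yielding candidate next waypoints and `score` summing a cost along the trajectory, the call returns the cheapest of the trajectories the beam retained.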

Article
Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project
Sensors 2020, 20(14), 4000; https://doi.org/10.3390/s20144000 - 18 Jul 2020
Cited by 8 | Viewed by 750
Abstract
The main factors influencing the shape of the beach, shoreline, and seabed include wave action, wind, and coastal currents. These phenomena cause continuous and multidimensional changes in the shape of the seabed and the Earth’s surface, and when they occur in an area of intense human activity, they should be constantly monitored. In 2018 and 2019, several measurement campaigns took place in the littoral zone in Sopot, related to the intensive uplift of the seabed and beach caused by the tombolo phenomenon. This research used a unique combination of bathymetric data obtained from an unmanned surface vessel, photogrammetric data obtained from unmanned aerial vehicles, and terrestrial laser scanning, along with geodetic data from precise measurements with global navigation satellite system receivers. This paper comprehensively presents the photogrammetric measurements made from unmanned aerial vehicles during these campaigns. It describes in detail the problems of reconstruction within water areas, analyses the accuracy of various photogrammetric measurement techniques, proposes a statistical method of data filtration, and presents the changes that occurred within the studied area. The work ends with an interpretation of the causes of changes in the land part of the littoral zone and a summary of the obtained results.

Article
Simultaneous Estimation of Vehicle Roll and Sideslip Angles through a Deep Learning Approach
Sensors 2020, 20(13), 3679; https://doi.org/10.3390/s20133679 - 30 Jun 2020
Cited by 2 | Viewed by 970
Abstract
Presently, autonomous vehicles are on the rise and are expected to be on the roads in the coming years. It is therefore necessary to have adequate knowledge of their states in order to design controllers capable of providing adequate performance in all driving scenarios. Sideslip and roll angles are critical parameters in vehicular lateral stability. The latter has a high impact on vehicles with an elevated center of gravity, such as trucks, buses, and industrial vehicles, which are prone to rollover. Due to the high cost of the current sensors used to measure these angles directly, much of the research is focused on estimating them. One of the difficulties is that vehicles are strongly non-linear systems that require specific methods able to tackle this feature. The evolution of artificial intelligence models, such as the complex artificial neural network architectures that compose the deep learning paradigm, has been shown to provide excellent performance for complex and non-linear control problems. In this paper, the authors propose an inexpensive but powerful model based on deep learning to estimate the roll and sideslip angles simultaneously in mass-production vehicles. The model uses input signals that can be obtained directly from onboard vehicle sensors, such as the longitudinal and lateral accelerations, the steering angle, and the roll and yaw rates. The model was trained using hundreds of thousands of samples provided by TruckSim® and validated using data captured from real driving maneuvers with a calibrated ground-truth device, the VBOX3i dual-antenna GPS from Racelogic®. Both the TruckSim® software and the VBOX measuring equipment are recognized and widely used in the automotive sector, providing robust data for the research shown in this article.

Article
Correcting Decalibration of Stereo Cameras in Self-Driving Vehicles
Sensors 2020, 20(11), 3241; https://doi.org/10.3390/s20113241 - 07 Jun 2020
Cited by 1 | Viewed by 785
Abstract
Camera systems in autonomous vehicles are subject to various sources of anticipated and unanticipated mechanical stress (vibration, rough handling, collisions) in real-world conditions. Even moderate changes in camera geometry due to mechanical stress decalibrate multi-camera systems and corrupt downstream applications like depth perception. We propose an on-the-fly stereo recalibration method applicable in real-world autonomous vehicles. The method comprises two parts. First, in an optimization step, the external camera parameters are optimized to maximise the number of recovered depth pixels. Second, an external sensor is used to adjust the scaling of the optimized camera model. The method is lightweight and fast enough to run in parallel with stereo estimation, thus allowing on-the-fly recalibration. Our extensive experimental analysis shows that the method achieves stereo reconstruction better than or on par with manual calibration, and if it is used on a sequence of images, the quality of calibration can be improved even further.

Article
Shoreline Detection and Land Segmentation for Autonomous Surface Vehicle Navigation with the Use of an Optical System
Sensors 2020, 20(10), 2799; https://doi.org/10.3390/s20102799 - 14 May 2020
Cited by 1 | Viewed by 868
Abstract
Autonomous surface vehicles (ASVs) are a critical part of recent progressive marine technologies. Their development demands optical systems capable of understanding and interpreting the surrounding landscape. This capability plays an important role in navigating coastal areas at a safe distance from land, which demands sophisticated image segmentation algorithms. For this purpose, some solutions based on traditional image processing and on neural networks have been introduced. However, traditional image processing methods require a set of parameters before execution, while a neural network demands a large database of labelled images. Our new solution, which avoids these drawbacks, is based on adaptive filtering and progressive segmentation. Adaptive filtering is deployed to suppress weak edges in the image, which is convenient for shoreline detection. Progressive segmentation distinguishes the sky and land areas, using a probabilistic clustering model to improve performance. To verify the effectiveness of the proposed method, a set of images acquired from the vehicle’s operating camera was utilised. The results demonstrate that the proposed method performs with high accuracy regardless of the distance from land or the weather conditions.

Article
Research on Target Detection Based on Distributed Track Fusion for Intelligent Vehicles
Sensors 2020, 20(1), 56; https://doi.org/10.3390/s20010056 - 20 Dec 2019
Cited by 7 | Viewed by 950
Abstract
Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection each have defects at the perception level, which can be compensated for by sensor fusion technology. In this paper, the application of sensor fusion technology to intelligent vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. A target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurement information output by the two sensors enters the tracking processing modules and, after processing by a multi-target tracking algorithm, local tracks are generated and transmitted to the fusion center module. In the fusion center module, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors’ local tracks is completed, and a non-reset federated filter is used to estimate the state of the fused tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and that the fused track state estimation method performs excellently.

Article
Multi-Level Features Extraction for Discontinuous Target Tracking in Remote Sensing Image Monitoring
Sensors 2019, 19(22), 4855; https://doi.org/10.3390/s19224855 - 07 Nov 2019
Cited by 13 | Viewed by 926
Abstract
Many techniques have been developed for computer vision in past years. Feature extraction and matching are the basis of many high-level applications. In this paper, we propose multi-level feature extraction for discontinuous target tracking in remote sensing image monitoring. The features of the reference image are pre-extracted at different levels: the first-level features are used to roughly check the candidate targets, and the other levels are used for refined matching. With a Gaussian weight function introduced, the support of the matching features is accumulated to make a final decision. An adaptive neighborhood and principal component analysis are used to improve the feature description. Experimental results verify the efficiency and accuracy of the proposed method.

Article
Research on Longitudinal Active Collision Avoidance of Autonomous Emergency Braking Pedestrian System (AEB-P)
Sensors 2019, 19(21), 4671; https://doi.org/10.3390/s19214671 - 28 Oct 2019
Cited by 11 | Viewed by 1651
Abstract
The AEB-P (autonomous emergency braking pedestrian) system has the functional requirements of avoiding pedestrian collisions and ensuring pedestrian safety. By studying relevant theory, such as TTC (time to collision) and braking safety distance, an AEB-P warning model was established, and the traffic safety levels and working areas of the AEB-P warning system were defined. The upper-layer fuzzy neural network controller of the AEB-P system was designed, and the BP (backpropagation) neural network was trained with pedestrian longitudinal anti-collision braking operation data collected from experienced drivers. The fuzzy neural network model was also optimized by introducing a genetic algorithm. The lower-layer controller of the AEB-P system was designed based on PID (proportional–integral–derivative) control theory, which converts the expected speed reduction into vehicle brake line pressure. Pedestrian test scenarios were set up based on the C-NCAP (China New Car Assessment Program) test standards. A CarSim and Simulink co-simulation model of the AEB-P system was established, and a multi-condition simulation analysis was performed. The results showed that the proposed control strategy is credible and reliable and can flexibly allocate warning and braking time according to changes in actual working conditions, reducing the occurrence of pedestrian collision accidents.
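At its core, a TTC-based warning model reduces to a small decision rule, sketched below with invented threshold values. In the paper the warning and braking timing come from the fuzzy neural network and the defined safety levels, not fixed constants:

```python
def aeb_decision(gap_m, closing_speed_ms, ttc_warn=2.6, ttc_brake=1.6):
    """Return 'none', 'warn' or 'brake' from the time to collision,
    TTC = gap / closing speed, against assumed thresholds in seconds."""
    if closing_speed_ms <= 0.0:
        return "none"                  # not closing on the pedestrian
    ttc = gap_m / closing_speed_ms     # seconds until impact at current speed
    if ttc <= ttc_brake:
        return "brake"
    if ttc <= ttc_warn:
        return "warn"
    return "none"
```

A layered controller would then translate the "brake" decision into an expected deceleration, which the lower-layer PID loop converts into brake line pressure.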
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
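The TTC-based staging that the abstract describes can be sketched as follows; the threshold values and function names are illustrative assumptions, not figures from the paper.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Time until the vehicle-pedestrian gap closes at the current closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # gap is opening: no collision course
    return gap_m / closing_speed_mps

def aeb_stage(ttc_s: float, warn_ttc_s: float = 2.6, brake_ttc_s: float = 1.6) -> str:
    """Map TTC to a work area: 'safe', 'warn' (alert driver), or 'brake' (autonomous braking)."""
    if ttc_s <= brake_ttc_s:
        return "brake"
    if ttc_s <= warn_ttc_s:
        return "warn"
    return "safe"
```

For example, a 20 m gap closing at 10 m/s gives a TTC of 2.0 s, which falls in the warning region under these assumed thresholds; in the paper the warning and braking times are allocated adaptively rather than fixed.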

Article
Low-Cost Sensors State Estimation Algorithm for a Small Hand-Launched Solar-Powered UAV
Sensors 2019, 19(21), 4627; https://doi.org/10.3390/s19214627 - 24 Oct 2019
Cited by 5 | Viewed by 932
Abstract
To reduce the cost of the flight controller and improve the control accuracy of a solar-powered unmanned aerial vehicle (UAV), three state estimation algorithms based on the extended Kalman filter (EKF) with different structures are proposed: three-stage series, full-state direct, and full-state indirect. A small hand-launched solar-powered UAV without ailerons is used as the platform for comparing algorithm structure, estimation accuracy, platform requirements, and application. The three-stage series algorithm has a position accuracy of 6 m and is suitable for small, low-cost UAVs with modest control-precision requirements. The full-state direct algorithm achieves a precision of 3.4 m and is suitable for low-cost platforms requiring high trajectory-tracking accuracy. The full-state indirect method has a precision similar to the direct method, but is more stable during state switching, estimates the overall parameter set, and can be applied to large platforms. A full-scale electric hand-launched UAV running the three-stage series algorithm was used for a field test. The results verified the feasibility of the estimation algorithm, which achieved a position estimation accuracy of 23 m. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
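All three estimators in the abstract share the Kalman predict/update cycle. The sketch below shows that cycle for a linear constant-velocity model with a position-only (e.g., GNSS) measurement; the matrices and noise values are illustrative assumptions, and in a true EKF the transition and measurement matrices would be Jacobians of the nonlinear models.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state and covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Fuse a measurement z into the state estimate."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
H = np.array([[1.0, 0.0]])             # position-only measurement
Q = 0.01 * np.eye(2)                   # assumed process noise
R = np.array([[4.0]])                  # assumed 2 m std-dev measurement noise

x, P = np.zeros(2), 10.0 * np.eye(2)   # uncertain initial state
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([1.0]), H, R)
```

The three-stage series structure in the paper chains several such filters (e.g., attitude, then velocity, then position), trading some accuracy for lower computational cost on cheap flight controllers.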

Review

Review
The Perception System of Intelligent Ground Vehicles in All Weather Conditions: A Systematic Literature Review
Sensors 2020, 20(22), 6532; https://doi.org/10.3390/s20226532 - 15 Nov 2020
Cited by 1 | Viewed by 1145
Abstract
Perception is a vital part of driving. Every year, the loss of visibility due to snow, fog, and rain causes serious accidents worldwide. It is therefore important to understand the impact of weather conditions on perception performance when driving on highways and in urban traffic. The goal of this paper is to survey the sensing technologies used to detect the surrounding environment and obstacles during driving maneuvers in different weather conditions. Firstly, some important historical milestones are presented. Secondly, state-of-the-art automated driving applications (adaptive cruise control, pedestrian collision avoidance, etc.) are introduced with a focus on all-weather operation. Thirdly, the main sensor technologies employed by automated driving applications (radar, lidar, ultrasonic, camera, and far-infrared) are studied. Furthermore, the gap between current and expected performance is illustrated with spider charts. As a result, a fusion perspective is proposed that can fill these gaps and increase the robustness of the perception system. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
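The complementarity the review argues for can be illustrated with a qualitative suitability table; the ratings below are illustrative assumptions for the sake of the sketch, not the review's spider-chart data.

```python
# Qualitative suitability of common automotive sensors by weather
# condition (0 = poor ... 2 = good). Ratings are illustrative only.
SUITABILITY = {
    "camera": {"clear": 2, "rain": 1, "fog": 0, "snow": 0},
    "lidar":  {"clear": 2, "rain": 1, "fog": 0, "snow": 1},
    "radar":  {"clear": 2, "rain": 2, "fog": 2, "snow": 2},
    "far_ir": {"clear": 1, "rain": 1, "fog": 1, "snow": 1},
}

def rank_sensors(condition: str) -> list:
    """Order sensors by suitability for the given condition, best first."""
    return sorted(SUITABILITY, key=lambda s: SUITABILITY[s][condition], reverse=True)
```

Under these assumed ratings, radar ranks first in fog while camera and lidar degrade sharply, which is exactly the gap a fusion architecture is meant to cover.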

Review
A Review of Environmental Context Detection for Navigation Based on Multiple Sensors
Sensors 2020, 20(16), 4532; https://doi.org/10.3390/s20164532 - 13 Aug 2020
Cited by 3 | Viewed by 726
Abstract
Current navigation systems use multi-sensor data to improve localization accuracy, but often without certainty about the quality of those measurements in certain situations. Context detection enables an adaptive navigation system that improves the precision and robustness of its localization solution by anticipating possible degradation in sensor signal quality (for instance, GNSS in urban canyons, or camera-based navigation in a textureless environment). That is why context detection is considered the future of navigation systems. It is therefore important, first, to define this concept of context for navigation and to find a way to extract it from the available information. This paper reviews existing GNSS and on-board vision-based solutions for environmental context detection. The review shows that most state-of-the-art research focuses on only one type of data, and it confirms that the main way forward is to combine indicators from multiple sensors. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
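The combination of indicators the review calls for can be sketched as a simple rule-based classifier fusing a GNSS signal-quality indicator with a vision texture indicator; the thresholds, labels, and function name are illustrative assumptions, not a method from the paper.

```python
def detect_context(mean_cn0_dbhz: float, texture_score: float) -> str:
    """Label the environment from a GNSS carrier-to-noise indicator and a
    normalized image-texture score in [0, 1]. Thresholds are illustrative."""
    if mean_cn0_dbhz >= 40.0:
        return "open_sky"      # strong GNSS: trust GNSS-based positioning
    if texture_score >= 0.5:
        return "urban_canyon"  # weak GNSS but textured scene: favour vision
    return "degraded"          # both indicators weak: inflate uncertainty
```

An adaptive navigation filter could use such a label to re-weight sensors before their measurements degrade, which is the anticipation the abstract describes.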
