New Trends towards Automatic Vehicle Control and Perception Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (15 October 2012) | Viewed by 349823

Special Issue Editors


Dr. Vicente Milanés
Guest Editor
ME/Fulbright Visiting Scholar at California PATH, University of California, Berkeley, CA 94720, USA
Interests: autonomous vehicles; fuzzy logic control; intelligent traffic and transport infrastructure and vehicle-infrastructure cooperation

Prof. Dr. Luis M. Bergasa
Guest Editor
Department of Electronics, Polytechnic School, University of Alcalá, University Campus, 28805 Alcalá de Henares, Madrid, Spain
Interests: computer vision; perception systems; advanced driver assistance systems; assistant robotics; simultaneous localization and mapping

Special Issue Information

Dear Colleagues,

The growth in the number of drivers over the last decade, and consequently in the number of vehicles, has made congestion a major concern in road transportation, particularly in urban areas. New government policies focus on developing safer, more secure and more efficient transportation systems. In this context, one of the most important topics in the road transportation field is the development of intelligent devices and systems for improving both traffic flow and safety. New trends in perception systems can contribute to these goals, as can the development of service robots aimed at increasing the quality of life of citizens in metropolitan areas.

This special issue aims to disseminate recent advances in in-car sensors for the development of advanced vehicle control systems towards more sustainable transport. Novel theoretical approaches and practical applications of in-car sensors for the design, development and implementation of intelligent vehicles in real, complex environments are welcome. The special issue also seeks original research on perception systems in robotics, intelligent vehicles and driver assistance systems. Extended versions of contributions to the workshops of the IEEE Intelligent Vehicles Symposium 2012 (IEEE IV 2012, http://www.robesafe.es/iv2012) are also welcome.

Topics of interest for this special issue include, but are not limited to, the following:

  • sensor integration for positioning systems
  • real-time motion planning
  • autonomous navigation systems
  • intelligent maneuvering in complex environments
  • obstacle avoidance
  • traffic road detection
  • off-road guidance
  • parking aid systems
  • environment perception
  • road safety applications
  • unmanned vehicles
  • collision prediction and mitigation
  • driver assistance systems
  • safety systems
  • simultaneous localization and mapping
  • 3D reconstruction
  • cooperative perception
  • new sensing devices
  • human-robot interaction

Dr. Vicente Milanés
Prof. Dr. Luis M. Bergasa
Guest Editors

Keywords

  • road vehicle control
  • intelligent systems
  • multi-sensor fusion
  • autonomous navigation
  • advanced perception systems
  • accurate world modeling
  • simultaneous localization and mapping
  • 3D reconstruction
  • cooperative perception
  • new sensing devices
  • human-robot interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (34 papers)


Editorial


58 KiB  
Editorial
Introduction to the Special Issue on “New Trends towards Automatic Vehicle Control and Perception Systems”
by Vicente Milanés and Luis M. Bergasa
Sensors 2013, 13(5), 5712-5719; https://doi.org/10.3390/s130505712 - 2 May 2013
Cited by 7 | Viewed by 6077
Abstract
Intelligent and automatic systems are making our daily life easier. They are able to automate tasks that, up to now, were performed by humans, freeing them from this tedious work. They are mainly based on classical robotic architectures, where the stages of perception—using different sensor sources or even a fusion of a set of them—and planning—where intelligent control systems are applied—play a key role. Among all of the fields in which intelligent systems can be applied, transport is considered one of the most promising, since over one million fatalities—including drivers, pedestrians, cyclists and motorcyclists—are registered each year worldwide, and intelligent systems can definitely help to reduce these figures. [...] Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Research


4028 KiB  
Article
Localization and Mapping Using Only a Rotating FMCW Radar Sensor
by Damien Vivet, Paul Checchin and Roland Chapuis
Sensors 2013, 13(4), 4527-4552; https://doi.org/10.3390/s130404527 - 8 Apr 2013
Cited by 70 | Viewed by 10958
Abstract
Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surroundings. The consequence of using such a sensor in high-speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and the Doppler effect as sources of information in order to estimate the vehicle’s displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

4083 KiB  
Article
Driver Assistance System for Passive Multi-Trailer Vehicles with Haptic Steering Limitations on the Leading Unit
by Jesús Morales, Anthony Mandow, Jorge L. Martínez, Antonio J. Reina and Alfonso García-Cerezo
Sensors 2013, 13(4), 4485-4498; https://doi.org/10.3390/s130404485 - 3 Apr 2013
Cited by 8 | Viewed by 11196
Abstract
Driving vehicles with one or more passive trailers has difficulties in both forward and backward motion due to inter-unit collisions, jackknife, and lack of visibility. Consequently, advanced driver assistance systems (ADAS) for multi-trailer combinations can be beneficial to accident avoidance as well as to driver comfort. The ADAS proposed in this paper aims to prevent unsafe steering commands by means of a haptic handwheel. Furthermore, when driving in reverse, the steering-wheel and pedals can be used as if the vehicle was driven from the back of the last trailer with visual aid from a rear-view camera. This solution, which can be implemented in drive-by-wire vehicles with hitch angle sensors, profits from two methods previously developed by the authors: safe steering by applying a curvature limitation to the leading unit, and a virtual tractor concept for backward motion that includes the complex case of set-point propagation through on-axle hitches. The paper addresses system requirements and provides implementation details to tele-operate two different off- and on-axle combinations of a tracked mobile robot pulling and pushing two dissimilar trailers. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

2380 KiB  
Article
Optical Flow and Driver’s Kinematics Analysis for State of Alert Sensing
by Javier Jiménez-Pinto and Miguel Torres-Torriti
Sensors 2013, 13(4), 4225-4257; https://doi.org/10.3390/s130404225 - 28 Mar 2013
Cited by 18 | Viewed by 7899
Abstract
Road accident statistics from different countries show that a significant number of accidents occur due to driver fatigue and lack of awareness of traffic conditions. In particular, about 60% of the accidents in which long haul truck and bus drivers are involved are attributed to drowsiness and fatigue. It is thus fundamental to improve non-invasive systems for sensing a driver’s state of alert. One of the main challenges to correctly resolve the state of alert is measuring the percentage of eyelid closure over time (PERCLOS), despite the driver’s head and body movements. In this paper, we propose a technique that involves optical flow and driver’s kinematics analysis to improve the robustness of the driver’s alert state measurement under pose changes using a single camera with near-infrared illumination. The proposed approach infers and keeps track of the driver’s pose in 3D space in order to ensure that eyes can be located correctly, even after periods of partial occlusion, for example, when the driver stares away from the camera. Our experiments show the effectiveness of the approach with a correct eyes detection rate of 99.41%, on average. The results obtained with the proposed approach in an experiment involving fifteen persons under different levels of sleep deprivation also confirm the discriminability of the fatigue levels. In addition to the measurement of fatigue and drowsiness, the pose tracking capability of the proposed approach has potential applications in distraction assessment and alerting of machine operators. Full article
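
PERCLOS is, in essence, the fraction of recent frames in which the eyelids are closed beyond a threshold. As the paper does not publish code, the short Python sketch below only illustrates that sliding-window computation; the class name, window length and closure threshold are assumptions for illustration, and the hard part addressed by the paper, robust eye localization under pose changes, is assumed to be solved upstream.

```python
from collections import deque

class PerclosEstimator:
    """Rolling PERCLOS estimate: fraction of recent frames in which the
    eyelids are closed beyond a given threshold (illustrative sketch only)."""

    def __init__(self, window_frames=1800, closure_threshold=0.8):
        # window_frames: e.g., 60 s at 30 fps; closure_threshold: eyelid
        # closure ratio (0 = fully open, 1 = fully closed) counted as "closed".
        self.window = deque(maxlen=window_frames)
        self.closure_threshold = closure_threshold

    def update(self, eyelid_closure_ratio):
        """Add one frame's eyelid closure measurement and return PERCLOS."""
        self.window.append(eyelid_closure_ratio >= self.closure_threshold)
        return sum(self.window) / len(self.window)

# Usage: feed per-frame eyelid closure ratios from the eye tracker.
estimator = PerclosEstimator()
for ratio in [0.1, 0.95, 0.9, 0.2]:
    perclos = estimator.update(ratio)
print(f"PERCLOS over current window: {perclos:.2f}")
```
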
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

4727 KiB  
Article
Robust Lane Sensing and Departure Warning under Shadows and Occlusions
by Rodolfo Tapia-Espinoza and Miguel Torres-Torriti
Sensors 2013, 13(3), 3270-3298; https://doi.org/10.3390/s130303270 - 11 Mar 2013
Cited by 36 | Viewed by 9277
Abstract
A prerequisite for any system that enhances drivers’ awareness of road conditions and threatening situations is the correct sensing of the road geometry and the vehicle’s relative pose with respect to the lane despite shadows and occlusions. In this paper we propose an approach for lane segmentation and tracking that is robust to varying shadows and occlusions. The approach involves color-based clustering, the use of MSAC for outlier removal and curvature estimation, and also the tracking of lane boundaries. Lane boundaries are modeled as planar curves residing in 3D-space using an inverse perspective mapping, instead of the traditional tracking of lanes in the image space, i.e., the segmented lane boundary points are 3D points in a coordinate frame fixed to the vehicle that have a depth component and belong to a plane tangent to the vehicle’s wheels, rather than 2D points in the image space without depth information. The measurement noise and disturbances due to vehicle vibrations are reduced using an extended Kalman filter that involves a 6-DOF motion model for the vehicle, as well as measurements about the road’s banking and slope angles. Additional contributions of the paper include: (i) the comparison of textural features obtained from a bank of Gabor filters and from a GMRF model; and (ii) the experimental validation of the quadratic and cubic approximations to the clothoid model for the lane boundaries. The results show that the proposed approach performs better than the traditional gradient-based approach under different levels of difficulty caused by shadows and occlusions. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

2478 KiB  
Article
Dynamic Obstacle Avoidance Using Bayesian Occupancy Filter and Approximate Inference
by Ángel Llamazares, Vladimir Ivan, Eduardo Molinos, Manuel Ocaña and Sethu Vijayakumar
Sensors 2013, 13(3), 2929-2944; https://doi.org/10.3390/s130302929 - 1 Mar 2013
Cited by 17 | Viewed by 8895
Abstract
The goal of this paper is to solve the problem of dynamic obstacle avoidance for a mobile platform using the stochastic optimal control framework to compute paths that are optimal in terms of safety and energy efficiency under constraints. We propose a three-dimensional extension of the Bayesian Occupancy Filter (BOF) (Coué et al., Int. J. Rob. Res. 2006, 25, 19–30) to deal with the noise in the sensor data, improving the perception stage. We reduce the computational cost of the perception stage by estimating the velocity of each obstacle using optical flow tracking and blob filtering. While several obstacle avoidance systems have been presented in the literature addressing safety and optimality of the robot motion separately, we have applied the approximate inference framework to this problem to combine multiple goals, constraints and priors in a structured way. It is important to remark that the problem involves obstacles that can be moving, therefore classical techniques based on reactive control are not optimal from the point of view of energy consumption. Some experimental results, including comparisons against classical algorithms that highlight the advantages, are presented. Full article
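
For readers unfamiliar with occupancy filtering, the sketch below shows only the basic Bayesian occupancy-grid update in log-odds form. The paper's Bayesian Occupancy Filter is richer (it is extended to three dimensions and also carries velocity information), so this is a deliberately simplified stand-in; the function names and probabilities are illustrative assumptions.

```python
import numpy as np

def log_odds_update(grid_logodds, cell_indices, p_occupied_given_meas):
    """Standard Bayesian occupancy update in log-odds form for the cells hit
    by a measurement (a simplification; the paper's BOF also carries velocity)."""
    l_meas = np.log(p_occupied_given_meas / (1.0 - p_occupied_given_meas))
    grid_logodds[cell_indices] += l_meas
    return grid_logodds

def occupancy_probability(grid_logodds):
    """Convert log-odds back to occupancy probabilities in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid_logodds))

# Toy example: a 10x10 grid, one cell observed as likely occupied twice.
grid = np.zeros((10, 10))            # log-odds 0 == probability 0.5
grid = log_odds_update(grid, (5, 5), 0.7)
grid = log_odds_update(grid, (5, 5), 0.7)
print(occupancy_probability(grid)[5, 5])   # ~0.84 after two consistent hits
```
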
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1283 KiB  
Article
Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas
by Joshué Pérez, Fawzi Nashashibi, Benjamin Lefaudeux, Paulo Resende and Evangeline Pollard
Sensors 2013, 13(2), 2645-2663; https://doi.org/10.3390/s130202645 - 21 Feb 2013
Cited by 21 | Viewed by 11285
Abstract
Electric vehicles are progressively introduced in urban areas, because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicle recharging based on an on-board infrared camera that detects infrared beacons installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

717 KiB  
Article
Low-Cost MEMS Sensors and Vision System for Motion and Position Estimation of a Scooter
by Alberto Guarnieri, Francesco Pirotti and Antonio Vettore
Sensors 2013, 13(2), 1510-1522; https://doi.org/10.3390/s130201510 - 24 Jan 2013
Cited by 22 | Viewed by 8660
Abstract
The possibility to identify with significant accuracy the position of a vehicle in a mapping reference frame for driving directions and best-route analysis is a topic which is attracting a lot of interest from the research and development sector. To reach the objective of accurate vehicle positioning and integrate response events, it is necessary to estimate position, orientation and velocity of the system with high measurement rates. In this work we test a system which uses low-cost sensors, based on Micro Electro-Mechanical Systems (MEMS) technology, coupled with information derived from a video camera placed on a two-wheel motor vehicle (scooter). In comparison to a four-wheel vehicle, the dynamics of a two-wheel vehicle feature a higher level of complexity, given that more degrees of freedom must be taken into account. For example, a motorcycle can twist sideways, thus generating a roll angle. A slight pitch angle has to be considered as well, since wheel suspensions have a higher degree of motion compared to four-wheel motor vehicles. In this paper we present a method for the accurate reconstruction of the trajectory of a “Vespa” scooter, which can be used as an alternative to the “classical” approach based on GPS/INS sensor integration. Position and orientation of the scooter are obtained by integrating MEMS-based orientation sensor data with digital images through a cascade of a Kalman filter and a Bayesian particle filter. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

2174 KiB  
Article
Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot
by Andrés Cela, J. Javier Yebes, Roberto Arroyo, Luis M. Bergasa, Rafael Barea and Elena López
Sensors 2013, 13(2), 1385-1401; https://doi.org/10.3390/s130201385 - 24 Jan 2013
Cited by 24 | Viewed by 11341
Abstract
Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot’s back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot’s balance. The humanoid robot is controlled by a medium-capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

3110 KiB  
Article
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
by Julio Vega, Eduardo Perdices and José M. Cañas
Sensors 2013, 13(1), 1268-1299; https://doi.org/10.3390/s130101268 - 21 Jan 2013
Cited by 8 | Viewed by 7440
Abstract
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people’s homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

2769 KiB  
Article
An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas
by Mario Garzón, João Valente, David Zapata and Antonio Barrientos
Sensors 2013, 13(1), 1247-1267; https://doi.org/10.3390/s130101247 - 21 Jan 2013
Cited by 72 | Viewed by 11699
Abstract
There are many outdoor robotic applications where a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (like path planning) require the use of known maps or previous information of the environment. This work presents a system composed of a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real-time by putting together the information from the camera and the positioning system of the ground robot. A set of experiments were carried out with the purpose of verifying the system applicability. The experiments were performed in a simulation environment and outdoors with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

407 KiB  
Article
A Novel Scheme for DVL-Aided SINS In-Motion Alignment Using UKF Techniques
by Wanli Li, Jinling Wang, Liangqing Lu and Wenqi Wu
Sensors 2013, 13(1), 1046-1063; https://doi.org/10.3390/s130101046 - 15 Jan 2013
Cited by 86 | Viewed by 8342
Abstract
In-motion alignment of Strapdown Inertial Navigation Systems (SINS) without any geodetic-frame observations is one of the toughest challenges for Autonomous Underwater Vehicles (AUV). This paper presents a novel scheme for Doppler Velocity Log (DVL) aided SINS alignment using Unscented Kalman Filter (UKF) which allows large initial misalignments. With the proposed mechanism, a nonlinear SINS error model is presented and the measurement model is derived under the assumption that large misalignments may exist. Since a priori knowledge of the measurement noise covariance is of great importance to robustness of the UKF, the covariance-matching methods widely used in the Adaptive KF (AKF) are extended for use in Adaptive UKF (AUKF). Experimental results show that the proposed DVL-aided alignment model is effective with any initial heading errors. The performances of the adaptive filtering methods are evaluated with regards to their parameter estimation stability. Furthermore, it is clearly shown that the measurement noise covariance can be estimated reliably by the adaptive UKF methods and hence improve the performance of the alignment. Full article
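
Covariance matching, as used in the adaptive Kalman filtering literature, estimates the measurement noise covariance from the innovation sequence. The snippet below is a generic sketch of that idea rather than the paper's exact AUKF formulation; the function name and the toy data are assumptions.

```python
import numpy as np

def estimate_measurement_noise(innovations, predicted_meas_covs):
    """Innovation-based covariance matching (generic adaptive-filtering form):
    R_hat ≈ mean(d_k d_k^T) - mean(H P H^T), where d_k is the innovation and
    H P H^T is the predicted measurement covariance without R.
    A sketch of the general idea, not the paper's exact algorithm."""
    d = np.asarray(innovations)                          # shape (N, m)
    C = (d[:, :, None] * d[:, None, :]).mean(axis=0)     # empirical innovation covariance
    HPH = np.mean(predicted_meas_covs, axis=0)           # mean of H P H^T terms
    R_hat = C - HPH
    # Keep the estimate symmetric; in practice it is also projected to be PSD.
    return 0.5 * (R_hat + R_hat.T)

# Toy usage with scalar measurements (m = 1).
innovs = np.random.randn(200, 1) * 0.5                   # pretend innovation sequence
hph = np.full((200, 1, 1), 0.05)                         # pretend H P H^T history
print(estimate_measurement_noise(innovs, hph))           # roughly 0.25 - 0.05 = 0.20
```
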
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1569 KiB  
Article
About Non-Line-Of-Sight Satellite Detection and Exclusion in a 3D Map-Aided Localization Algorithm
by Sébastien Peyraud, David Bétaille, Stéphane Renault, Miguel Ortiz, Florian Mougel, Dominique Meizel and François Peyret
Sensors 2013, 13(1), 829-847; https://doi.org/10.3390/s130100829 - 11 Jan 2013
Cited by 134 | Viewed by 9640
Abstract
Reliable GPS positioning in city environments is a key issue: signals are prone to multipath, and satellite geometry is poor in many streets. Using a 3D urban model to forecast satellite visibility in urban contexts in order to improve GPS localization is the main topic of the present article. The core of the method is a virtual image processing step that detects and eliminates possibly faulty measurements. This image is generated using the position estimated a priori by the navigation process itself, under road constraints. This position is then updated using measurements from line-of-sight satellites only. This closed-loop real-time processing has shown promising first full-scale test results. Full article
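
The exclusion step can be pictured as a visibility test of each satellite's azimuth and elevation against the skyline predicted by the 3D model at the a priori position. The paper performs this test on a rendered virtual image; the sketch below replaces it with a simple per-azimuth elevation mask, so it illustrates the idea only, and the names and data are hypothetical.

```python
def line_of_sight_satellites(satellites, elevation_mask_deg):
    """Keep only satellites whose elevation exceeds the building skyline at
    their azimuth. 'elevation_mask_deg' is a 360-entry list giving, for each
    azimuth degree, the elevation (deg) of the highest obstruction seen from
    the estimated receiver position (e.g., derived from a 3D city model).
    Simplified stand-in for the paper's virtual-image test; names are illustrative."""
    visible = []
    for prn, az, el in satellites:          # (satellite id, azimuth deg, elevation deg)
        mask = elevation_mask_deg[int(az) % 360]
        if el > mask:
            visible.append(prn)
    return visible

# Toy skyline: buildings rise to 40 deg elevation towards the east (60-120 deg azimuth).
skyline = [40 if 60 <= a <= 120 else 10 for a in range(360)]
sats = [("G05", 90.0, 25.0), ("G12", 200.0, 35.0), ("G17", 80.0, 55.0)]
print(line_of_sight_satellites(sats, skyline))   # ['G12', 'G17']; G05 is blocked
```
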
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

4359 KiB  
Article
Cross-Coupled Control for All-Terrain Rovers
by Giulio Reina
Sensors 2013, 13(1), 785-800; https://doi.org/10.3390/s130100785 - 8 Jan 2013
Cited by 13 | Viewed by 8505
Abstract
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors’ control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1651 KiB  
Article
A Smartphone-Based Driver Safety Monitoring System Using Data Fusion
by Boon-Giin Lee and Wan-Young Chung
Sensors 2012, 12(12), 17536-17552; https://doi.org/10.3390/s121217536 - 17 Dec 2012
Cited by 112 | Viewed by 20571
Abstract
This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real-time. The sensory data are transmitted via Bluetooth communication to the smartphone device. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing a more authentic and effective driver safety monitoring. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

4214 KiB  
Article
Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback
by Alberto Rodríguez, J. Javier Yebes, Pablo F. Alcantarilla, Luis M. Bergasa, Javier Almazán and Andrés Cela
Sensors 2012, 12(12), 17476-17496; https://doi.org/10.3390/s121217476 - 17 Dec 2012
Cited by 145 | Viewed by 14113
Abstract
This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conduction technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds from the local environment. A user study with four visually impaired volunteers supports the proposed system. Full article
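
A minimal version of the RANSAC ground-plane step might look as follows. The paper combines RANSAC with additional filtering, so this sketch, with assumed tolerances and function names, only shows the core sample-and-score loop on the 3D points obtained from the disparity map.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Minimal RANSAC plane fit (illustrative only): sample 3 points, build the
    plane normal, count points within 'inlier_tol' metres, keep the best model.
    'points' is an (N, 3) array of 3D points from the stereo reconstruction."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Toy usage: noisy ground points around z = 0 plus a box-shaped obstacle.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 500), rng.uniform(0, 10, 500),
                          rng.normal(0, 0.01, 500)])
obstacle = rng.uniform([1, 3, 0.3], [2, 4, 1.5], (100, 3))
plane, inliers = ransac_ground_plane(np.vstack([ground, obstacle]))
print(plane[0], inliers.sum())   # normal close to (0, 0, ±1), about 500 inliers
```
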
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1336 KiB  
Article
Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation
by Wei Song, Kyungeun Cho, Kyhyun Um, Chee Sun Won and Sungdae Sim
Sensors 2012, 12(12), 17186-17207; https://doi.org/10.3390/s121217186 - 12 Dec 2012
Cited by 15 | Viewed by 7555
Abstract
Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. Full article
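
The height-histogram idea can be sketched as follows: take the most populated height bin (and its neighbours) as the ground range and label points accordingly. The Gibbs-Markov random field refinement described in the paper is omitted; the bin size and toy data are assumptions.

```python
import numpy as np

def ground_height_range(z_values, bin_size=0.1, spread=1):
    """Estimate the ground height range from a height histogram (illustrative
    only; the paper additionally refines the labels with a Gibbs-Markov random
    field). The dominant bin and its neighbours are taken as ground."""
    z = np.asarray(z_values)
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=bins)
    peak = int(np.argmax(counts))                 # most populated height bin
    lo = edges[max(peak - spread, 0)]
    hi = edges[min(peak + spread + 1, len(edges) - 1)]
    return lo, hi

def segment_ground(z_values, lo, hi):
    """Boolean mask: True for points whose height falls in the ground range."""
    z = np.asarray(z_values)
    return (z >= lo) & (z <= hi)

# Toy data: flat ground near z = 0 m with trees/buildings above it.
rng = np.random.default_rng(0)
heights = np.concatenate([rng.normal(0.0, 0.05, 2000), rng.uniform(1.0, 8.0, 500)])
lo, hi = ground_height_range(heights)
print(lo, hi, segment_ground(heights, lo, hi).sum())   # roughly 2000 ground points
```
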
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1894 KiB  
Article
Context-Aided Sensor Fusion for Enhanced Urban Navigation
by Enrique David Martí, David Martín, Jesús García, Arturo De la Escalera, José Manuel Molina and José María Armingol
Sensors 2012, 12(12), 16802-16837; https://doi.org/10.3390/s121216802 - 6 Dec 2012
Cited by 44 | Viewed by 16224
Abstract
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

960 KiB  
Article
Autonomous Manoeuvring Systems for Collision Avoidance on Single Carriageway Roads
by Felipe Jiménez, José Eugenio Naranjo and Óscar Gómez
Sensors 2012, 12(12), 16498-16521; https://doi.org/10.3390/s121216498 - 29 Nov 2012
Cited by 21 | Viewed by 14881
Abstract
The accurate perception of the surroundings of a vehicle has been the subject of study of numerous automotive researchers for many years. Although several projects in this area have been successfully completed, very few prototypes have actually been industrialized and installed in mass-produced cars. This indicates that these research efforts must continue in order to improve the present systems. Moreover, the trend to include communication systems in vehicles extends the potential of these perception systems, since their information can be transmitted wirelessly to other vehicles that may be affected by the surveyed environment. In this paper we present a forward collision warning system based on a laser scanner that is able to detect several potential danger situations. Decision algorithms try to determine the most convenient manoeuvre by evaluating the obstacles’ positions and speeds, road geometry, etc. Once a hazard is detected, the presented system can act on the actuators of the ego-vehicle as well as transmit this information to other vehicles circulating in the same area using vehicle-to-vehicle communications. The system has been tested for overtaking manoeuvres under different scenarios and the correct actions have been performed. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

951 KiB  
Article
Analysis of Continuous Steering Movement Using a Motor-Based Quantification System
by Hsin-Min Lee, Ping-Chia Li, Shyi-Kuen Wu and Jia-Yuan You
Sensors 2012, 12(12), 16008-16023; https://doi.org/10.3390/s121216008 - 22 Nov 2012
Cited by 4 | Viewed by 11509
Abstract
Continuous steering movement (CSM) of the upper extremity (UE) is an essential component of steering movement during vehicle driving. This study presents an integrated approach to examine the force exertion and movement pattern during CSM. We utilized a concept similar to the isokinetic dynamometer to measure the torque profiles during 180°/s constant-velocity CSM. During a steering cycle, the extremity movement can be divided into stance and swing phases based upon the hand contact information measured from the hand switch devices. Data from twelve normal young adults (six males and six females) showed that there are three typical profiles of force exertion. The two hands exhibit similar time expenditures but with asymmetric force exertions and contact times in both the clockwise (CW) and counterclockwise (CCW) steering cycles. Both hands contribute more force but with less contact time in their outward CSM directions (i.e., CW for the right hand and CCW for the left hand). These findings help us to further understand CSM and have a number of important implications for future practice in clinical training. Considerably more research is required to determine the roles of the various shoulder muscles during CSM at various speeds. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1424 KiB  
Article
Vehicle Dynamic Prediction Systems with On-Line Identification of Vehicle Parameters and Road Conditions
by Ling-Yuan Hsu and Tsung-Lin Chen
Sensors 2012, 12(11), 15778-15800; https://doi.org/10.3390/s121115778 - 13 Nov 2012
Cited by 19 | Viewed by 9113
Abstract
This paper presents a vehicle dynamics prediction system, which consists of a sensor fusion system and a vehicle parameter identification system. The sensor fusion system can obtain the six degree-of-freedom vehicle dynamics and two road angles without using a vehicle model. The vehicle parameter identification system uses the vehicle dynamics from the sensor fusion system to identify ten vehicle parameters in real time, including vehicle mass, moments of inertia, and road friction coefficients. With these two systems, the future vehicle dynamics is predicted by using a vehicle dynamics model, obtained from the parameter identification system, to propagate with time the current vehicle state values, obtained from the sensor fusion system. Compared with most existing work in this field, the proposed approach improves the prediction accuracy both by incorporating more vehicle dynamics into the prediction system and by on-line identification to minimize the vehicle modeling errors. Simulation results show that the proposed method successfully predicts the vehicle dynamics in a left-hand turn event and a rollover event. The prediction inaccuracy is 0.51% in a left-hand turn event and 27.3% in a rollover event. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

499 KiB  
Article
Combination and Selection of Traffic Safety Expert Judgments for the Prevention of Driving Risks
by Enrique Cabello, Cristina Conde, Isaac Martín de Diego, Javier M. Moguerza and Andrés Redchuk
Sensors 2012, 12(11), 14711-14729; https://doi.org/10.3390/s121114711 - 2 Nov 2012
Cited by 5 | Viewed by 6784
Abstract
In this paper, we describe a new framework to combine experts’ judgments for the prevention of driving risks in a truck cabin. In addition, the methodology shows how to choose, among the experts, the one whose predictions best fit the environmental conditions. The methodology is applied to data sets obtained from a highly immersive truck cabin simulator under natural driving conditions. A nonparametric model, based on Nearest Neighbors combined with Restricted Least Squares methods, is developed. Three experts were asked to evaluate the driving risk using a Visual Analog Scale (VAS), in order to measure the driving risk in a truck simulator where the vehicle dynamics factors were stored. Numerical results show that the methodology is suitable for embedding in real-time systems. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

3179 KiB  
Article
iParking: An Intelligent Indoor Location-Based Smartphone Parking Service
by Jingbin Liu, Ruizhi Chen, Yuwei Chen, Ling Pei and Liang Chen
Sensors 2012, 12(11), 14612-14629; https://doi.org/10.3390/s121114612 - 31 Oct 2012
Cited by 53 | Viewed by 16863
Abstract
Indoor positioning technologies have been widely studied with a number of solutions being proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through the Internet in a distributed architecture. The client software is a light-weight application running on a smartphone, and it works essentially based on a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the built-in sensors of the smartphone. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. iParking is a novel service in that it offers a cost- and energy-efficient solution. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1079 KiB  
Article
A Two-Layers Based Approach of an Enhanced-Map for Urban Positioning Support
by Carolina Piñana-Díaz, Rafael Toledo-Moreo, F. Javier Toledo-Moreo and Antonio Skarmeta
Sensors 2012, 12(11), 14508-14524; https://doi.org/10.3390/s121114508 - 29 Oct 2012
Cited by 7 | Viewed by 6315
Abstract
This paper presents a two-layer based enhanced map that can support navigation in urban environments. One layer is dedicated to describing the drivable road, with a special focus on the accurate description of its bounds. This feature can support positioning and advanced map-matching when compared with standard polyline-based maps. The other layer depicts building heights and locations, thus enabling the detection of non-line-of-sight signals coming from GPS satellites not in direct view. Both the concept and the methodology for creating these enhanced maps are shown in the paper. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

948 KiB  
Article
Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
by Masataka Ozaki, Kei Kakimuma, Masafumi Hashimoto and Kazuhiko Takahashi
Sensors 2012, 12(11), 14489-14507; https://doi.org/10.3390/s121114489 - 29 Oct 2012
Cited by 29 | Viewed by 7458
Abstract
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. Full article
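
Covariance Intersection fuses two estimates whose cross-correlation is unknown by taking a convex combination of their information matrices. The sketch below shows the standard CI equations with a fixed weight; in practice the weight is usually chosen online, for example to minimise the trace of the fused covariance, which is not reproduced here.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Covariance Intersection fusion of two estimates with unknown
    cross-correlation:  P^-1 = w*P1^-1 + (1-w)*P2^-1,
    x = P*(w*P1^-1*x1 + (1-w)*P2^-1*x2).  A fixed omega is used here for
    brevity; a full implementation would optimise omega (e.g., min trace(P))."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1i + (1.0 - omega) * P2i)
    x = P @ (omega * P1i @ x1 + (1.0 - omega) * P2i @ x2)
    return x, P

# Two robots' estimates of the same pedestrian position (x, y) in metres.
xa, Pa = np.array([2.0, 5.0]), np.diag([0.4, 0.1])
xb, Pb = np.array([2.3, 4.8]), np.diag([0.1, 0.5])
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
print(x_fused, np.trace(P_fused))
```
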
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

751 KiB  
Article
Reliable Freestanding Position-Based Routing in Highway Scenarios
by Gabriel A. Galaviz-Mosqueda, Raúl Aquino-Santos, Salvador Villarreal-Reyes, Raúl Rivera-Rodríguez, Luis Villaseñor-González and Arthur Edwards
Sensors 2012, 12(11), 14262-14291; https://doi.org/10.3390/s121114262 - 24 Oct 2012
Cited by 20 | Viewed by 8257
Abstract
Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering free-space propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model. Full article
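
Position-based routing protocols such as FPBR build on greedy geographic forwarding: each node hands the packet to the neighbour that is geographically closest to the destination. The sketch below shows only that baseline step, not FPBR's reliability mechanisms; node identifiers and coordinates are illustrative.

```python
import math

def greedy_next_hop(current, destination, neighbors):
    """Basic greedy geographic forwarding: pick the neighbour that is closest
    to the destination and closer than the current node (otherwise return None,
    a local maximum that a full protocol such as FPBR must recover from).
    'current', 'destination' and neighbour positions are (x, y) tuples in metres."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_id, best_d = None, dist(current, destination)
    for node_id, pos in neighbors.items():
        d = dist(pos, destination)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id

# Vehicles on a highway segment, positions in metres along (x, y).
neighbors = {"car_17": (120.0, 3.5), "car_08": (60.0, 0.0), "car_23": (310.0, 7.0)}
print(greedy_next_hop((100.0, 0.0), (1000.0, 0.0), neighbors))   # 'car_23'
```
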
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

403 KiB  
Article
An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft
by Xusheng Lei and Jingjing Li
Sensors 2012, 12(10), 13212-13224; https://doi.org/10.3390/s121013212 - 27 Sep 2012
Cited by 16 | Viewed by 6612
Abstract
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

1651 KiB  
Article
A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration
by Jesús S. Cepeda, Luiz Chaimowicz, Rogelio Soto, José L. Gordillo, Edén A. Alanís-Reyes and Luis C. Carrillo-Arce
Sensors 2012, 12(9), 12772-12797; https://doi.org/10.3390/s120912772 - 18 Sep 2012
Cited by 25 | Viewed by 10188
Abstract
In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines a behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

2163 KiB  
Article
Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
by Long Chen, Qingquan Li, Ming Li, Liang Zhang and Qingzhou Mao
Sensors 2012, 12(9), 12386-12404; https://doi.org/10.3390/s120912386 - 12 Sep 2012
Cited by 28 | Viewed by 9438
Abstract
This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. The system uses the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of the experiments validate the effectiveness of the proposed algorithms and of the whole system. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures
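The abstract only names the Z-variance curb cue, so the following is a guess at its simplest form: flag points of a single laser scan line whose local height variance exceeds a threshold. The window size, threshold and synthetic scan are assumptions for illustration.

```python
import numpy as np

def curb_candidates(scan_xyz, win=5, var_threshold=0.0004):
    """Flag scan points where the local variance of the height (z) channel
    exceeds a threshold -- one simple reading of a 'Z-variance' curb cue.
    scan_xyz: (N, 3) array of points from a single laser scan line."""
    z = scan_xyz[:, 2]
    flags = np.zeros(len(z), dtype=bool)
    for i in range(win, len(z) - win):
        if np.var(z[i - win:i + win + 1]) > var_threshold:
            flags[i] = True
    return flags

# Example: flat road with a 10 cm step (curb) near the edge of the scan.
xs = np.linspace(0.0, 10.0, 100)
zs = np.where(xs > 8.0, 0.10, 0.0)
scan = np.column_stack([xs, np.zeros_like(xs), zs])
print(np.where(curb_candidates(scan))[0])   # indices around the step
```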

528 KiB  
Article
Recognition Stage for a Speed Supervisor Based on Road Sign Detection
by Juan-Pablo Carrasco, Arturo de la Escalera and José María Armingol
Sensors 2012, 12(9), 12153-12168; https://doi.org/10.3390/s120912153 - 5 Sep 2012
Cited by 15 | Viewed by 7628
Abstract
Traffic accidents are still one of the main health problems in the world. A number of measures have been applied in order to reduce the number of injuries and fatalities on roads, e.g., the implementation of Advanced Driver Assistance Systems (ADAS) based on image processing. In this paper, a real-time speed supervisor based on road sign recognition that can work in both urban and non-urban environments is presented. The system is able to recognize 135 road signs, belonging to the danger, yield, prohibition, obligation and indication types, and sends warning messages to the driver based on the combination of two pieces of information: the current speed of the car and the road sign symbol. The core of this paper is the comparison between the two main methods that have traditionally been used for the detection and recognition of road signs: template matching (TM) and neural networks (NN). The advantages and disadvantages of the two approaches are shown and discussed. Additionally, we show how the use of well-known algorithms to compensate for illumination issues reduces the number of images needed to train a neural network. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures

Graphical abstract
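For the template matching branch of the TM-vs-NN comparison, a common formulation is normalized cross-correlation against a bank of sign templates; the sketch below shows that idea on toy data. The template labels and image sizes are invented for the example and are not the paper's dataset.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between an image patch and a sign template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def classify_sign(candidate, templates):
    """Return the template label with the highest NCC score."""
    scores = {label: ncc(candidate, tpl) for label, tpl in templates.items()}
    return max(scores, key=scores.get), scores

# Toy 8x8 patches standing in for normalized sign interiors.
rng = np.random.default_rng(0)
templates = {"speed_50": rng.random((8, 8)), "yield": rng.random((8, 8))}
candidate = templates["speed_50"] + rng.normal(0.0, 0.05, (8, 8))
label, scores = classify_sign(candidate, templates)
print(label)  # "speed_50"
```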

607 KiB  
Article
Intelligent Urban Public Transportation for Accessibility Dedicated to People with Disabilities
by Haiying Zhou, Kun-Mean Hou, Decheng Zuo and Jian Li
Sensors 2012, 12(8), 10678-10692; https://doi.org/10.3390/s120810678 - 6 Aug 2012
Cited by 40 | Viewed by 13068
Abstract
The traditional urban public transport system generally cannot provide an effective access service for people with disabilities, especially for disabled, wheelchair and blind (DWB) passengers. In this paper, based on advanced information and communication technologies (ICT) and green technologies (GT) concepts, a dedicated public urban transportation service access system named Mobi+ is introduced, which facilitates the mobility of DWB passengers. The Mobi+ project consists of three subsystems: a wireless communication subsystem, which provides the data exchange and network connection services between buses and stations in complex urban environments; the bus subsystem, which provides the DWB class detection and bus arrival notification services; and the station subsystem, which implements the urban environmental surveillance and bus auxiliary access services. The Mobi+ card, which supports multiple microcontrollers and transceivers, adopts a fault-tolerant component-based hardware architecture into which the dedicated embedded system software, i.e., the operating system micro-kernel and the wireless protocol, has been integrated. The dedicated Mobi+ embedded system provides a fault-tolerant, resource-aware communication and scheduling mechanism to ensure reliability in data exchange and service provision. At present, the Mobi+ system has been implemented on the buses and stations of line ‘2’ in the city of Clermont-Ferrand (France). The experimental results show that, on the one hand, the Mobi+ prototype system meets the design expectations and provides an effective urban bus access service for people with disabilities; on the other hand, the Mobi+ system is easy to deploy on buses and at bus stations thanks to its low energy consumption and small form factor. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures
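The abstract describes the bus arrival notification service but not its message format, so the snippet below is purely a hypothetical payload a bus node might send to a station node; every field name is an assumption for illustration, not the Mobi+ wire protocol.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BusArrivalNotice:
    """Hypothetical notification a bus could broadcast to a station, announcing
    an approaching bus and the passenger classes it can serve."""
    line_id: str
    bus_id: str
    eta_s: int                 # estimated time of arrival in seconds
    ramp_available: bool       # wheelchair ramp on board
    audio_announce: bool       # audio cues for blind passengers

def encode(notice: BusArrivalNotice) -> bytes:
    # JSON is used here only to keep the sketch self-contained.
    return json.dumps(asdict(notice)).encode("utf-8")

notice = BusArrivalNotice("2", "bus-17", eta_s=90,
                          ramp_available=True, audio_announce=True)
print(encode(notice))
```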

2047 KiB  
Article
AUV SLAM and Experiments Using a Mechanical Scanning Forward-Looking Sonar
by Bo He, Yan Liang, Xiao Feng, Rui Nian, Tianhong Yan, Minghui Li and Shujing Zhang
Sensors 2012, 12(7), 9386-9410; https://doi.org/10.3390/s120709386 - 9 Jul 2012
Cited by 53 | Viewed by 10766
Abstract
Navigation technology is one of the most important challenges in the application of autonomous underwater vehicles (AUVs), which navigate in complex undersea environments. The ability to localize a robot and accurately map its surroundings at the same time, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite for truly autonomous robots. In this paper, a modified FastSLAM algorithm is proposed and used for the navigation of our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified FastSLAM implements the update relying on the on-board sensors of the C-Ranger. In addition, the algorithm employs a data association scheme that combines the single-particle maximum likelihood method with a modified negative evidence method, and uses rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials with the C-Ranger were conducted. The experimental results show that the modified FastSLAM employed for the navigation of the C-Ranger AUV is considerably more effective and accurate than the traditional methods. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures
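Rank-based resampling is named but not detailed in the abstract; one common reading, sketched below, draws particles with probabilities proportional to the rank of their weights rather than the raw weights, which limits the dominance of a few heavy particles and thus particle depletion. The paper's exact scheme may differ.

```python
import numpy as np

def rank_based_resample(weights, rng=None):
    """Resample particle indices with probabilities proportional to the rank of
    each particle's weight (lowest weight gets rank 1) instead of the raw
    weight, damping the dominance of a few heavy particles."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    order = np.argsort(weights)           # ascending order of weights
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    probs = ranks / ranks.sum()
    return rng.choice(n, size=n, p=probs)

weights = np.array([0.70, 0.15, 0.10, 0.05])
print(rank_based_resample(weights, np.random.default_rng(0)))
```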

298 KiB  
Article
Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors
by Guohu Feng, Wenqi Wu and Jinling Wang
Sensors 2012, 12(7), 8877-8894; https://doi.org/10.3390/s120708877 - 27 Jun 2012
Cited by 18 | Viewed by 7342
Abstract
A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It is proved that the observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results validate the correctness of these observability conditions. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures
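The paper applies the nonlinear observability rank criterion built from Lie derivatives; as a simpler illustration of the same rank idea, the sketch below checks the rank of the linear observability matrix for a toy constant-velocity model. The matrices are examples, not the paper's visual/inertial/magnetic model.

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the linear observability matrix O = [C; CA; CA^2; ...].
    The nonlinear analogue stacks gradients of Lie derivatives instead."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O)

# Toy example: constant-velocity model observed through position only.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
print(observability_rank(A, C))   # 2 -> fully observable
```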

1856 KiB  
Article
Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation
by Przemyslaw Baranski and Pawel Strumillo
Sensors 2012, 12(6), 6764-6801; https://doi.org/10.3390/s120606764 - 25 May 2012
Cited by 25 | Viewed by 9640
Abstract
The paper presents an algorithm for estimating a pedestrian's location in an urban environment. The algorithm is based on the particle filter and uses several data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. The inertial sensors are used to estimate the relative displacement of the pedestrian: a gyroscope estimates changes in the heading direction, and an accelerometer is used to count the pedestrian's steps and estimate their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot walk through buildings, fences, etc. This limits the position inaccuracy to ca. 10 m. Incorporating depth estimates derived from a stereo camera, which are compared to a 3D model of the environment, enables a further reduction of the positioning errors. As a result, for 90% of the time the algorithm is able to estimate the pedestrian's location with an error smaller than 2 m, compared to an error of 6.5 m for navigation based solely on GPS. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Show Figures

Graphical abstract
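As an illustration of how a probability map can constrain a GPS-driven particle filter, the sketch below zeroes the weights of particles that fall on non-walkable cells before resampling. The map lookup, the noise parameters and the building footprint are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def update_particles(particles, gps_fix, gps_sigma, walkable, rng):
    """One measurement update: weight particles by a Gaussian GPS likelihood,
    zero out particles on non-walkable map cells (the map constraint), then
    resample. particles: (N, 2) positions; walkable(x, y) -> bool."""
    d2 = np.sum((particles - gps_fix) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / gps_sigma ** 2)
    w *= np.array([walkable(x, y) for x, y in particles], dtype=float)
    if w.sum() == 0.0:                 # all particles inconsistent: keep uniform
        w = np.ones(len(particles))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

rng = np.random.default_rng(1)
parts = rng.uniform(0.0, 20.0, size=(500, 2))
# A building occupies the square [5, 10] x [5, 10]; pedestrians cannot be inside.
outside_building = lambda x, y: not (5.0 <= x <= 10.0 and 5.0 <= y <= 10.0)
# The GPS fix erroneously falls inside the building; the map pulls the estimate out.
print(update_particles(parts, np.array([6.0, 6.0]), 3.0,
                       outside_building, rng).mean(axis=0))
```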
