Special Issue "New Trends towards Automatic Vehicle Control and Perception Systems"


A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (15 October 2012)

Special Issue Editors

Guest Editor
Dr. Vicente Milanés

ME/Fulbright Visiting Scholar at California PATH, University of California, Berkeley, CA 94720, USA
Interests: autonomous vehicles; fuzzy logic control; intelligent traffic and transport infrastructure and vehicle-infrastructure cooperation
Guest Editor
Prof. Dr. Luis Miguel Bergasa

Department of Electronics, Polytechnic School, University of Alcalá, 28805 Alcalá de Henares, Madrid, Spain
Interests: computer vision; perception systems; advanced driver assistance systems; assistant robotics; simultaneous localization and mapping

Special Issue Information

Dear Colleagues,

The growth in the number of drivers over the last decade, and consequently in the number of vehicles, has made congestion a major concern in road transportation, particularly in urban areas. New government policies focus on the development of safer, more secure, and more efficient transportation systems. In this context, one of the most important topics in the road transportation field is the development of intelligent devices and systems for improving both traffic flow and safety. New trends in perception systems can contribute to these goals, as can the development of service robots that aim to increase the quality of life of citizens in metropolitan areas.

This special issue aims to disseminate recent advances in in-car sensors for developing advanced vehicle control systems towards more sustainable transport. Novel theoretical approaches and practical applications of in-car sensors for the design, development, and implementation of intelligent vehicles in real, complex environments are welcome. The issue also seeks original research on perception systems in robotics, intelligent vehicles, and driver assistance systems. Contributions with extended results from workshops at the IEEE Intelligent Vehicles Symposium 2012 (IEEE IV 2012), http://www.robesafe.es/iv2012, are also welcome.

Topics of interest include, but are not limited to, the following:

  • sensor integration for positioning systems
  • real-time motion planning
  • autonomous navigation systems
  • intelligent maneuvering in complex environments
  • obstacle avoidance
  • traffic road detection
  • off-road guidance
  • parking aid systems
  • environment perception
  • road safety applications
  • unmanned vehicles
  • collision prediction and mitigation
  • driver assistance systems
  • safety systems
  • simultaneous localization and mapping
  • 3D reconstruction
  • cooperative perception
  • new sensing devices
  • human-robot interaction

Dr. Vicente Milanés
Prof. Dr. Luis M. Bergasa
Guest Editors

Keywords

  • road vehicle control
  • intelligent systems
  • multi-sensor fusion
  • autonomous navigation
  • advanced perception systems
  • accurate world modeling
  • simultaneous localization and mapping
  • 3D reconstruction
  • cooperative perception
  • new sensing devices
  • human-robot interaction

Published Papers (34 papers)


Editorial

Jump to: Research

Open Access Editorial: Introduction to the Special Issue on “New Trends towards Automatic Vehicle Control and Perception Systems”
Sensors 2013, 13(5), 5712-5719; doi:10.3390/s130505712
Received: 24 April 2013 / Revised: 25 April 2013 / Accepted: 27 April 2013 / Published: 2 May 2013
Cited by 3 | PDF Full-text (58 KB) | HTML Full-text | XML Full-text
Abstract
Intelligent and automatic systems are making our daily life easier. They are able to automate tasks that, up to now, were performed by humans, freeing them from these tedious tasks. They are mainly based on the classical robotic architectures where the stages of perception—using different sensor sources or even a fusion of a set of them—and planning—where intelligent control systems are applied—play a key role. Among all of the fields in which intelligent systems can be applied, transport systems are considered one of the most promising ones since over one million fatalities—including drivers, pedestrians, cyclists and motorcyclists—are registered each year worldwide and they can definitively help to reduce these figures. [...] Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Research

Jump to: Editorial

Open Access Article: Localization and Mapping Using Only a Rotating FMCW Radar Sensor
Sensors 2013, 13(4), 4527-4552; doi:10.3390/s130404527
Received: 28 February 2013 / Revised: 27 March 2013 / Accepted: 3 April 2013 / Published: 8 April 2013
Cited by 5 | PDF Full-text (4028 KB) | HTML Full-text | XML Full-text
Abstract
Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both distance and velocity of multiple targets in its surrounding. The consequence of using such a sensor in high speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and Doppler effect as sources of information in order to estimate the vehicle’s displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
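The velocity estimation described above ultimately rests on the standard FMCW Doppler relation between a measured frequency shift and the target's radial velocity. A minimal sketch follows; the carrier frequency and Doppler shift are assumed example values, not the IMPALA radar's actual parameters:

```python
# Radial velocity of a target from the Doppler shift measured by a
# radar. This is the textbook relation underlying Doppler-based
# velocity estimation, not the paper's full processing chain.
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """v_r = f_d * c / (2 * f_c); positive means the target approaches."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A hypothetical 24 GHz radar observing a 1.6 kHz Doppler shift:
v = radial_velocity(1600.0, 24e9)  # ~10 m/s radial velocity
```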
Open Access Article: Driver Assistance System for Passive Multi-Trailer Vehicles with Haptic Steering Limitations on the Leading Unit
Sensors 2013, 13(4), 4485-4498; doi:10.3390/s130404485
Received: 4 October 2012 / Revised: 8 March 2013 / Accepted: 26 March 2013 / Published: 3 April 2013
Cited by 4 | PDF Full-text (4083 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Driving vehicles with one or more passive trailers has difficulties in both forward and backward motion due to inter-unit collisions, jackknife, and lack of visibility. Consequently, advanced driver assistance systems (ADAS) for multi-trailer combinations can be beneficial to accident avoidance as well as to driver comfort. The ADAS proposed in this paper aims to prevent unsafe steering commands by means of a haptic handwheel. Furthermore, when driving in reverse, the steering-wheel and pedals can be used as if the vehicle was driven from the back of the last trailer with visual aid from a rear-view camera. This solution, which can be implemented in drive-by-wire vehicles with hitch angle sensors, profits from two methods previously developed by the authors: safe steering by applying a curvature limitation to the leading unit, and a virtual tractor concept for backward motion that includes the complex case of set-point propagation through on-axle hitches. The paper addresses system requirements and provides implementation details to tele-operate two different off- and on-axle combinations of a tracked mobile robot pulling and pushing two dissimilar trailers. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Open Access Article: Optical Flow and Driver’s Kinematics Analysis for State of Alert Sensing
Sensors 2013, 13(4), 4225-4257; doi:10.3390/s130404225
Received: 30 January 2013 / Revised: 16 February 2013 / Accepted: 19 February 2013 / Published: 28 March 2013
Cited by 7 | PDF Full-text (2380 KB) | HTML Full-text | XML Full-text
Abstract
Road accident statistics from different countries show that a significant number of accidents occur due to driver’s fatigue and lack of awareness to traffic conditions. In particular, about 60% of the accidents in which long haul truck and bus drivers are involved are attributed to drowsiness and fatigue. It is thus fundamental to improve non-invasive systems for sensing a driver’s state of alert. One of the main challenges to correctly resolve the state of alert is measuring the percentage of eyelid closure over time (PERCLOS), despite the driver’s head and body movements. In this paper, we propose a technique that involves optical flow and driver’s kinematics analysis to improve the robustness of the driver’s alert state measurement under pose changes using a single camera with near-infrared illumination. The proposed approach infers and keeps track of the driver’s pose in 3D space in order to ensure that eyes can be located correctly, even after periods of partial occlusion, for example, when the driver stares away from the camera. Our experiments show the effectiveness of the approach with a correct eyes detection rate of 99.41%, on average. The results obtained with the proposed approach in an experiment involving fifteen persons under different levels of sleep deprivation also confirm the discriminability of the fatigue levels. In addition to the measurement of fatigue and drowsiness, the pose tracking capability of the proposed approach has potential applications in distraction assessment and alerting of machine operators. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
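The PERCLOS measure the abstract relies on is simply the fraction of recent frames in which the eyelids are judged closed. A minimal sliding-window sketch, where the window length and the per-frame closure decision are assumptions rather than the paper's settings:

```python
from collections import deque

class Perclos:
    """PERCLOS over a sliding window: fraction of recent frames in
    which the eyes are judged closed (e.g. eyelid >80% down).
    Illustrative sketch; the window length is an assumed parameter."""
    def __init__(self, window: int = 900):  # e.g. 30 s at 30 fps
        self.frames = deque(maxlen=window)

    def update(self, eye_closed: bool) -> float:
        # Append the newest per-frame decision and return the
        # closed-eye fraction over the retained window.
        self.frames.append(eye_closed)
        return sum(self.frames) / len(self.frames)

p = Perclos(window=10)
for closed in [False] * 7 + [True] * 3:
    level = p.update(closed)
# level is now 0.3: eyes closed in 3 of the last 10 frames
```

A fatigue monitor would compare `level` against a drowsiness threshold (commonly cited near 0.15 over a minute, though the exact criterion varies by study).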
Open Access Article: Robust Lane Sensing and Departure Warning under Shadows and Occlusions
Sensors 2013, 13(3), 3270-3298; doi:10.3390/s130303270
Received: 19 February 2013 / Revised: 2 March 2013 / Accepted: 4 March 2013 / Published: 11 March 2013
Cited by 8 | PDF Full-text (4727 KB) | HTML Full-text | XML Full-text
Abstract
A prerequisite for any system that enhances drivers’ awareness of road conditions and threatening situations is the correct sensing of the road geometry and the vehicle’s relative pose with respect to the lane despite shadows and occlusions. In this paper we propose an approach for lane segmentation and tracking that is robust to varying shadows and occlusions. The approach involves color-based clustering, the use of MSAC for outlier removal and curvature estimation, and also the tracking of lane boundaries. Lane boundaries are modeled as planar curves residing in 3D-space using an inverse perspective mapping, instead of the traditional tracking of lanes in the image space, i.e., the segmented lane boundary points are 3D points in a coordinate frame fixed to the vehicle that have a depth component and belong to a plane tangent to the vehicle’s wheels, rather than 2D points in the image space without depth information. The measurement noise and disturbances due to vehicle vibrations are reduced using an extended Kalman filter that involves a 6-DOF motion model for the vehicle, as well as measurements about the road’s banking and slope angles. Additional contributions of the paper include: (i) the comparison of textural features obtained from a bank of Gabor filters and from a GMRF model; and (ii) the experimental validation of the quadratic and cubic approximations to the clothoid model for the lane boundaries. The results show that the proposed approach performs better than the traditional gradient-based approach under different levels of difficulty caused by shadows and occlusions. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article: Dynamic Obstacle Avoidance Using Bayesian Occupancy Filter and Approximate Inference
Sensors 2013, 13(3), 2929-2944; doi:10.3390/s130302929
Received: 31 January 2013 / Revised: 14 February 2013 / Accepted: 16 February 2013 / Published: 1 March 2013
Cited by 5 | PDF Full-text (2478 KB) | HTML Full-text | XML Full-text
Abstract
The goal of this paper is to solve the problem of dynamic obstacle avoidance for a mobile platform using the stochastic optimal control framework to compute paths that are optimal in terms of safety and energy efficiency under constraints. We propose a three-dimensional extension of the Bayesian Occupancy Filter (BOF) (Coué et al., Int. J. Rob. Res. 2006, 25, 19–30) to deal with the noise in the sensor data, improving the perception stage. We reduce the computational cost of the perception stage by estimating the velocity of each obstacle using optical flow tracking and blob filtering. While several obstacle avoidance systems have been presented in the literature addressing safety and optimality of the robot motion separately, we have applied the approximate inference framework to this problem to combine multiple goals, constraints and priors in a structured way. It is important to remark that the problem involves obstacles that can be moving, therefore classical techniques based on reactive control are not optimal from the point of view of energy consumption. Some experimental results, including comparisons against classical algorithms that highlight the advantages, are presented. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
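The Bayesian Occupancy Filter maintains a per-cell probability that space is occupied. As a minimal sketch of the standard log-odds cell update this family of filters builds on (the paper's three-dimensional BOF, which also tracks obstacle velocity, is considerably richer; the sensor model probability is an assumed value):

```python
import math

def logodds(p: float) -> float:
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def update_cell(l_prior: float, p_meas: float) -> float:
    """Fuse one sensor reading (inverse sensor model probability
    p_meas) into a cell's log-odds occupancy by simple addition."""
    return l_prior + logodds(p_meas)

def prob(l: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Cell starts unknown (p = 0.5, log-odds 0); two "hit" readings,
# each asserting occupancy with assumed probability 0.7:
l = 0.0
for _ in range(2):
    l = update_cell(l, 0.7)
occ = prob(l)  # rises above 0.5 as consistent hits accumulate
```

The additive log-odds form is what makes per-cell Bayesian updates cheap enough to run over a whole grid in real time.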

Open Access Article: Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas
Sensors 2013, 13(2), 2645-2663; doi:10.3390/s130202645
Received: 11 December 2012 / Revised: 24 January 2013 / Accepted: 5 February 2013 / Published: 21 February 2013
Cited by 7 | PDF Full-text (1283 KB) | HTML Full-text | XML Full-text
Abstract
Electric vehicles are progressively introduced in urban areas, because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicles recharging based on an embarked infrared camera performing infrared beacons detection installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article: Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot
Sensors 2013, 13(2), 1385-1401; doi:10.3390/s130201385
Received: 30 November 2012 / Revised: 24 December 2012 / Accepted: 17 January 2013 / Published: 24 January 2013
Cited by 8 | PDF Full-text (2174 KB) | HTML Full-text | XML Full-text
Abstract
Humanoid robotics is a field of a great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors, 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knees flexion. This is carried out via a feedback control system with an accelerometer placed on the robot’s back. The measurement from this sensor is filtered using Kalman. In addition, a two input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Open Access Article: Low-Cost MEMS Sensors and Vision System for Motion and Position Estimation of a Scooter
Sensors 2013, 13(2), 1510-1522; doi:10.3390/s130201510
Received: 7 December 2012 / Revised: 18 January 2013 / Accepted: 21 January 2013 / Published: 24 January 2013
Cited by 11 | PDF Full-text (717 KB) | HTML Full-text | XML Full-text
Abstract
The possibility to identify with significant accuracy the position of a vehicle in a mapping reference frame for driving directions and best-route analysis is a topic which is attracting a lot of interest from the research and development sector. To reach the objective of accurate vehicle positioning and integrate response events, it is necessary to estimate position, orientation and velocity of the system with high measurement rates. In this work we test a system which uses low-cost sensors, based on Micro Electro-Mechanical Systems (MEMS) technology, coupled with information derived from a video camera placed on a two-wheel motor vehicle (scooter). In comparison to a four-wheel vehicle, the dynamics of a two-wheel vehicle feature a higher level of complexity, given that more degrees of freedom must be taken into account. For example, a motorcycle can twist sideways, thus generating a roll angle. A slight pitch angle has to be considered as well, since wheel suspensions have a higher degree of motion compared to four-wheel motor vehicles. In this paper we present a method for the accurate reconstruction of the trajectory of a “Vespa” scooter, which can be used as an alternative to the “classical” approach based on GPS/INS sensor integration. Position and orientation of the scooter are obtained by integrating MEMS-based orientation sensor data with digital images through a cascade of a Kalman filter and a Bayesian particle filter. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
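The Kalman stage of such a cascade can be illustrated with a scalar predict/update cycle that blends a MEMS-derived heading increment with a vision-derived heading measurement. This is only an illustrative sketch, not the paper's filter; the noise variances are assumed tuning values:

```python
def kalman_step(x, P, u, z, Q=0.01, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: heading estimate and its variance; u: gyro-integrated
    heading increment (MEMS); z: heading from the vision system.
    Q, R are illustrative process/measurement noise variances."""
    # Predict: propagate with the MEMS increment, inflate uncertainty
    x_pred = x + u
    P_pred = P + Q
    # Update: blend the prediction with the camera measurement
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # corrected estimate
    P_new = (1.0 - K) * P_pred         # reduced variance
    return x_new, P_new

x, P = 0.0, 1.0  # start uncertain
x, P = kalman_step(x, P, u=0.10, z=0.12)
# the estimate moves toward the vision reading; the variance shrinks
```

In the paper's architecture the output of this stage would then feed a Bayesian particle filter that handles the non-Gaussian parts of the scooter's dynamics.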
Open Access Article: An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas
Sensors 2013, 13(1), 1247-1267; doi:10.3390/s130101247
Received: 16 October 2012 / Revised: 24 December 2012 / Accepted: 14 January 2013 / Published: 21 January 2013
Cited by 17 | PDF Full-text (2769 KB) | HTML Full-text | XML Full-text
Abstract
There are many outdoor robotic applications where a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (like path planning) require the use of known maps or previous information of the environment. This work presents a system composed by a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real-time by putting together the information from the camera and the positioning system of the ground robot. A set of experiments were carried out with the purpose of verifying the system applicability. The experiments were performed in a simulation environment and outdoor with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Open Access Article: Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Sensors 2013, 13(1), 1268-1299; doi:10.3390/s130101268
Received: 22 December 2012 / Revised: 14 January 2013 / Accepted: 14 January 2013 / Published: 21 January 2013
Cited by 3 | PDF Full-text (3110 KB) | HTML Full-text | XML Full-text
Abstract
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people’s homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article: A Novel Scheme for DVL-Aided SINS In-Motion Alignment Using UKF Techniques
Sensors 2013, 13(1), 1046-1063; doi:10.3390/s130101046
Received: 7 November 2012 / Revised: 25 December 2012 / Accepted: 5 January 2013 / Published: 15 January 2013
Cited by 26 | PDF Full-text (407 KB) | HTML Full-text | XML Full-text
Abstract
In-motion alignment of Strapdown Inertial Navigation Systems (SINS) without any geodetic-frame observations is one of the toughest challenges for Autonomous Underwater Vehicles (AUV). This paper presents a novel scheme for Doppler Velocity Log (DVL) aided SINS alignment using Unscented Kalman Filter (UKF) which allows large initial misalignments. With the proposed mechanism, a nonlinear SINS error model is presented and the measurement model is derived under the assumption that large misalignments may exist. Since a priori knowledge of the measurement noise covariance is of great importance to robustness of the UKF, the covariance-matching methods widely used in the Adaptive KF (AKF) are extended for use in Adaptive UKF (AUKF). Experimental results show that the proposed DVL-aided alignment model is effective with any initial heading errors. The performances of the adaptive filtering methods are evaluated with regards to their parameter estimation stability. Furthermore, it is clearly shown that the measurement noise covariance can be estimated reliably by the adaptive UKF methods and hence improve the performance of the alignment. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
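The covariance-matching idea the abstract refers to estimates the measurement noise variance from a sliding window of filter innovations. A minimal scalar sketch of that principle (window size and the positivity floor are assumed tuning choices, not the paper's):

```python
from collections import deque

class InnovationR:
    """Covariance-matching estimate of the measurement noise variance
    R from a sliding window of innovations (scalar case):
        R_hat ~= mean(v_k**2) - H * P_pred * H^T
    This mirrors the AKF/AUKF principle described in the abstract;
    the window length is an assumed tuning parameter."""
    def __init__(self, window: int = 50):
        self.innovations = deque(maxlen=window)

    def update(self, innovation: float, hph: float) -> float:
        # hph is the predicted measurement variance H * P_pred * H^T
        self.innovations.append(innovation)
        c_v = sum(v * v for v in self.innovations) / len(self.innovations)
        return max(c_v - hph, 1e-9)  # keep the estimate positive

est = InnovationR(window=4)
for v in (0.5, -0.4, 0.6, -0.5):
    r_hat = est.update(v, hph=0.05)
```

Feeding `r_hat` back into the filter lets it stay consistent when the true sensor noise drifts, which is the robustness property the paper evaluates.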
Open Access Article: About Non-Line-Of-Sight Satellite Detection and Exclusion in a 3D Map-Aided Localization Algorithm
Sensors 2013, 13(1), 829-847; doi:10.3390/s130100829
Received: 10 December 2012 / Revised: 4 January 2013 / Accepted: 4 January 2013 / Published: 11 January 2013
Cited by 31 | PDF Full-text (1569 KB) | HTML Full-text | XML Full-text
Abstract
Reliable GPS positioning in city environment is a key issue: actually, signals are prone to multipath, with poor satellite geometry in many streets. Using a 3D urban model to forecast satellite visibility in urban contexts in order to improve GPS localization is the main topic of the present article. A virtual image processing that detects and eliminates possible faulty measurements is the core of this method. This image is generated using the position estimated a priori by the navigation process itself, under road constraints. This position is then updated by measurements to line-of-sight satellites only. This closed-loop real-time processing has shown very first promising full-scale test results. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Open Access Article: Cross-Coupled Control for All-Terrain Rovers
Sensors 2013, 13(1), 785-800; doi:10.3390/s130100785
Received: 28 September 2012 / Revised: 2 January 2013 / Accepted: 4 January 2013 / Published: 8 January 2013
Cited by 3 | PDF Full-text (4359 KB) | HTML Full-text | XML Full-text
Abstract
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors’ control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
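The core of a cross-coupled controller is that each wheel command is corrected not only by its own tracking error but also by the synchronization error between wheels, so a slipping wheel is dragged back into sync with its partner. A minimal two-wheel sketch with assumed gains (the paper's 4-wheel-drive/4-wheel-steer controller is more elaborate):

```python
def cross_coupled_step(v_left, v_right, v_ref, kp=0.8, kc=0.5):
    """One step of a cross-coupled speed controller for two wheels.
    Each command combines the usual tracking error with a term
    proportional to the inter-wheel synchronization error.
    kp, kc are illustrative proportional and coupling gains."""
    sync_err = v_left - v_right
    u_left = kp * (v_ref - v_left) - kc * sync_err
    u_right = kp * (v_ref - v_right) + kc * sync_err
    return u_left, u_right

# Left wheel slipping fast (1.2 m/s) vs. right (1.0 m/s), ref 1.0 m/s:
u_l, u_r = cross_coupled_step(1.2, 1.0, 1.0)
# u_l < 0 (brake the fast wheel), u_r > 0 (nudge its partner forward)
```

With `kc = 0`, this degenerates to two independent speed loops, which is exactly the conventional setup whose synchronization errors the paper sets out to reduce.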

Open Access Article: Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback
Sensors 2012, 12(12), 17476-17496; doi:10.3390/s121217476
Received: 12 October 2012 / Revised: 23 November 2012 / Accepted: 10 December 2012 / Published: 17 December 2012
Cited by 34 | PDF Full-text (4214 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this article is focused on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without interrupting the visually impaired user from hearing other important sounds from its local environment. A user study participated by four visually impaired volunteers supports the proposed system. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
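The RANSAC ground-plane step can be sketched as below. This is a minimal pure-Python version with assumed parameters (iteration count, inlier tolerance), not the authors' implementation, which adds filtering techniques on top:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane (n, d) with n.x + d = 0 through three points, via cross product."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_ground_plane(points, iters=200, tol=0.05, seed=0):
    """Return the plane supported by the most points within tol metres."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol)
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best, best_inliers
```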
Open Access Article A Smartphone-Based Driver Safety Monitoring System Using Data Fusion
Sensors 2012, 12(12), 17536-17552; doi:10.3390/s121217536
Received: 17 October 2012 / Revised: 12 December 2012 / Accepted: 13 December 2012 / Published: 17 December 2012
Cited by 24 | PDF Full-text (1651 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real time. The sensory data are transmitted to the smartphone device via Bluetooth communication. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing more authentic and effective driver safety monitoring. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
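The Bayesian part of such a fusion can be illustrated with a single discrete belief update. The numbers below are purely illustrative (the paper's Fuzzy Bayesian framework also handles fuzzy memberships over the fused attributes):

```python
def bayes_update(prior, likelihoods):
    """One discrete Bayesian update of the driver-state belief.
    prior and likelihoods map state -> probability; returns the
    normalised posterior P(state | evidence)."""
    post = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# Illustrative numbers only: evidence (e.g. a long eye closure) that is
# four times likelier when drowsy than when alert.
belief = {"alert": 0.9, "drowsy": 0.1}
belief = bayes_update(belief, {"alert": 0.2, "drowsy": 0.8})
```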
Open Access Article Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation
Sensors 2012, 12(12), 17186-17207; doi:10.3390/s121217186
Received: 8 October 2012 / Revised: 7 December 2012 / Accepted: 11 December 2012 / Published: 12 December 2012
Cited by 10 | PDF Full-text (1336 KB) | HTML Full-text | XML Full-text
Abstract
Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurement range, and these parts are missing from the terrain reconstruction result, leaving an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that required for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
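The height-histogram step can be sketched as follows, with an illustrative bin size; the Gibbs-Markov random field refinement stage described above is omitted:

```python
def ground_height_range(heights, bin_size=0.2):
    """Histogram the observed heights and return the range of the most
    populated bin as a crude ground-height estimate."""
    counts = {}
    for h in heights:
        b = int(h // bin_size)
        counts[b] = counts.get(b, 0) + 1
    best = max(counts, key=counts.get)
    return best * bin_size, (best + 1) * bin_size

def split_ground(points, bin_size=0.2):
    """Label (x, y, z) points as ground / non-ground by height range."""
    lo, hi = ground_height_range([p[2] for p in points], bin_size)
    ground = [p for p in points if lo <= p[2] < hi]
    non_ground = [p for p in points if not (lo <= p[2] < hi)]
    return ground, non_ground
```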
Open Access Article Context-Aided Sensor Fusion for Enhanced Urban Navigation
Sensors 2012, 12(12), 16802-16837; doi:10.3390/s121216802
Received: 8 October 2012 / Revised: 30 November 2012 / Accepted: 3 December 2012 / Published: 6 December 2012
Cited by 19 | PDF Full-text (1894 KB) | HTML Full-text | XML Full-text
Abstract
The deployment of Intelligent Vehicles in urban environments requires reliable positioning estimates for urban navigation. The inherent complexity of this kind of environment fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is conditioned by a contextual-knowledge module which reasons about sensor quality and driving context in order to adapt the filter to the situation, while at the same time carrying out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of the sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution demonstrates the suitability of fusion algorithms for deploying Intelligent Transport Systems in urban environments. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
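The contextual-reasoning idea, adapting the filter's measurement noise to sensor quality and driving context, can be caricatured with a simple rule table. All thresholds and values below are assumptions, far simpler than the paper's reasoning module:

```python
def gnss_noise_for_context(num_sats, hdop, urban_canyon):
    """Map GNSS quality indicators to a measurement-noise standard
    deviation for the fusion filter (illustrative lookup only)."""
    sigma = 2.0  # assumed nominal open-sky std dev, in metres
    if num_sats < 5 or hdop > 2.0:
        sigma *= 3.0   # weak geometry: trust GNSS much less
    if urban_canyon:
        sigma *= 2.0   # multipath-prone context
    return sigma
```

A context module of this kind feeds the inflated sigma into the filter's measurement covariance, so the inertial side dominates whenever the GNSS fix degrades.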
Open Access Article Autonomous Manoeuvring Systems for Collision Avoidance on Single Carriageway Roads
Sensors 2012, 12(12), 16498-16521; doi:10.3390/s121216498
Received: 13 August 2012 / Revised: 19 November 2012 / Accepted: 27 November 2012 / Published: 29 November 2012
Cited by 13 | PDF Full-text (960 KB) | HTML Full-text | XML Full-text
Abstract
The accurate perception of a vehicle’s surroundings has been the subject of study of numerous automotive researchers for many years. Although several projects in this area have been successfully completed, very few prototypes have actually been industrialized and installed in mass-produced cars. This indicates that these research efforts must continue in order to improve the present systems. Moreover, the trend to include communication systems in vehicles extends the potential of these perception systems by allowing them to transmit their information wirelessly to other vehicles that may be affected by the surveyed environment. In this paper we present a forward collision warning system based on a laser scanner that is able to detect several potentially dangerous situations. Decision algorithms try to determine the most convenient manoeuvre by evaluating the obstacles’ positions and speeds, the road geometry, etc. Once a danger is detected, the presented system can act on the actuators of the ego-vehicle as well as transmit this information to other vehicles circulating in the same area using vehicle-to-vehicle communications. The system has been tested for overtaking manoeuvres under different scenarios and the correct actions have been performed. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
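A decision step of this kind typically reduces to a time-to-collision test on the obstacle's position and relative speed. A minimal sketch with assumed thresholds, not the paper's manoeuvre logic:

```python
def time_to_collision(gap_m, ego_speed, obstacle_speed):
    """TTC in seconds for a lead obstacle in the same lane; returns
    None when the gap is opening (no collision course)."""
    closing = ego_speed - obstacle_speed
    if closing <= 0:
        return None
    return gap_m / closing

def warning_level(ttc, brake_ttc=2.0, warn_ttc=4.0):
    """Illustrative thresholds, not the paper's calibrated values."""
    if ttc is None or ttc > warn_ttc:
        return "none"
    return "brake" if ttc <= brake_ttc else "warn"
```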
Open Access Article Analysis of Continuous Steering Movement Using a Motor-Based Quantification System
Sensors 2012, 12(12), 16008-16023; doi:10.3390/s121216008
Received: 13 August 2012 / Revised: 22 October 2012 / Accepted: 16 November 2012 / Published: 22 November 2012
Cited by 2 | PDF Full-text (951 KB) | HTML Full-text | XML Full-text
Abstract
Continuous steering movement (CSM) of the upper extremity (UE) is an essential component of steering movement during vehicle driving. This study presents an integrated approach to examine the force exertion and movement pattern during CSM. We utilized a concept similar to the isokinetic dynamometer to measure the torque profiles during 180°/s constant-velocity CSM. During a steering cycle, the extremity movement can be divided into stance and swing phases based upon the hand contact information measured from the hand switch devices. Data from twelve normal young adults (six males and six females) showed that there are three typical profiles of force exertion. The two hands exhibit similar time expenditures but with asymmetric force exertions and contact times in both the clockwise (CW) and counterclockwise (CCW) steering cycles. Both hands contribute more force but with less contact time in their outward CSM directions (i.e., CW for the right hand and CCW for the left hand). These findings help us to further understand CSM and have a number of important implications for future practice in clinical training. Considerably more research is required to determine the roles of the various shoulder muscles during CSM at various speeds. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article Vehicle Dynamic Prediction Systems with On-Line Identification of Vehicle Parameters and Road Conditions
Sensors 2012, 12(11), 15778-15800; doi:10.3390/s121115778
Received: 31 August 2012 / Revised: 17 October 2012 / Accepted: 29 October 2012 / Published: 13 November 2012
Cited by 6 | PDF Full-text (1424 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a vehicle dynamics prediction system, which consists of a sensor fusion system and a vehicle parameter identification system. The sensor fusion system can obtain the six degree-of-freedom vehicle dynamics and two road angles without using a vehicle model. The vehicle parameter identification system uses the vehicle dynamics from the sensor fusion system to identify ten vehicle parameters in real time, including vehicle mass, moment of inertia, and road friction coefficients. With these two systems, future vehicle dynamics are predicted by using a vehicle dynamics model, obtained from the parameter identification system, to propagate the current vehicle state values, obtained from the sensor fusion system, forward in time. Compared with most existing work in this field, the proposed approach improves the prediction accuracy both by incorporating more vehicle dynamics into the prediction system and by using on-line identification to minimize vehicle modeling errors. Simulation results show that the proposed method successfully predicts the vehicle dynamics in a left-hand-turn event and a rollover event. The prediction inaccuracy is 0.51% in the left-hand-turn event and 27.3% in the rollover event. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
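On-line parameter identification of this kind is commonly built on recursive least squares. The scalar sketch below, identifying a hypothetical mass from F = m·a samples, only illustrates the recursion; the paper identifies ten parameters jointly, so the real estimator is vector-valued:

```python
def rls_scalar(phi, y, theta=0.0, P=1000.0, lam=0.99):
    """One recursive-least-squares step for a scalar model y = phi * theta.
    Returns the updated estimate and covariance; lam is a forgetting
    factor that lets slowly varying parameters (mass, friction) track."""
    k = P * phi / (lam + phi * P * phi)
    theta = theta + k * (y - phi * theta)
    P = (P - k * phi * P) / lam
    return theta, P

# Hypothetical example: estimate a 1500 kg mass from noise-free F = m * a.
theta, P = 0.0, 1000.0
for a, F in [(1.0, 1500.0), (0.5, 750.0), (2.0, 3000.0)]:
    theta, P = rls_scalar(a, F, theta, P)
```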
Open Access Article Combination and Selection of Traffic Safety Expert Judgments for the Prevention of Driving Risks
Sensors 2012, 12(11), 14711-14729; doi:10.3390/s121114711
Received: 5 September 2012 / Revised: 14 October 2012 / Accepted: 29 October 2012 / Published: 2 November 2012
Cited by 2 | PDF Full-text (499 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we describe a new framework for combining experts’ judgments for the prevention of driving risks in a truck cabin. In addition, the methodology shows how to choose, among the experts, the one whose predictions best fit the environmental conditions. The methodology is applied to data sets obtained from a highly immersive truck cabin simulator in natural driving conditions. A nonparametric model, based on Nearest Neighbors combined with Restricted Least Squares methods, is developed. Three experts were asked to evaluate the driving risk using a Visual Analog Scale (VAS) in order to measure the driving risk in a truck simulator in which the vehicle dynamics factors were stored. Numerical results show that the methodology is suitable for embedding in real-time systems. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article iParking: An Intelligent Indoor Location-Based Smartphone Parking Service
Sensors 2012, 12(11), 14612-14629; doi:10.3390/s121114612
Received: 3 September 2012 / Revised: 23 October 2012 / Accepted: 24 October 2012 / Published: 31 October 2012
Cited by 20 | PDF Full-text (3179 KB) | HTML Full-text | XML Full-text
Abstract
Indoor positioning technologies have been widely studied with a number of solutions being proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through the Internet in a distributed architecture. The client software is a lightweight application running on a smartphone, and it works essentially based on a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the smartphone’s built-in sensors. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. iParking is also a novel service in terms of being a cost- and energy-efficient solution. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
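The indoor-positioning core can be illustrated with nearest-neighbour WLAN fingerprinting, a common baseline for this kind of solution; the actual system fuses WLAN with the phone's built-in sensors, and the data below are made up:

```python
def locate(rss, fingerprints):
    """Nearest-neighbour WLAN fingerprinting: return the surveyed
    position whose stored RSS vector is closest (Euclidean distance)
    to the currently observed RSS vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(fingerprints, key=lambda f: dist(rss, f[0]))[1]

# Hypothetical radio map: (RSS vector in dBm, surveyed (x, y) position).
radio_map = [([-40.0, -70.0], (0.0, 0.0)),
             ([-70.0, -40.0], (5.0, 5.0))]
```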
Open Access Article Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
Sensors 2012, 12(11), 14489-14507; doi:10.3390/s121114489
Received: 16 August 2012 / Revised: 20 October 2012 / Accepted: 21 October 2012 / Published: 29 October 2012
Cited by 13 | PDF Full-text (948 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data are broadcast to the other robots through intercommunication and combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
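Covariance intersection fuses estimates whose cross-correlation is unknown, which is exactly the situation when robots exchange tracks of the same pedestrian. A minimal 2-D sketch with a grid search over the weight w (the grid resolution is an assumption; real implementations often optimise w analytically):

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def covariance_intersection(x1, P1, x2, P2, steps=50):
    """CI fusion of two 2-D estimates with unknown cross-correlation:
    P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen by grid search to
    minimise the fused trace; consistent whatever the correlation."""
    I1, I2 = inv2(P1), inv2(P2)
    best = None
    for k in range(steps + 1):
        w = k / steps
        Pinv = [[w * I1[i][j] + (1 - w) * I2[i][j] for j in range(2)]
                for i in range(2)]
        P = inv2(Pinv)
        tr = P[0][0] + P[1][1]
        if best is None or tr < best[0]:
            # information-weighted mean: x = P (w I1 x1 + (1-w) I2 x2)
            y = [w * (I1[i][0] * x1[0] + I1[i][1] * x1[1])
                 + (1 - w) * (I2[i][0] * x2[0] + I2[i][1] * x2[1])
                 for i in range(2)]
            x = [P[i][0] * y[0] + P[i][1] * y[1] for i in range(2)]
            best = (tr, x, P)
    return best[1], best[2]
```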
Open Access Article A Two-Layers Based Approach of an Enhanced-Map for Urban Positioning Support
Sensors 2012, 12(11), 14508-14524; doi:10.3390/s121114508
Received: 10 September 2012 / Revised: 19 October 2012 / Accepted: 19 October 2012 / Published: 29 October 2012
Cited by 5 | PDF Full-text (1079 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a two-layer based enhanced map that can support navigation in urban environments. One layer is dedicated to describing the drivable road with a special focus on the accurate description of its bounds. This feature can support positioning and advanced map-matching when compared with standard polyline-based maps. The other layer depicts building heights and locations, thus enabling the detection of non-line-of-sight signals coming from GPS satellites not in direct view. Both the concept and the methodology for creating these enhanced maps are shown in the paper. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article Reliable Freestanding Position-Based Routing in Highway Scenarios
Sensors 2012, 12(11), 14262-14291; doi:10.3390/s121114262
Received: 30 August 2012 / Revised: 11 October 2012 / Accepted: 15 October 2012 / Published: 24 October 2012
Cited by 11 | PDF Full-text (751 KB) | HTML Full-text | XML Full-text
Abstract
Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering free-space propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
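Position-based routing of this family forwards packets greedily toward the destination's position. A toy sketch of that core step; FPBR's reliability mechanisms and recovery from local maxima go well beyond it:

```python
def next_hop(me, dest, neighbors):
    """Greedy position-based forwarding: pick the neighbour that makes
    the most progress toward the destination; return None when no
    neighbour is closer than we are (a local maximum)."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best = min(neighbors, key=lambda n: d(n, dest), default=None)
    if best is None or d(best, dest) >= d(me, dest):
        return None
    return best
```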
Open Access Article An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft
Sensors 2012, 12(10), 13212-13224; doi:10.3390/s121013212
Received: 14 August 2012 / Revised: 21 September 2012 / Accepted: 21 September 2012 / Published: 27 September 2012
Cited by 8 | PDF Full-text (403 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an adaptive information fusion method to improve the accuracy and reliability of altitude measurements for small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flights and autonomous landing flight tests. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
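The noise-adaptation step can be sketched in scalar, innovation-based form. This is a simplified stand-in for the paper's maximum-a-posteriori estimator, not its actual update equations:

```python
def adapt_measurement_noise(R_prev, innovation, S_pred, k):
    """One step of an innovation-based measurement-noise update:
    blend the running estimate of R with (innovation^2 - S_pred),
    where S_pred is the predicted (state-driven) part of the
    innovation covariance, so R tracks the sensor's actual noise."""
    sample = innovation ** 2 - S_pred
    R_new = ((k - 1) / k) * R_prev + (1 / k) * sample
    return max(R_new, 1e-6)  # keep the estimate positive
```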
Open Access Article A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration
Sensors 2012, 12(9), 12772-12797; doi:10.3390/s120912772
Received: 13 July 2012 / Revised: 6 September 2012 / Accepted: 6 September 2012 / Published: 18 September 2012
Cited by 7 | PDF Full-text (1651 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, and rooms with a minimal number of exits. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
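A visited-region store of the kind described can be as simple as a hash set of coarse grid cells. Cell size and interface here are illustrative, not the paper's data structure:

```python
class VisitedGrid:
    """Coarse record of already-visited regions; exploration behaviours
    consult it to bias motion toward unexplored cells."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = set()

    def key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def mark(self, x, y):
        self.cells.add(self.key(x, y))

    def visited(self, x, y):
        return self.key(x, y) in self.cells
```

Membership tests and insertions are O(1) on average, so the structure stays cheap even over long exploration runs.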
Open Access Article Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
Sensors 2012, 12(9), 12386-12404; doi:10.3390/s120912386
Received: 31 July 2012 / Revised: 20 August 2012 / Accepted: 23 August 2012 / Published: 12 September 2012
Cited by 10 | PDF Full-text (2163 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system uses the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on the Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and the SURF matching method is used for sign classification. The results of the experiments validate the effectiveness of the proposed algorithms and of the whole system. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
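The Z-variance intuition, that curbs show up as windows of laser points with abnormal height variance, can be sketched as below. Window size and threshold are assumptions, not the paper's calibrated values:

```python
def z_variance(zs):
    """Population variance of a list of height samples."""
    m = sum(zs) / len(zs)
    return sum((z - m) ** 2 for z in zs) / len(zs)

def curb_candidates(scan_z, win=5, thresh=0.01):
    """Flag the start indices of scan windows whose height variance
    exceeds a threshold, i.e. where the profile steps up like a curb."""
    hits = []
    for i in range(len(scan_z) - win + 1):
        if z_variance(scan_z[i:i + win]) > thresh:
            hits.append(i)
    return hits
```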
Open Access Article Recognition Stage for a Speed Supervisor Based on Road Sign Detection
Sensors 2012, 12(9), 12153-12168; doi:10.3390/s120912153
Received: 23 July 2012 / Revised: 30 August 2012 / Accepted: 31 August 2012 / Published: 5 September 2012
Cited by 2 | PDF Full-text (528 KB) | HTML Full-text | XML Full-text
Abstract
Traffic accidents are still one of the main health problems in the world. A number of measures have been applied in order to reduce the number of injuries and fatalities on roads, e.g., the implementation of Advanced Driver Assistance Systems (ADAS) based on image processing. In this paper, a real-time speed supervisor based on road sign recognition that can work both in urban and non-urban environments is presented. The system is able to recognize 135 road signs, belonging to the danger, yield, prohibition, obligation and indication types, and sends warning messages to the driver based on the combination of two pieces of information: the current speed of the car and the road sign symbol. The core of this paper is the comparison between the two main methods which have traditionally been used for the detection and recognition of road signs: template matching (TM) and neural networks (NN). The advantages and disadvantages of the two approaches are presented and discussed. Additionally, we show how the use of well-known algorithms to avoid illumination issues reduces the number of images needed to train a neural network. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
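The template-matching side of such a comparison typically scores candidate regions with normalised cross-correlation, which is invariant to brightness offset and contrast scaling. A minimal sketch (not the paper's implementation):

```python
def ncc(patch, template):
    """Normalised cross-correlation score in [-1, 1] between two
    equally sized grey-level patches (lists of rows)."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [v - ma for v in a]
    db = [v - mb for v in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0
```

A detector slides each sign template over the candidate region and accepts the class whose best NCC score clears a threshold.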
Open Access Article Intelligent Urban Public Transportation for Accessibility Dedicated to People with Disabilities
Sensors 2012, 12(8), 10678-10692; doi:10.3390/s120810678
Received: 19 June 2012 / Revised: 23 July 2012 / Accepted: 31 July 2012 / Published: 6 August 2012
Cited by 11 | PDF Full-text (607 KB) | HTML Full-text | XML Full-text
Abstract
The traditional urban public transport system generally cannot provide an effective access service for people with disabilities, especially for disabled, wheelchair and blind (DWB) passengers. In this paper, based on advanced information and communication technologies (ICT) and green technologies (GT) concepts, a dedicated urban public transportation access service named Mobi+ is introduced, which facilitates the mobility of DWB passengers. The Mobi+ project consists of three subsystems: a wireless communication subsystem, which provides data exchange and network connection services between buses and stations in complex urban environments; the bus subsystem, which provides DWB class detection and bus arrival notification services; and the station subsystem, which implements urban environmental surveillance and auxiliary bus access services. The Mobi+ card, which supports multiple microcontrollers and transceivers, adopts a fault-tolerant component-based hardware architecture into which the dedicated embedded system software, i.e., an operating system micro-kernel and a wireless protocol, has been integrated. The dedicated Mobi+ embedded system provides fault-tolerant, resource-aware communication and scheduling mechanisms to ensure reliability in data exchange and service provision. At present, the Mobi+ system has been implemented on the buses and stations of line ‘2’ in the city of Clermont-Ferrand (France). The experimental results show that, on the one hand, the Mobi+ prototype system meets the design expectations and provides an effective urban bus access service for people with disabilities; on the other hand, the Mobi+ system is easy to deploy on buses and at bus stations thanks to its low energy consumption and small form factor. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
Open Access Article AUV SLAM and Experiments Using a Mechanical Scanning Forward-Looking Sonar
Sensors 2012, 12(7), 9386-9410; doi:10.3390/s120709386
Received: 18 May 2012 / Revised: 27 June 2012 / Accepted: 28 June 2012 / Published: 9 July 2012
Cited by 21 | PDF Full-text (2047 KB) | HTML Full-text | XML Full-text
Abstract
Navigation technology is one of the most important challenges in the application of autonomous underwater vehicles (AUVs), which navigate in complex undersea environments. The ability to localize a robot and accurately map its surroundings simultaneously, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite for truly autonomous robots. In this paper, a modified FastSLAM algorithm is proposed and used for the navigation of our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified FastSLAM implements the update using the on-board sensors of the C-Ranger. The algorithm also employs a data association method which combines the single-particle maximum likelihood method with a modified negative evidence method, and uses rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials of the C-Ranger were conducted. The experimental results show that the modified FastSLAM employed for the navigation of the C-Ranger AUV is more effective and accurate than traditional methods. Full article
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
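The rank-based resampling mentioned in the abstract can be illustrated with a short sketch: particles are selected with probability proportional to the rank of their weight rather than the weight itself, which flattens the selection distribution and mitigates particle depletion. This is a generic interpretation of the technique, not the authors' exact implementation; the function name and ranking scheme are assumptions.

```python
import numpy as np

def rank_based_resample(weights, rng=None):
    """Resample particle indices with selection probability proportional
    to each weight's rank (1 = smallest, n = largest) instead of the
    weight itself.  Ranking keeps one dominant weight from monopolizing
    the resampled set, easing particle depletion."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    ranks = np.empty(n)
    ranks[np.argsort(weights)] = np.arange(1, n + 1)  # rank each weight
    probs = ranks / ranks.sum()                       # normalize ranks
    return rng.choice(n, size=n, p=probs)

# Example: one dominant weight still leaves the others a real chance.
w = np.array([0.90, 0.04, 0.03, 0.02, 0.01])
idx = rank_based_resample(w)
```

Under plain multinomial resampling the first particle would be drawn about 90% of the time; under the rank scheme its probability drops to 5/15, preserving diversity at the cost of slower convergence to the dominant hypothesis.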
Open Access Article: Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors
Sensors 2012, 12(7), 8877-8894; doi:10.3390/s120708877
Received: 14 May 2012 / Revised: 14 June 2012 / Accepted: 18 June 2012 / Published: 27 June 2012
Cited by 10
Abstract
A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to determine the conditions under which the nonlinear system is observable. It is proved that the system is observable when: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results validate these observability conditions.
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
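The observability rank criterion invoked in this abstract has a simple linear analogue that can be sketched numerically: stack the output matrix against successive powers of the state matrix and check the rank. This is a generic illustration of the rank test on a toy system, not the paper's visual/inertial/magnetic model; the function and the example system are assumptions.

```python
import numpy as np

def observability_rank(F, H):
    """Build the observability matrix O = [H; HF; HF^2; ...; HF^(n-1)]
    for a (linearized) system x' = Fx, y = Hx and return its rank.
    The system is observable iff rank(O) == n; this is the linear
    counterpart of the Lie-derivative rank criterion."""
    n = F.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ F)   # next block: previous block times F
    return np.linalg.matrix_rank(np.vstack(blocks))

# Toy example: a double integrator observed through position only.
F = np.array([[0.0, 1.0],
              [0.0, 0.0]])
H = np.array([[1.0, 0.0]])
rank = observability_rank(F, H)
```

Measuring position yields full rank 2 (velocity is recoverable by differentiation), whereas measuring velocity alone (H = [0, 1]) yields rank 1: the absolute position is unobservable, mirroring the paper's point that observability depends on which states the measurements excite.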
Open Access Article: Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation
Sensors 2012, 12(6), 6764-6801; doi:10.3390/s120606764
Received: 6 March 2012 / Revised: 19 April 2012 / Accepted: 29 April 2012 / Published: 25 May 2012
Cited by 13
Abstract
The paper presents an algorithm for estimating a pedestrian's location in an urban environment. The algorithm is based on a particle filter and fuses several data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. The inertial sensors estimate the relative displacement of the pedestrian: a gyroscope tracks changes in heading, while an accelerometer counts the pedestrian's steps and estimates their lengths. So-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences, etc. This limits position inaccuracy to ca. 10 m. Incorporating depth estimates from the stereo camera, compared against a 3D model of the environment, reduces positioning errors further. As a result, for 90% of the time, the algorithm estimates the pedestrian's location with an error smaller than 2 m, compared to 6.5 m for navigation based solely on GPS.
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)
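The inertial part of the pipeline, integrating gyroscope rates into a heading and advancing the position by one step length per detected step, can be sketched as follows. This is a simplified toy illustration of pedestrian dead reckoning; the function, variable names and step model are assumptions, not the authors' implementation.

```python
import math

def dead_reckon(start_xy, heading0, gyro_rates, dt, step_events):
    """Toy pedestrian dead reckoning: integrate gyroscope angular rates
    to track heading, and on each detected step advance the position by
    the step length along the current heading.

    step_events: list of (sample_index, step_length_m) pairs, as might
    come from an accelerometer-based step detector."""
    x, y = start_xy
    heading = heading0
    positions = [(x, y)]
    step_iter = iter(step_events)
    next_step = next(step_iter, None)
    for i, rate in enumerate(gyro_rates):
        heading += rate * dt                  # integrate angular rate
        while next_step is not None and next_step[0] == i:
            length = next_step[1]
            x += length * math.cos(heading)   # advance along heading
            y += length * math.sin(heading)
            positions.append((x, y))
            next_step = next(step_iter, None)
    return positions

# Straight walk east: zero turn rate, two 0.7 m steps.
path = dead_reckon((0.0, 0.0), 0.0, [0.0] * 4, 0.1, [(1, 0.7), (3, 0.7)])
```

In the paper such dead-reckoned displacements serve as the particle filter's motion model, while GPS, probability maps and stereo depth act as measurement updates that correct the accumulated drift.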
Journal Contact

MDPI AG
Sensors Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
sensors@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18