Special Issue "Stereo Vision-Based Perception, Navigation and Control for Intelligent Autonomous Systems"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 31 January 2022.

Special Issue Editors

Dr. Adrian Burlacu
Guest Editor
Department of Automatic Control and Applied Informatics, Gheorghe Asachi Technical University of Iasi, Iasi, Romania
Interests: robotics; visual servoing; computer vision; assistive technologies; intelligent systems
Dr. Enric Cervera
Guest Editor
Department of Computer Science and Engineering, Jaume I University, Castellon de la Plana, Spain
Interests: AI and robotics programming; robotics education; active perceptual learning for manipulation; visual servoing and perceptual grounding

Special Issue Information

Dear Colleagues,

Vision is one of the most important forms of environmental awareness that can be added to a system. With technological advances in reliable artificial vision, interactions between different autonomous systems have become more efficient and versatile.

The emerging role of machine vision in the motion planning and control of intelligent autonomous systems is one of the most discussed topics in multiple research areas (computer vision, robotics, artificial intelligence, assistive devices, etc.). Scene representation methods organize information from all sensors and data sources to build an interface between perception, navigation, and control. Stereo vision systems are among the most commonly used sensors to gather data from 3D environments. Stereo vision applications vary from autonomous driving to human–robot interactions and assistive devices for the visually impaired.

The key aim of this Special Issue is to bring together innovative research that uses off-the-shelf or custom-made stereo vision devices to extend the capabilities of intelligent autonomous systems. Contributions from all fields related to the integration of stereo vision into perception and navigation architectures are of interest, particularly including, but not limited to, the following topics:

  • Stereo vision for autonomous UAVs;
  • Stereo-vision-based collaborative perception for teams of mobile robots;
  • Stereo vision for autonomous driving;
  • Stereo-vision-based visual servoing;
  • Stereo-vision-based human–robot skill transfer;
  • Stereo vision perception and navigation for the visually impaired;
  • Stereo omnidirectional vision devices and applications;
  • Biologically inspired stereo vision for robotics;
  • Good experimentation and reproducibility in robotic stereo systems.

Dr. Adrian Burlacu
Dr. Enric Cervera
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • signal processing
  • data fusion and deep learning in sensor systems
  • human–computer interaction
  • localization and object tracking
  • image sensors
  • action recognition
  • 3D sensing
  • wearable sensors
  • devices and electronics

Published Papers (3 papers)


Research

Article
Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications
Sensors 2021, 21(23), 7888; https://doi.org/10.3390/s21237888 - 27 Nov 2021
Abstract
The rapid growth of autonomous unmanned aerial vehicles (UAVs) has made them a promising platform for real-world applications. In particular, a UAV equipped with a vision system can be leveraged for surveillance. This paper proposes a learning-based UAV system for achieving autonomous surveillance, in which the UAV can autonomously detect, track, and follow a target object without human intervention. Specifically, we adopted the YOLOv4-Tiny algorithm for semantic object detection and then consolidated it with a 3D object pose estimation method and a Kalman filter to enhance perception performance. In addition, UAV path planning for a surveillance maneuver is integrated to complete the fully autonomous system. The perception module is assessed on a quadrotor UAV, while the whole system is validated through flight experiments. The experimental results verify the robustness, effectiveness, and reliability of the autonomous object-tracking UAV system in performing surveillance tasks. The source code is released to the research community for future reference.
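The detector-plus-filter pipeline described above can be illustrated with a minimal sketch: a constant-velocity Kalman filter that smooths noisy 2D detections (e.g., bounding-box centers from a detector such as YOLOv4-Tiny). The class name, state layout, and noise values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class KalmanTracker2D:
    """Constant-velocity Kalman filter over state [x, y, vx, vy]."""

    def __init__(self, dt=1.0):
        # State transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        # Measurement model: we only observe (x, y).
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 1e-2   # process noise (assumed value)
        self.R = np.eye(2) * 1.0    # measurement noise (assumed value)
        self.P = np.eye(4) * 10.0   # initial state uncertainty
        self.x = np.zeros(4)

    def predict(self):
        # Propagate state and covariance one step forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Fuse a noisy detection z = [x, y] into the state estimate.
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a tracking loop, `predict()` runs every frame and `update()` runs whenever the detector returns a box, so the estimate coasts through short detection dropouts.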

Article
Optimizing 3D Convolution Kernels on Stereo Matching for Resource Efficient Computations
Sensors 2021, 21(20), 6808; https://doi.org/10.3390/s21206808 - 13 Oct 2021
Abstract
Although recent stereo matching algorithms achieve significant results on public benchmarks, their heavy computational cost remains an open problem. Most works focus on designing an architecture to reduce the computational complexity, whereas we aim at optimizing the 3D convolution kernels of the Pyramid Stereo Matching Network (PSMNet). In this paper, we design a series of comparative experiments exploring the performance of well-known convolution kernels on PSMNet. Our model reduces the computational complexity from 256.66 G MAdd (multiply-add operations) to 69.03 G MAdd (198.47 G MAdd to 10.84 G MAdd when considering only the 3D convolutional neural networks) without losing accuracy. On the Scene Flow and KITTI 2015 datasets, our model achieves results comparable to the state of the art at a low computational cost.
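A back-of-the-envelope sketch shows why the kernel choice dominates the cost: counting multiply-add (MAdd) operations for a dense 3×3×3 convolution over a cost volume versus one possible factored spatial/disparity pair. The tensor shapes and the specific factorization below are illustrative assumptions, not the paper's actual architecture.

```python
def madd_conv3d(c_in, c_out, d, h, w, kd, kh, kw):
    # Each of the c_out * d * h * w output elements needs
    # c_in * kd * kh * kw multiply-adds.
    return c_out * d * h * w * c_in * kd * kh * kw

# Dense 3x3x3 kernel on a 32-channel cost volume of size 48x64x128
# (disparity x height x width; shapes chosen for illustration):
dense = madd_conv3d(32, 32, 48, 64, 128, 3, 3, 3)

# Factored into a spatial 1x3x3 pass plus a disparity-wise 3x1x1 pass:
factored = (madd_conv3d(32, 32, 48, 64, 128, 1, 3, 3)
            + madd_conv3d(32, 32, 48, 64, 128, 3, 1, 1))

print(dense / factored)  # 27 / (9 + 3) = 2.25x more MAdd for the dense kernel
```

The per-element ratio (27 vs. 12 taps) is independent of the volume size, which is why swapping kernels scales the whole network's MAdd count.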

Article
A Joint 2D-3D Complementary Network for Stereo Matching
Sensors 2021, 21(4), 1430; https://doi.org/10.3390/s21041430 - 18 Feb 2021
Cited by 1
Abstract
Stereo matching is an important research field of computer vision. Due to the dimensionality of cost aggregation, current neural network-based stereo methods struggle to balance speed and accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map and construct a disparity range to guide the 3D aggregation network, which significantly improves accuracy and reduces the computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the official KITTI results, our network generates an accurate result in 80 ms on a modern GPU. Compared to other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network achieves a large improvement in accuracy. Meanwhile, it is significantly faster than other 3D stereo networks (5× faster than PSMNet, 7.5× faster than CSN, and 22.5× faster than GANet), demonstrating the effectiveness of our method.
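The guidance idea can be sketched as follows: a coarse disparity map from the fast 2D network bounds the disparity search of the 3D aggregation stage, so each pixel evaluates only a narrow band instead of the full range. The function name and the radius are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def disparity_search_bounds(coarse_disp, max_disp, radius=4):
    """Per-pixel disparity band [lo, hi] around a coarse estimate."""
    lo = np.clip(coarse_disp - radius, 0, max_disp - 1)
    hi = np.clip(coarse_disp + radius, 0, max_disp - 1)
    return lo, hi

# Toy 2x2 coarse disparity map with a 64-level disparity range:
coarse = np.array([[10, 50],
                   [0, 63]], dtype=int)
lo, hi = disparity_search_bounds(coarse, max_disp=64, radius=4)
# Each pixel now evaluates at most 2 * radius + 1 = 9 disparity
# hypotheses in the 3D aggregation stage instead of all 64.
```

Shrinking the cost volume's disparity axis from 64 to 9 levels is where the reported speedup over full-range 3D networks comes from.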
