
Stereo Vision-Based Perception, Navigation and Control for Intelligent Autonomous Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (1 March 2022) | Viewed by 13102

Special Issue Editors

Department of Automatic Control and Applied Informatics, Gheorghe Asachi Technical University of Iasi, Iasi, Romania
Interests: robotics; visual servoing; computer vision; assistive technologies; intelligent systems
Department of Computer Science and Engineering, Jaume I University, Castellon de la Plana, Spain
Interests: AI and robotics programming; robotics education; active perceptual learning for manipulation; visual servoing and perceptual grounding

Special Issue Information

Dear Colleagues,

Vision is one of the most important perceptual capabilities that can be added to a system. With the technological advances achieved in reliable artificial vision, interactions between autonomous systems have become more efficient and versatile.

The emerging role of machine vision in the motion planning and control of intelligent autonomous systems is one of the most discussed topics across multiple research areas (computer vision, robotics, artificial intelligence, assistive devices, etc.). Scene representation methods organize information from all sensors and data sources to build an interface between perception, navigation, and control. Stereo vision systems are among the most commonly used sensors for gathering data from 3D environments. Stereo vision applications range from autonomous driving to human–robot interaction and assistive devices for the visually impaired.
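As a rough illustration of why stereo devices are so widely used for 3D perception, the sketch below recovers metric depth from a rectified stereo pair with OpenCV's block matcher. The image paths, focal length, and baseline are placeholder assumptions, not values tied to any particular system discussed in this issue.

```python
import cv2
import numpy as np

# Rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# StereoBM returns disparities in 16.4 fixed point; divide by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Triangulation: Z = f * B / d, with focal length f in pixels and
# baseline B in metres (both placeholder calibration values).
f, B = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```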

The key aim of this Special Issue is to bring together innovative research that uses off-the-shelf or custom-made stereo vision devices to extend the capabilities of intelligent autonomous systems. Contributions from all fields related to the integration of stereo vision into perception and navigation architectures are of interest, including, but not limited to, the following topics:

  • Stereo vision for autonomous UAVs;
  • Stereo-vision-based collaborative perception for teams of mobile robots;
  • Stereo vision for autonomous driving;
  • Stereo-vision-based visual servoing;
  • Stereo-vision-based human–robot skill transfer;
  • Stereo vision perception and navigation for the visually impaired;
  • Stereo omnidirectional vision devices and applications;
  • Biologically inspired stereo vision for robotics;
  • Good experimentation and reproducibility in robotic stereo systems.

Dr. Adrian Burlacu
Dr. Enric Cervera
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • signal processing
  • data fusion and deep learning in sensor systems
  • human–computer interaction
  • localization and object tracking
  • image sensors
  • action recognition
  • 3D sensing
  • wearable sensors
  • devices and electronics

Published Papers (4 papers)


Research

12 pages, 2110 KiB  
Article
A New Deep Model for Detecting Multiple Moving Targets in Real Traffic Scenarios: Machine Vision-Based Vehicles
by Xiaowei Xu, Hao Xiong, Liu Zhan, Grzegorz Królczyk, Rafal Stanislawski, Paolo Gardoni and Zhixiong Li
Sensors 2022, 22(10), 3742; https://doi.org/10.3390/s22103742 - 14 May 2022
Viewed by 1506
Abstract
When performing multiple target detection, it is difficult to detect small and occluded targets in complex traffic scenes. To this end, an improved YOLOv4 detection method is proposed in this work. First, the network structure of the original YOLOv4 is adjusted: the 4× down-sampling feature map of the backbone network is introduced into the neck network and spliced with the 8× down-sampling feature map to form a four-scale detection structure, which strengthens the fusion of deep and shallow semantic information and improves the detection accuracy of small targets. Then, the convolutional block attention module (CBAM) is added to the neck network to enhance feature learning in both the spatial and channel dimensions. Lastly, the detection rate of occluded targets is improved by using the soft non-maximum suppression (Soft-NMS) algorithm based on the distance intersection over union (DIoU), which decays the scores of overlapping bounding boxes instead of deleting them. An experimental evaluation on the KITTI dataset demonstrates that the proposed model effectively improves multiple target detection accuracy: the mean average precision (mAP) of the improved YOLOv4 reaches 81.23%, which is 3.18% higher than the original YOLOv4, and the computation speed reaches 47.32 FPS. Compared with existing popular detection models, the proposed model offers both higher detection accuracy and higher computation speed.
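The DIoU-based Soft-NMS step is the easiest part of this pipeline to isolate. Below is a minimal NumPy sketch of that idea, decaying the scores of overlapping boxes with a Gaussian of their DIoU rather than deleting them; the decay function and thresholds are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def diou(box, boxes):
    """Distance-IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + areas - inter)
    # Squared distance between box centres.
    centre_dist = (((box[0] + box[2]) - (boxes[:, 0] + boxes[:, 2])) ** 2
                   + ((box[1] + box[3]) - (boxes[:, 1] + boxes[:, 3])) ** 2) / 4.0
    # Squared diagonal of the smallest box enclosing both.
    ex1 = np.minimum(box[0], boxes[:, 0])
    ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2])
    ey2 = np.maximum(box[3], boxes[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - centre_dist / diag

def soft_nms_diou(boxes, scores, sigma=0.5, score_thresh=1e-3):
    """Gaussian Soft-NMS: decay overlapping scores by DIoU instead of deleting boxes."""
    scores = scores.copy()
    keep, idxs = [], np.arange(len(scores))
    while idxs.size > 0:
        best = idxs[np.argmax(scores[idxs])]
        keep.append(best)
        idxs = idxs[idxs != best]
        if idxs.size == 0:
            break
        overlap = np.clip(diou(boxes[best], boxes[idxs]), 0.0, None)
        scores[idxs] *= np.exp(-overlap ** 2 / sigma)   # Gaussian decay
        idxs = idxs[scores[idxs] > score_thresh]        # drop near-zero scores
    return keep
```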

22 pages, 5463 KiB  
Article
Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications
by Li-Yu Lo, Chi Hao Yiu, Yu Tang, An-Shik Yang, Boyang Li and Chih-Yung Wen
Sensors 2021, 21(23), 7888; https://doi.org/10.3390/s21237888 - 27 Nov 2021
Cited by 36 | Viewed by 6230
Abstract
The ever-burgeoning growth of autonomous unmanned aerial vehicles (UAVs) has made them a promising platform for real-world applications. In particular, a UAV equipped with a vision system can be leveraged for surveillance. This paper proposes a learning-based UAV system for autonomous surveillance, in which the UAV detects, tracks, and follows a target object without human intervention. Specifically, the YOLOv4-Tiny algorithm is adopted for semantic object detection and consolidated with a 3D object pose estimation method and a Kalman filter to enhance perception performance. In addition, UAV path planning for a surveillance maneuver is integrated to complete the fully autonomous system. The perception module is assessed on a quadrotor UAV, while the whole system is validated through flight experiments. The experimental results verify the robustness, effectiveness, and reliability of the autonomous object tracking UAV system in performing surveillance tasks. The source code is released to the research community for future reference.
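A core ingredient of the perception module described above is the Kalman filter that smooths the 3D position produced by the detector and pose estimator. Below is a minimal constant-velocity sketch of that idea; the state layout, time step, and noise magnitudes are illustrative assumptions rather than the paper's tuning.

```python
import numpy as np

class KalmanTracker3D:
    """Constant-velocity Kalman filter over state [x, y, z, vx, vy, vz]."""

    def __init__(self, dt=0.05, q=1e-2, r=1e-1):
        self.x = np.zeros(6)                        # state estimate
        self.P = np.eye(6)                          # state covariance
        self.F = np.eye(6)                          # transition: x += v * dt
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = q * np.eye(6)                      # process noise (assumed)
        self.R = r * np.eye(3)                      # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        """z: measured 3D position from the detector + pose estimator."""
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```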

18 pages, 2666 KiB  
Article
Optimizing 3D Convolution Kernels on Stereo Matching for Resource Efficient Computations
by Jianqiang Xiao, Dianbo Ma and Satoshi Yamane
Sensors 2021, 21(20), 6808; https://doi.org/10.3390/s21206808 - 13 Oct 2021
Cited by 3 | Viewed by 2160
Abstract
Despite recent stereo matching algorithms achieving significant results on public benchmarks, their heavy computational requirements remain an unsolved problem. Most works focus on designing an architecture to reduce the computational complexity, whereas we aim at optimizing the 3D convolution kernels of the Pyramid Stereo Matching Network (PSMNet). In this paper, we design a series of comparative experiments exploring the performance of well-known convolution kernels on PSMNet. Our model reduces the computational cost from 256.66 G MAdd (multiply-add operations) to 69.03 G MAdd (198.47 G MAdd to 10.84 G MAdd when considering only the 3D convolution layers) without losing accuracy. On the Scene Flow and KITTI 2015 datasets, our model achieves results comparable to the state of the art at a low computational cost.
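The arithmetic behind such savings is easy to reproduce. The sketch below counts the multiply-adds of one standard 3×3×3 convolution against a factorized alternative; the tensor shape, channel counts, and the particular factorization are illustrative assumptions, not PSMNet's actual configuration or the kernels the paper evaluates.

```python
def conv3d_madd(c_in, c_out, kd, kh, kw, D, H, W):
    """MAdd of one 3D convolution over a D x H x W output volume."""
    return c_in * c_out * kd * kh * kw * D * H * W

# Example cost-volume stage: 32 -> 32 channels over a 48 x 64 x 128 volume.
full = conv3d_madd(32, 32, 3, 3, 3, 48, 64, 128)

# A factorized alternative (1x3x3 spatial followed by 3x1x1 along disparity),
# one of many cheaper kernel designs one could compare:
factored = (conv3d_madd(32, 32, 1, 3, 3, 48, 64, 128)
            + conv3d_madd(32, 32, 3, 1, 1, 48, 64, 128))

print(f"full 3x3x3 : {full / 1e9:.2f} G MAdd")        # ~10.87 G MAdd
print(f"factorized : {factored / 1e9:.2f} G MAdd")    # ~4.83 G MAdd, 2.25x fewer
```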

14 pages, 17226 KiB  
Article
A Joint 2D-3D Complementary Network for Stereo Matching
by Xiaogang Jia, Wei Chen, Zhengfa Liang, Xin Luo, Mingfei Wu, Chen Li, Yulin He, Yusong Tan and Libo Huang
Sensors 2021, 21(4), 1430; https://doi.org/10.3390/s21041430 - 18 Feb 2021
Cited by 5 | Viewed by 2220
Abstract
Stereo matching is an important research field in computer vision. Due to the dimensionality of cost aggregation, current neural network-based stereo methods struggle to trade off speed against accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map, then construct a disparity range from it to guide the 3D aggregation network, which significantly improves accuracy and reduces computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the KITTI official website results, our network generates an accurate result in 80 ms on a modern GPU. Compared with other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network achieves a large improvement in accuracy. Meanwhile, it is significantly faster than 3D stereo networks (5× faster than PSMNet, 7.5× faster than CSN, and 22.5× faster than GANet), demonstrating the effectiveness of our method.
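The guiding idea, building the 3D cost volume only in a narrow band around the coarse 2D disparity, can be sketched compactly. The PyTorch snippet below is one minimal rendering under assumed shapes and a ±radius search window; it is not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def narrowed_cost_volume(feat_l, feat_r, coarse_disp, radius=4):
    """Build a cost volume only around the coarse disparity estimate.

    feat_l, feat_r: (B, C, H, W) left/right feature maps.
    coarse_disp:    (B, 1, H, W) rough disparity from the 2D network.
    Returns:        (B, C, 2*radius+1, H, W) narrowed cost volume.
    """
    B, C, H, W = feat_l.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs = xs.to(feat_l).expand(B, H, W)
    ys = ys.to(feat_l).expand(B, H, W)

    volume = []
    for offset in range(-radius, radius + 1):
        d = coarse_disp.squeeze(1) + offset           # candidate disparity
        x_src = xs - d                                # matching x in right image
        # Sampling grid in normalized [-1, 1] coordinates for grid_sample.
        grid = torch.stack([2 * x_src / (W - 1) - 1,
                            2 * ys / (H - 1) - 1], dim=-1)
        warped = F.grid_sample(feat_r, grid, align_corners=True,
                               padding_mode="zeros")
        volume.append(feat_l * warped)                # correlation-style cost
    return torch.stack(volume, dim=2)
```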
