
Vision-Based Sensors in Navigation: Image Processing and Understanding

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 14372

Special Issue Editor


Prof. Dr. Jochen Lang
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
Interests: physics-based and data-driven modeling for computer graphics and computer vision; visual tracking; image-based models; virtual environments and computational photography

Special Issue Information

Dear Colleagues,

The rising trend of vision-based measurement in all areas of sensing puts the focus on image processing and image understanding. This trend is supported by the proliferation of cameras in our everyday lives and in many application areas, due in part to the low cost of these sensors. The utility of the images and videos acquired from these sensors has been growing with the emergence of machine learning as the dominant paradigm for processing them. A key application area of visual sensors, image processing, and image understanding is navigation. This includes the autonomous or assisted navigation of vehicles of all kinds, but also the modeling and mapping of environments for augmented reality and automated manufacturing, as well as localization in these environments based on visual sensors. This Special Issue therefore targets novel image processing and understanding using machine learning and other approaches for the successful deployment of sensors in navigation. Many challenges remain, including the robustness of methods in dynamic environments, the efficient use of data, and the reduction in demand for human supervision, e.g., through ground-truth annotations. The targeted application areas include vehicular and robotic navigation, navigation and mapping in augmented reality, and applications in machine vision and manufacturing.

Prof. Dr. Jochen Lang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Vision-based measurement
  • Image processing
  • Machine learning
  • Localization
  • Visual-SLAM
  • Navigation
  • Autonomous vehicles

Published Papers (5 papers)


Research

22 pages, 5956 KiB  
Article
AWANet: Attentive-Aware Wide-Kernels Asymmetrical Network with Blended Contour Information for Salient Object Detection
by Inam Ullah, Muwei Jian, Kashif Shaheed, Sumaira Hussain, Yuling Ma, Lixian Xu and Khan Muhammad
Sensors 2022, 22(24), 9667; https://doi.org/10.3390/s22249667 - 09 Dec 2022
Cited by 5 | Viewed by 1650
Abstract
Although deep learning-based techniques for salient object detection have improved considerably over recent years, estimated saliency maps still exhibit imprecise predictions owing to the internal complexity and indefinite boundaries of salient objects of varying sizes. Existing methods emphasize the design of an exemplary structure to integrate multi-level features by employing multi-scale features and attention modules to filter salient regions from cluttered scenarios. We propose a saliency detection network based on three novel contributions. First, we use a dense feature extraction unit (DFEU) that introduces large kernels of asymmetric and group-wise convolutions with channel reshuffling. The DFEU extracts semantically enriched features with large receptive fields and reduces the gridding problem and parameter sizes for subsequent operations. Second, we suggest a cross-feature integration unit (CFIU) that extracts semantically enriched features at high resolution using dense short connections and sub-samples the integrated information into different attentional branches based on the inputs received at each stage of the backbone. The embedded independent attentional branches can observe the importance of the sub-regions for a salient object. With the constraint-wise growth of the sub-attentional branches at the various stages, the CFIU can efficiently avoid global and local feature dilution effects by extracting semantically enriched features via dense short connections from high and low levels. Finally, a contour-aware saliency refinement unit (CSRU) is devised that blends contour and contextual features in a progressively dense connected fashion to assist the model in obtaining more accurate saliency maps with precise boundaries in complex and perplexing scenarios. Our proposed model, analyzed with ResNet-50 and VGG-16 backbones, outperforms most contemporary techniques with fewer parameters.
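
As a rough illustration of the wide-kernel asymmetric grouped convolutions with channel reshuffling that the DFEU description mentions, the following PyTorch sketch decomposes a large k×k kernel into 1×k and k×1 grouped convolutions followed by a channel shuffle. The channel count, kernel size, group count, and residual connection are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a wide-kernel asymmetric grouped convolution block with
# channel shuffle (illustrative; sizes and structure are assumptions).
import torch
import torch.nn as nn

class AsymmetricWideBlock(nn.Module):
    def __init__(self, channels=64, kernel=7, groups=4):
        super().__init__()
        pad = kernel // 2
        # Decompose a large k x k kernel into 1 x k and k x 1 grouped convolutions
        self.conv_h = nn.Conv2d(channels, channels, (1, kernel),
                                padding=(0, pad), groups=groups)
        self.conv_v = nn.Conv2d(channels, channels, (kernel, 1),
                                padding=(pad, 0), groups=groups)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)
        self.groups = groups

    def channel_shuffle(self, x):
        # Interleave channels across groups so information mixes between groups
        n, c, h, w = x.shape
        x = x.view(n, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    def forward(self, x):
        out = self.conv_v(self.conv_h(x))
        out = self.channel_shuffle(out)
        return self.act(self.bn(out) + x)   # residual connection for stable training

features = AsymmetricWideBlock()(torch.randn(1, 64, 56, 56))
```

The asymmetric decomposition keeps the receptive field of a large square kernel while using far fewer parameters, and the shuffle prevents the grouped convolutions from isolating channel groups from one another.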

28 pages, 12652 KiB  
Article
Single-Shot Intrinsic Calibration for Autonomous Driving Applications
by Abraham Monrroy Cano, Jacob Lambert, Masato Edahiro and Shinpei Kato
Sensors 2022, 22(5), 2067; https://doi.org/10.3390/s22052067 - 07 Mar 2022
Cited by 3 | Viewed by 2956
Abstract
In this paper, we present a first-of-its-kind method to determine clear and repeatable guidelines for single-shot camera intrinsic calibration using multiple checkerboards. With the help of a simulator, we found the position and rotation intervals that allow optimal corner-detector performance. With these intervals defined, we generated thousands of multiple-checkerboard poses and evaluated them against ground-truth values in order to obtain configurations that lead to accurate camera intrinsic parameters. We used these results to define guidelines for creating multiple-checkerboard setups. We tested and verified the robustness of the guidelines in the simulator and, additionally, in the real world with cameras of different focal lengths and distortion profiles, which helps generalize our findings. Finally, we used a 3D LiDAR (Light Detection and Ranging) sensor to project its points into the image and confirm the quality of the intrinsic parameters. We found it possible to obtain accurate intrinsic parameters for 3D applications with at least seven checkerboards in a single image that follow our positioning guidelines.
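
For readers unfamiliar with the multi-checkerboard idea, the following OpenCV sketch treats each board visible in a single image as an independent "view" for cv2.calibrateCamera. The file name, board pattern, square size, and per-board crop regions are hypothetical placeholders; the paper's guidelines govern where the boards should actually be placed.

```python
# Sketch of single-shot intrinsic calibration from multiple checkerboards in
# one image (assumed pattern size, square size, and board crop regions).
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners per board (assumed)
SQUARE = 0.05             # square size in metres (assumed)

obj_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

image = cv2.imread("single_shot.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
board_rois = [(0, 0, 800, 600), (800, 0, 800, 600)]           # hypothetical crops

obj_points, img_points = [], []
for x, y, w, h in board_rois:
    crop = image[y:y + h, x:x + w]
    found, corners = cv2.findChessboardCorners(crop, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        crop, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    corners[:, 0, 0] += x          # shift corners back to full-image coordinates
    corners[:, 0, 1] += y
    obj_points.append(obj_template)
    img_points.append(corners)

# Each detected board contributes its own extrinsics, so one image can
# constrain the shared intrinsic parameters much like several separate shots.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         image.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nK =", K)
```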

19 pages, 5988 KiB  
Article
Semantic VPS for Smartphone Localization in Challenging Urban Environments
by Max Jwo Lem Lee, Li-Ta Hsu and Hoi-Fung Ng
Sensors 2021, 21(18), 6137; https://doi.org/10.3390/s21186137 - 13 Sep 2021
Cited by 1 | Viewed by 1718
Abstract
Accurate smartphone-based outdoor localization systems in deep urban canyons are increasingly needed for various IoT applications. As smart cities have developed, building information modeling (BIM) has become widely available. This article, for the first time, presents a semantic Visual Positioning System (VPS) for accurate and robust position estimation in urban canyons where the global navigation satellite system (GNSS) tends to fail. In the offline stage, a material-segmented BIM is used to generate segmented images. In the online stage, an image taken with the smartphone camera provides textural information about the surrounding environment. The approach utilizes computer vision algorithms to segment the different material classes identified in the smartphone image. A semantic VPS method is then used to match the segmented generated images with the segmented smartphone image. Each generated image contains position information in terms of latitude, longitude, altitude, yaw, pitch, and roll. The candidate with the maximum likelihood is regarded as the precise position of the user. The positioning result achieved an accuracy of 2.0 m among high-rise buildings on a street, 5.5 m in a dense foliage environment, and 15.7 m in an alleyway. This represents an improvement in positioning of 45% compared to the current state-of-the-art method. The estimation of yaw achieved an accuracy of 2.3°, an eight-fold improvement compared to the smartphone IMU.
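
A minimal sketch of the matching step such a semantic VPS implies: a segmented query image is scored against segmented images rendered from the BIM at candidate poses, and the highest-scoring pose is returned. The per-pixel agreement score and the toy poses below are illustrative assumptions, not the paper's likelihood model.

```python
# Sketch of semantic matching between a segmented query image and BIM-rendered
# candidate images (per-pixel class agreement used as an illustrative score).
import numpy as np

def semantic_match_score(query_seg: np.ndarray, candidate_seg: np.ndarray) -> float:
    """Fraction of pixels whose material class agrees between the two label maps."""
    assert query_seg.shape == candidate_seg.shape
    return float(np.mean(query_seg == candidate_seg))

def localize(query_seg, candidates):
    """candidates: list of (pose, label_map); pose = (lat, lon, alt, yaw, pitch, roll)."""
    scored = [(semantic_match_score(query_seg, seg), pose) for pose, seg in candidates]
    best_score, best_pose = max(scored, key=lambda s: s[0])
    return best_pose, best_score

# Toy usage with random label maps (3 material classes) and arbitrary poses
rng = np.random.default_rng(0)
query = rng.integers(0, 3, (120, 160))
cands = [((22.30 + i * 1e-4, 114.17, 5.0, 10.0 * i, 0.0, 0.0),
          rng.integers(0, 3, (120, 160))) for i in range(5)]
print(localize(query, cands))
```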

17 pages, 2508 KiB  
Article
DOE-SLAM: Dynamic Object Enhanced Visual SLAM
by Xiao Hu and Jochen Lang
Sensors 2021, 21(9), 3091; https://doi.org/10.3390/s21093091 - 29 Apr 2021
Cited by 7 | Viewed by 4004
Abstract
In this paper, we formulate a novel strategy to adapt monocular-vision-based simultaneous localization and mapping (vSLAM) to dynamic environments. When enough background features can be captured, our system not only tracks the camera trajectory based on static background features but also estimates the foreground object motion from object features. When a moving object obstructs too many background features for successful camera tracking from the background alone, our system can exploit the features from the object and the prediction of the object motion to estimate the camera pose. We use various synthetic and real-world test scenarios, as well as the well-known TUM sequences, to evaluate the capabilities of our system. The experiments show that we achieve higher pose-estimation accuracy and robustness than state-of-the-art monocular vSLAM systems.
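
A conceptual sketch of the fallback logic the abstract describes, under assumed conventions (4×4 camera-to-frame poses, a hypothetical solve_pnp callback, an arbitrary feature-count threshold); it is not the authors' implementation.

```python
# Sketch: track the camera from background features when enough are visible;
# otherwise use object features plus the predicted object pose.
import numpy as np

MIN_BACKGROUND_FEATURES = 50   # assumed minimum for reliable background tracking

def estimate_camera_pose(bg_matches, obj_matches, predicted_object_pose, solve_pnp):
    """
    bg_matches / obj_matches: 2D-3D correspondences on background / object landmarks.
    predicted_object_pose: 4x4 pose of the object in the world, predicted from
        its motion in previous frames.
    solve_pnp(matches): returns the 4x4 camera pose in the frame of the landmarks.
    """
    if len(bg_matches) >= MIN_BACKGROUND_FEATURES:
        # Normal case: the static background constrains the camera directly.
        return solve_pnp(bg_matches)
    # Degenerate case: the object hides most of the background. The camera pose
    # relative to the object is still observable; chaining it with the predicted
    # object pose recovers the camera pose in the world frame.
    camera_in_object = solve_pnp(obj_matches)
    return predicted_object_pose @ camera_in_object
```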

22 pages, 11325 KiB  
Article
Weakly-Supervised Recommended Traversable Area Segmentation Using Automatically Labeled Images for Autonomous Driving in Pedestrian Environment with No Edges
by Yuya Onozuka, Ryosuke Matsumi and Motoki Shino
Sensors 2021, 21(2), 437; https://doi.org/10.3390/s21020437 - 09 Jan 2021
Cited by 6 | Viewed by 2878
Abstract
Detection of traversable areas is essential to the navigation of autonomous personal mobility systems in unknown pedestrian environments. However, traffic rules may recommend or require driving in specified areas, such as sidewalks, in environments where roadways and sidewalks coexist. Therefore, it is necessary for such autonomous mobility systems to estimate the areas that are both mechanically traversable and recommended by traffic rules, and to navigate based on this estimation. In this paper, we propose a method for weakly supervised recommended-traversable-area segmentation in environments with no edges, using images automatically labeled from paths selected by humans. This approach is based on the idea that a human-selected driving path more accurately reflects both mechanical traversability and human understanding of traffic rules and visual information. In addition, we propose a data augmentation method and a loss-weighting method for detecting the appropriate recommended traversable area from a single human-selected path. Evaluation showed that the proposed learning methods are effective for recommended-traversable-area detection, and that weakly supervised semantic segmentation using human-selected path information is useful in environments with no edges.
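
The automatic labeling idea can be sketched as follows: pixels within a band around the projected human-selected path are marked as recommended traversable, while all other pixels are left as an ignore class so that only the path band supervises training. The band radius, label values, and the path_to_label helper are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of weak labels generated from a human-selected path projected into
# the image (assumed band radius and ignore-label convention).
import numpy as np

IGNORE, TRAVERSABLE = 255, 1

def path_to_label(image_shape, path_pixels, band_radius=8):
    """path_pixels: (N, 2) array of (row, col) points of the projected path."""
    h, w = image_shape
    label = np.full((h, w), IGNORE, dtype=np.uint8)
    rr, cc = np.mgrid[0:h, 0:w]
    for r, c in path_pixels:
        # Mark a disc of pixels around each path point as recommended traversable
        mask = (rr - r) ** 2 + (cc - c) ** 2 <= band_radius ** 2
        label[mask] = TRAVERSABLE
    return label

label = path_to_label((240, 320), np.array([[200, 40], [190, 80], [180, 120]]))
print((label == TRAVERSABLE).sum(), "pixels auto-labeled as recommended traversable")
```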