Sensing and Processing for 3D Computer Vision: 3rd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 December 2024) | Viewed by 3033

Special Issue Editor


Prof. Dr. Denis Laurendeau
Guest Editor
Computer Vision and Systems Laboratory, Laval University, 1665 Rue de l’Université, Université Laval, Quebec City, QC G1V 0A6, Canada
Interests: 3D sensors; active vision; 3D image processing and understanding; modelling; geometry; 3D sensing and modelling for augmented and virtual reality; applications of 3D computer vision

Special Issue Information

Dear Colleagues,

This Special Issue welcomes research articles that address 3D sensing technology as well as the use of advanced 3D sensors in computer vision. Original contributions on novel active 3D sensors, stereo reconstruction approaches, and sensor calibration techniques are solicited. Articles on 3D point cloud/mesh processing, geometric modeling, and shape representation and recognition are also of interest, as are articles on the application of 3D sensing and modeling to metrology, industrial inspection and quality control, augmented/virtual reality, heritage preservation, the arts, and other fields.

Prof. Dr. Denis Laurendeau
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • active/passive 3D sensors
  • sensor calibration
  • stereo reconstruction
  • point cloud/mesh processing
  • geometry
  • modeling and representation
  • shape analysis and recognition
  • applications of 3D vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)

Research

24 pages, 9829 KiB  
Article
Multi-Camera Calibration Using Far-Range Dual-LED Wand and Near-Range Chessboard Fused in Bundle Adjustment
by Prayook Jatesiktat, Guan Ming Lim and Wei Tech Ang
Sensors 2024, 24(23), 7416; https://doi.org/10.3390/s24237416 - 21 Nov 2024
Cited by 1 | Viewed by 1035
Abstract
This paper presents a calibration approach for multiple synchronized global-shutter RGB cameras surrounding a large capture volume for 3D applications. The approach uses an active wand with two LED-embedded markers, waved manually within the target capture volume. Data from the waving wand are combined with chessboard images taken at close range during each camera’s intrinsic calibration, and the camera parameters are optimized via our proposed bundle adjustment method. The additional constraints from the chessboard are introduced to overcome an overfitting issue of wand-based calibration, discovered by benchmarking 3D triangulation accuracy against a ground-truth trajectory on an independent recording rather than on the recording used for calibration itself. Addressing this overfitting issue in bundle adjustment leads to significant improvements in both 3D accuracy and result consistency. As a by-product of this development, a new benchmarking workflow and a calibration dataset that reflect realistic 3D accuracy are made publicly available to allow fair comparisons of calibration methods in the future. Additionally, our experiment highlights a significant benefit of a ray distance-based (RDB) triangulation formula over the popular direct linear transformation (DLT) method.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision: 3rd Edition)
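
The abstract's closing comparison of RDB and DLT triangulation can be made concrete. Below is a minimal sketch (our illustration, not the authors' code) of ray distance-based triangulation: it solves in closed form for the 3D point that minimizes the sum of squared perpendicular distances to the back-projected camera rays. The function name and the toy two-camera setup are ours.

```python
# Hedged sketch of ray distance-based (RDB) triangulation: the 3D point
# minimizing the sum of squared perpendicular distances to the camera rays.
# Each camera contributes its center c_i and a unit ray direction d_i
# through the observed pixel. Solving
#   sum_i (I - d_i d_i^T) X = sum_i (I - d_i d_i^T) c_i
# gives the least-squares point.
import numpy as np

def triangulate_rdb(centers, directions):
    """centers: (N, 3) camera centers; directions: (N, 3) unit ray dirs."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Toy usage: two cameras at (+/-1, 0, 0) looking at the point (0, 0, 5).
centers = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dirs = np.array([[1.0, 0.0, 5.0], [-1.0, 0.0, 5.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(triangulate_rdb(centers, dirs))  # ~[0, 0, 5]
```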

16 pages, 21787 KiB  
Article
Expanding Sparse Radar Depth Based on Joint Bilateral Filter for Radar-Guided Monocular Depth Estimation
by Chen-Chou Lo and Patrick Vandewalle
Sensors 2024, 24(6), 1864; https://doi.org/10.3390/s24061864 - 14 Mar 2024
Viewed by 1351
Abstract
Radar data can provide additional depth information for monocular depth estimation. Radar offers a cost-effective solution and is robust in various weather conditions, particularly compared with lidar. Given the sparsity and limited vertical field of view of radar signals, existing methods either extend radar points vertically or train a preprocessing neural network to densify sparse radar points under lidar supervision. In this work, we present a novel radar expansion technique inspired by the joint bilateral filter, tailored for radar-guided monocular depth estimation. Our approach is motivated by the synergy of the spatial and range kernels within the joint bilateral filter. Unlike traditional methods that assign a weighted average of nearby pixels to the current pixel, we expand sparse radar points by calculating a confidence score based on the values of the spatial and range kernels. Additionally, we propose a range-aware window size for radar expansion instead of a fixed window size in the image plane. Our method effectively increases the number of radar points from an average of 39 points in a raw radar frame to an average of 100K points. Notably, the expanded radar exhibits fewer intrinsic errors compared with raw radar and previous methodologies. To validate our approach, we assess the proposed depth estimation model on the nuScenes dataset. Comparative evaluations with existing radar-guided depth estimation models demonstrate its state-of-the-art performance.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision: 3rd Edition)
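
The expansion scheme described in the abstract can be sketched in a few lines. The following is a hedged illustration (ours, not the paper's code) of joint-bilateral-filter-style radar expansion: a radar depth is propagated to a nearby empty pixel only when the product of a spatial kernel and a range kernel computed over a guidance image exceeds a confidence threshold, and the window radius shrinks with depth to approximate a range-aware window. All parameter names and default values (sigma_s, sigma_r, tau, base_win, focal_scale) are illustrative assumptions.

```python
# Hedged sketch: expand a sparse radar depth map using spatial and range
# kernels in the style of a joint bilateral filter. Instead of a weighted
# average, a depth is copied to a neighbor when the kernel product
# (confidence score) exceeds a threshold tau.
import numpy as np

def expand_radar(depth, guide, sigma_s=8.0, sigma_r=0.1,
                 tau=0.3, base_win=24, focal_scale=80.0):
    """depth: (H, W) sparse radar depths (0 = no measurement);
    guide: (H, W) grayscale guidance image in [0, 1]."""
    H, W = depth.shape
    out = depth.copy()
    for v, u in zip(*np.nonzero(depth)):
        z = depth[v, u]
        # Range-aware window: farther points cover fewer image pixels.
        r = max(2, int(min(base_win, focal_scale / z)))
        for dv in range(-r, r + 1):
            for du in range(-r, r + 1):
                y, x = v + dv, u + du
                if not (0 <= y < H and 0 <= x < W) or out[y, x] > 0:
                    continue
                spatial = np.exp(-(dv * dv + du * du) / (2 * sigma_s ** 2))
                rng = np.exp(-(guide[y, x] - guide[v, u]) ** 2
                             / (2 * sigma_r ** 2))
                if spatial * rng > tau:  # confidence score
                    out[y, x] = z        # propagate the radar depth
    return out
```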
