Special Issue "3D and Multimodal Image Acquisition Methods"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: 31 October 2020.

Special Issue Editor

Prof. Dr. Gunther Notni
Guest Editor
Group for Quality Assurance and Image Processing, Technical University Ilmenau, 98684 Ilmenau, Germany
Interests: 3D sensors; spectral and multimodal image acquisition; high-resolution surface and shape measurement methods; projection systems; optics; THz systems; machine learning

Special Issue Information

Dear Colleagues,

3D and multimodal imaging comprises the simultaneous acquisition of a scene with a 3D sensor system and with cameras operating in different spectral ranges, yielding a variety of image modalities. Multimodal imaging refers to the simultaneous production of signals for more than one imaging technique. As a result, the object is described by its spatial 3D coordinates (point clouds), its temporal behavior, and additional image modalities (for example, thermal, multispectral, and polarization images). This type of imaging is gaining importance in a wide variety of applications: in medicine (e.g., cancer detection and surgical robotics), in medical diagnostics (e.g., contactless heart-rate monitoring), in biomedical applications, in precision agriculture (e.g., recognition of fruits and their automatic harvesting), in autonomous systems (fast object recognition), and in forestry, robotics, optical sorting, and the food industry, to name but a few.

This is supported by the dynamic development of 3D sensors as well as of cameras in different spectral ranges. In addition to camera systems in the visible and near-infrared range, these include, in particular, short-wave infrared (SWIR), thermal (FIR), multispectral, and polarization cameras.

The rapid increase in the number of application areas requires the development of real-time 3D and multimodal image acquisition techniques, which enable direct process feedback or the control of autonomous systems. Besides the actual system development, this includes, for example, multi-camera arrangements, multi-aperture systems, and new methods of system calibration and data evaluation that enable a pixel-accurate superimposition of the image information. Furthermore, the evaluation of multimodal image data streams (e.g., by means of CNNs) and the derivation of novel segmentation methods for adapted image data reduction play an important role.
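As a simple illustration of the pixel-accurate superimposition mentioned above, one modality can, in the most basic case, be warped into the reference camera's pixel grid. The sketch below is hypothetical: it assumes a precomputed 3×3 homography `H` between the two views, uses nearest-neighbour sampling, and ignores lens distortion and parallax.

```python
import numpy as np

def warp_to_reference(src, H, out_shape):
    """Warp a source-modality image into the reference camera's pixel
    grid by inverse mapping with homography H (nearest neighbour)."""
    h, w = out_shape
    H_inv = np.linalg.inv(H)
    # Homogeneous coordinates (x, y, 1) of every output pixel, shape 3 x N.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Map each output pixel back into the source image and dehomogenize.
    sp = H_inv @ pts
    sx = np.rint(sp[0] / sp[2]).astype(int)
    sy = np.rint(sp[1] / sp[2]).astype(int)
    # Sample only where the back-projected pixel falls inside the source.
    out = np.zeros(out_shape, dtype=src.dtype)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out.flat[np.flatnonzero(valid)] = src[sy[valid], sx[valid]]
    return out
```

A single homography is exact only for planar scenes or purely rotational camera offsets; for scenes with significant depth variation, a full calibration (intrinsics, extrinsics, distortion) and a depth-dependent reprojection via the 3D point cloud are required, which is precisely the calibration problem addressed in this Special Issue.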

We look forward to contributions presenting technical, methodological, and algorithmic approaches that may contribute to the future development of 3D and multimodal imaging techniques. Contributions are not limited to specific application areas.

Prof. Dr. Gunther Notni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, use the online submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords
  • real-time 3D sensors
  • multimodal imaging systems
  • multispectral cameras
  • polarization cameras
  • multi-aperture cameras
  • multimodal imaging systems for medicine, biomedical applications, human–machine interaction, agriculture, forestry, production, robotics, and more
  • calibration techniques for multimodal imaging systems
  • data analysis in multimodal imaging
  • deep learning/CNNs in multimodal imaging

Published Papers (1 paper)

Open Access Article
body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices
J. Imaging 2020, 6(9), 94; https://doi.org/10.3390/jimaging6090094 - 11 Sep 2020
Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstructions or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and unable to cope with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken on handheld devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step, which avoids the generation of spurious points that is usual in photogrammetric reconstruction. A group of 60 persons was filmed with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. Finally, as a gold standard, we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the resulting point cloud as compared with the LiDAR-based mesh, and of the anthropometric measurements as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to that of the LiDAR reconstruction.
(This article belongs to the Special Issue 3D and Multimodal Image Acquisition Methods)