
Technical Advances in 3D Reconstruction

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 August 2025 | Viewed by 5892

Special Issue Editor


Dr. Xi Zhao
Guest Editor
School of Computer Science, Xi'an Jiaotong University, Xi'an 710049, China
Interests: 3D reconstruction; point cloud analysis; 3D content generation; interaction analysis; augmented reality

Special Issue Information

Dear Colleagues,

The task of 3D reconstruction involves creating 3D content, or a representation of 3D content, from 2D images or other data sources. With the development of deep learning techniques, implicit representations such as Neural Radiance Fields (NeRF) have attracted considerable attention, and Gaussian splatting has become a popular new 3D representation. This Special Issue aims to present recent findings on the topic of 3D reconstruction and to offer a fresh outlook on reconstruction-related tasks.

Potential topics include, but are not limited to, the following:

  • Point cloud reconstruction;
  • 3D scene completion;
  • 3D reconstruction from images or videos;
  • 3D room layout generation;
  • Garment reconstruction;
  • 3D human pose estimation;
  • 3D wireframe reconstruction;
  • 3D shape representations.

Dr. Xi Zhao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D reconstruction
  • 3D content generation
  • shape representation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)

Research

19 pages, 13301 KiB  
Article
Per-Pixel Manifold-Based Color Calibration Technique
by Stanisław Gardecki, Krzysztof Wegner, Tomasz Grajek and Krzysztof Klimaszewski
Appl. Sci. 2025, 15(6), 3128; https://doi.org/10.3390/app15063128 - 13 Mar 2025
Viewed by 281
Abstract
In this paper, we present a method for obtaining a manifold color correction transform for multiview images. The method can be applied in various scenarios: correcting the colors of stitched images, adjusting the colors of images obtained in different lighting conditions, and performing virtual view synthesis based on images taken by different cameras or in different conditions. The provided derivation allows us to use the method to correct regular RGB images. The solution is specified as a transform matrix that provides a pixel-specific color transformation for each pixel and is therefore more general than the methods described in the literature, which only provide the transformed images without explicitly providing the transform. By providing the transform for each pixel separately, we can introduce a smoothness constraint based on the transformation similarity for neighboring pixels, a feature that is not present in the available literature.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
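
The core idea of the paper, per-pixel 3x3 color transforms coupled by a smoothness term over neighboring pixels, can be sketched in a few lines. The NumPy sketch below is a minimal illustration under assumed inputs (paired, aligned source and target RGB images) and uses a simple fixed-point solver; the variable names and the solver are illustrative, not the authors' exact formulation.

    # Minimal sketch: per-pixel 3x3 color transforms T_p minimizing
    #   sum_p ||T_p s_p - d_p||^2 + lam * sum_{p~q} ||T_p - T_q||^2
    # via a Jacobi-style fixed-point iteration (illustrative only)
    import numpy as np

    def estimate_per_pixel_transforms(src, dst, lam=1.0, iters=200):
        """src, dst: (H, W, 3) float arrays of paired colors.
        Returns (H, W, 3, 3) per-pixel transforms."""
        H, W, _ = src.shape
        T = np.tile(np.eye(3), (H, W, 1, 1))       # start from the identity
        for _ in range(iters):
            # mean of the 4-neighbor transforms (wrap-around borders, for brevity)
            nb = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
                  + np.roll(T, 1, 1) + np.roll(T, -1, 1)) / 4.0
            s = src[..., :, None]                  # (H, W, 3, 1) column vectors
            d = dst[..., :, None]
            # per-pixel normal equations: T (s s^T + lam I) = d s^T + lam nb
            A = s @ s.transpose(0, 1, 3, 2) + lam * np.eye(3)
            rhs = d @ s.transpose(0, 1, 3, 2) + lam * nb
            T = rhs @ np.linalg.inv(A)
        return T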

22 pages, 6319 KiB  
Article
Sparse Indoor Camera Positioning with Fiducial Markers
by Pablo García-Ruiz, Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez and Rafael Medina-Carnicer
Appl. Sci. 2025, 15(4), 1855; https://doi.org/10.3390/app15041855 - 11 Feb 2025
Viewed by 527
Abstract
Accurately estimating the pose of large arrays of fixed indoor cameras presents a significant challenge in computer vision, especially since traditional methods predominantly rely on overlapping camera views. Existing approaches for positioning non-overlapping cameras are scarce and generally limited to simplistic scenarios dependent on specific environmental features, thereby leaving a significant gap in applications for large and complex settings. To bridge this gap, this paper introduces a novel methodology that effectively positions cameras with and without overlapping views in complex indoor scenarios. This approach leverages a subset of fiducial markers printed on regular paper, strategically placed and relocated across the environment and recorded by an additional mobile camera to progressively establish connections among all fixed cameras without necessitating overlapping views. Our method employs a comprehensive optimization process that minimizes the reprojection errors of observed markers while applying physical constraints such as camera and marker coplanarity and the use of a set of control points. To validate our approach, we have developed novel datasets specifically designed to assess the performance of our system in positioning cameras without overlapping fields of view. Demonstrating superior performance over existing techniques, our methodology establishes a new state-of-the-art for positioning cameras with and without overlapping views. This system not only expands the applicability of camera pose estimation technologies but also provides a practical solution for indoor settings without the need for overlapping views, supported by accessible resources, including code, datasets, and a tutorial to facilitate its deployment and adaptation.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
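
The paper's pipeline jointly optimizes many marker observations across fixed and mobile cameras; its single-camera building block, estimating one camera's pose from one printed fiducial marker, can be sketched with OpenCV's ArUco module as below. The intrinsics, marker size, and dictionary are placeholder assumptions, not values from the paper.

    # Minimal sketch: pose of one fixed camera from a single ArUco marker
    # (requires OpenCV >= 4.7 for cv2.aruco.ArucoDetector)
    import cv2
    import numpy as np

    MARKER_LEN = 0.15                      # marker side length, meters (assumed)
    K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # intrinsics (assumed)
    dist = np.zeros(5)                     # distortion coefficients (assumed)

    # marker corners in the marker's own frame, in ArUco corner order
    obj_pts = 0.5 * MARKER_LEN * np.array(
        [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float32)

    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(cv2.imread("fixed_camera_frame.png"))

    if ids is not None:
        # solvePnP gives the marker pose in the camera frame; inverting it
        # yields the camera pose in the marker (world) frame
        ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
        R, _ = cv2.Rodrigues(rvec)
        cam_center_in_marker = -R.T @ tvec

Chaining such per-marker poses as markers are relocated, and then refining all poses jointly by minimizing the total reprojection error, is what connects cameras that never share a field of view.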

25 pages, 8016 KiB  
Article
High-Fold 3D Gaussian Splatting Model Pruning Method Assisted by Opacity
by Shiyu Qiu, Chunlei Wu, Zhenghao Wan and Siyuan Tong
Appl. Sci. 2025, 15(3), 1535; https://doi.org/10.3390/app15031535 - 3 Feb 2025
Viewed by 1314
Abstract
Recent advancements in 3D scene representation have underscored the potential of Neural Radiance Fields (NeRFs) for producing high-fidelity renderings of complex scenes. However, NeRFs are hindered by the significant computational burden of volumetric rendering. To address this, 3D Gaussian Splatting (3DGS) has emerged as an efficient alternative, utilizing Gaussian-based representations and rasterization techniques to achieve faster rendering speeds without sacrificing image quality. Despite these advantages, the large number of Gaussian points and associated internal parameters result in high storage demands. To address this challenge, we propose a pruning strategy applied during the Gaussian densification and pruning phases. Our approach integrates learnable Gaussian masks with a contribution-based pruning mechanism, further enhanced by an opacity update strategy that facilitates the pruning process. This method effectively eliminates redundant Gaussian points and those with minimal contributions to scene construction. Additionally, during the Gaussian parameter compression phase, we employ a combination of teacher–student models and vector quantization to compress the spherical harmonic coefficients. Extensive experimental results demonstrate that our approach reduces the storage requirements of original 3D Gaussian models more than 30-fold, with only minor degradation in rendering quality.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
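
A simplified view of the pruning step, scoring each Gaussian by opacity and accumulated rendering contribution and keeping only the top fraction, is sketched below in PyTorch. The paper's learnable masks, opacity update schedule, and teacher–student compression are collapsed here into a single illustrative importance score.

    # Minimal sketch: contribution/opacity-assisted pruning of 3D Gaussians
    import torch

    def prune_gaussians(params, opacity, contribution, keep_ratio=0.3):
        """params: dict of per-Gaussian tensors (positions, scales, SH coeffs, ...).
        opacity, contribution: (N,) tensors. Keeps the top keep_ratio fraction."""
        importance = torch.sigmoid(opacity) * contribution    # combined score
        n_keep = max(1, int(keep_ratio * importance.numel()))
        keep = torch.topk(importance, n_keep).indices
        return {name: t[keep] for name, t in params.items()}, keep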

16 pages, 5987 KiB  
Article
From Single Shot to Structure: End-to-End Network-Based Deflectometry for Specular Free-Form Surface Reconstruction
by M. Hadi Sepanj, Saed Moradi, Amir Nazemi, Claire Preston, Anthony M. D. Lee and Paul Fieguth
Appl. Sci. 2024, 14(23), 10824; https://doi.org/10.3390/app142310824 - 22 Nov 2024
Viewed by 1540
Abstract
Deflectometry is a key component in the precise measurement of specular (mirrored) surfaces; however, traditional methods often lack an end-to-end approach that performs 3D reconstruction in a single shot with high accuracy and generalizes across different free-form surfaces. This paper introduces a novel deep neural network (DNN)-based approach for end-to-end 3D reconstruction of free-form specular surfaces using single-shot deflectometry. Our proposed network, VUDNet, innovatively combines discriminative and generative components to accurately interpret orthogonal fringe patterns and generate high-fidelity 3D surface reconstructions. By leveraging a hybrid architecture integrating a Variational Autoencoder (VAE) and a modified U-Net, VUDNet excels in both depth estimation and detail refinement, achieving superior performance in challenging environments. Extensive data simulation in Blender, which yielded a dataset that we will make available, ensures robust training and enables the network to generalize across diverse scenarios. Experimental results demonstrate the strong performance of VUDNet, setting a new standard for 3D surface reconstruction.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
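
The hybrid VAE / U-Net idea, a variational bottleneck for coarse depth combined with skip connections for detail, can be illustrated with a toy PyTorch module. The layer sizes, latent width, and skip layout below are assumptions made for the sketch, not VUDNet's actual architecture.

    # Toy sketch of a VAE / U-Net hybrid mapping a fringe image to a depth map
    import torch
    import torch.nn as nn

    class TinyVUDNet(nn.Module):
        def __init__(self, zdim=128):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU())   # H/2
            self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())  # H/4
            self.to_mu = nn.Conv2d(64, zdim, 1)
            self.to_logvar = nn.Conv2d(64, zdim, 1)
            self.dec2 = nn.Sequential(nn.ConvTranspose2d(zdim, 32, 4, 2, 1),
                                      nn.ReLU())                              # H/2
            self.dec1 = nn.ConvTranspose2d(64, 1, 4, 2, 1)                    # H, depth

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            mu, logvar = self.to_mu(e2), self.to_logvar(e2)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            d2 = self.dec2(z)
            depth = self.dec1(torch.cat([d2, e1], dim=1))   # U-Net-style skip
            return depth, mu, logvar

    # training would pair a depth-reconstruction loss with the usual KL term on (mu, logvar)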

21 pages, 7110 KiB  
Article
Pose Tracking and Object Reconstruction Based on Occlusion Relationships in Complex Environments
by Xi Zhao, Yuekun Zhang and Yaqing Zhou
Appl. Sci. 2024, 14(20), 9355; https://doi.org/10.3390/app14209355 - 14 Oct 2024
Viewed by 1476
Abstract
For the reconstruction of objects during hand–object interactions, accurate pose estimation is indispensable: improving the precision of pose estimation enhances the accuracy of the 3D reconstruction results. Pose-tracking techniques are no longer limited to individual objects, which has led to advances in reconstructing objects that interact with other objects. However, most methods struggle with incomplete target information in complex scenes and mutual interference between objects in the environment, leading to a decrease in pose estimation accuracy. We propose an improved algorithm built on the existing BundleSDF framework that enables more robust and accurate tracking by considering the occlusion relationships between objects. First, to detect changes in occlusion relationships, we segment the target and compute dual-layer masks. Second, a rough pose estimate is obtained through feature matching, and a keyframe pool, maintained on the basis of occlusion relationships, is introduced for pose optimization. Finally, the estimated results of historical frames are used to train an object neural field that assists the subsequent pose-tracking process. Experiments on the HO-3D dataset show that our method significantly improves the accuracy and robustness of object tracking under frequent interactions, providing new ideas for object pose-tracking tasks in complex scenes.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
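
The occlusion-aware keyframe-pool maintenance can be sketched as follows; the dual-layer masks are reduced here to a single occlusion ratio, and the threshold and pool size are illustrative assumptions rather than values from the paper.

    # Minimal sketch: keyframe pool maintained by occlusion relationships
    class KeyframePool:
        def __init__(self, max_size=10, occ_thresh=0.4):
            self.frames = []                 # (frame_id, pose, occlusion_ratio)
            self.max_size = max_size
            self.occ_thresh = occ_thresh

        @staticmethod
        def occlusion_ratio(object_mask, occluder_mask):
            """Fraction of the object's mask covered by occluders (boolean arrays)."""
            return (object_mask & occluder_mask).sum() / max(object_mask.sum(), 1)

        def maybe_add(self, frame_id, pose, object_mask, occluder_mask):
            occ = self.occlusion_ratio(object_mask, occluder_mask)
            if occ > self.occ_thresh:        # too occluded to be a reliable keyframe
                return False
            self.frames.append((frame_id, pose, occ))
            self.frames.sort(key=lambda f: f[2])   # least-occluded first
            del self.frames[self.max_size:]        # drop the most-occluded overflow
            return True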
