Search Results (758)

Search Parameters:
Keywords = metric reconstruction

17 pages, 1097 KiB  
Article
Mapping Perfusion and Predicting Success: Infrared Thermography-Guided Perforator Flaps for Lower Limb Defects
by Abdalah Abu-Baker, Andrada-Elena Ţigăran, Teodora Timofan, Daniela-Elena Ion, Daniela-Elena Gheoca-Mutu, Adelaida Avino, Cristina-Nicoleta Marina, Adrian Daniel Tulin, Laura Raducu and Radu-Cristian Jecan
Medicina 2025, 61(8), 1410; https://doi.org/10.3390/medicina61081410 - 3 Aug 2025
Abstract
Background and Objectives: Lower limb defects often present significant reconstructive challenges due to limited soft tissue availability and exposure of critical structures. Perforator-based flaps offer reliable solutions, with minimal donor site morbidity. This study aimed to evaluate the efficacy of infrared thermography (IRT) in preoperative planning and postoperative monitoring of perforator-based flaps, assessing its accuracy in identifying perforators, predicting complications, and optimizing outcomes. Materials and Methods: A prospective observational study was conducted on 76 patients undergoing lower limb reconstruction with fascio-cutaneous perforator flaps between 2022 and 2024. Perforator mapping was performed concurrently with IRT and Doppler ultrasonography (D-US), with intraoperative confirmation. Flap design variables and systemic parameters were recorded. Postoperative monitoring employed thermal imaging on days 1 and 7. Outcomes were correlated with thermal, anatomical, and systemic factors using statistical analyses, including t-tests and Pearson correlation. Results: IRT showed high sensitivity (97.4%) and positive predictive value (96.8%) for perforator detection. A total of nine minor complications occurred, predominantly in patients with diabetes mellitus and/or elevated glycemia (p = 0.05). Larger flap-to-defect ratios (A/C and B/C) correlated with increased complications in propeller flaps, while smaller ratios posed risks for V-Y and Keystone flaps. Thermal analysis indicated significantly lower flap temperatures and greater temperature gradients in flaps with complications by postoperative day 7 (p < 0.05). CRP levels correlated with glycemia and white blood cell counts, highlighting systemic inflammation’s impact on outcomes. Conclusions: IRT proves to be a reliable, non-invasive method for perforator localization and flap monitoring, enhancing surgical planning and early complication detection. 
Combined with D-US, it improves perforator selection and perfusion assessment. Thermographic parameters, systemic factors, and flap design metrics collectively predict flap viability. Integration of IRT into surgical workflows offers a cost-effective tool for optimizing reconstructive outcomes in lower limb surgery. Full article
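The sensitivity and positive predictive value reported above follow directly from confusion-matrix counts; a minimal sketch (the counts below are hypothetical, chosen only so the ratios land near the reported 97.4% and 96.8%):

```python
def sensitivity(tp, fn):
    # Fraction of true perforators that IRT detected
    return tp / (tp + fn)

def ppv(tp, fp):
    # Fraction of IRT detections confirmed intraoperatively
    return tp / (tp + fp)

# Hypothetical counts, for illustration only
tp, fp, fn = 149, 5, 4
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 97.4%
print(f"PPV         = {ppv(tp, fp):.1%}")          # 96.8%
```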

14 pages, 3219 KiB  
Article
Research on the Branch Road Traffic Flow Estimation and Main Road Traffic Flow Monitoring Optimization Problem
by Bingxian Wang and Sunxiang Zhu
Computation 2025, 13(8), 183; https://doi.org/10.3390/computation13080183 - 1 Aug 2025
Abstract
Main roads are usually equipped with traffic flow monitoring devices in the road network to record the traffic flow data of the main roads in real time. Three complex scenarios, i.e., Y-junctions, multi-lane merging, and signalized intersections, are considered in this paper by developing a novel modeling system that leverages only historical main-road data to reconstruct branch-road volumes and identify pivotal time points where instantaneous observations enable robust inference of period-aggregate traffic volumes. Four mathematical models (I–IV) are built using the data given in the appendix, with performance quantified via error metrics (RMSE, MAE, MAPE) and stability indices (perturbation sensitivity index, structure similarity score). Finally, significant traffic flow change points are identified using the PELT algorithm. Full article
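The error metrics named in this abstract (RMSE, MAE, MAPE) are standard and easy to state precisely; a small self-contained sketch, not taken from the paper itself:

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large deviations quadratically
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the deviation
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean absolute percentage error (undefined when a true value is zero)
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy hourly volumes (vehicles/h), purely illustrative
observed, estimated = [100, 200], [110, 190]
print(rmse(observed, estimated), mae(observed, estimated), mape(observed, estimated))
```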

21 pages, 97817 KiB  
Article
Compression of 3D Optical Encryption Using Singular Value Decomposition
by Kyungtae Park, Min-Chul Lee and Myungjin Cho
Sensors 2025, 25(15), 4742; https://doi.org/10.3390/s25154742 - 1 Aug 2025
Abstract
In this paper, we propose a compression method for optical encryption using singular value decomposition (SVD). Double random phase encryption (DRPE), which employs two distinct random phase masks, is adopted as the optical encryption technique. Since the encrypted data in DRPE have the same size as the input data and consist of complex values, a compression technique is required to improve data efficiency. To address this issue, we introduce SVD as a compression method. SVD decomposes any matrix into three simpler factors: a unitary matrix, a rectangular diagonal matrix of singular values, and a second unitary matrix. By leveraging this property, the encrypted data generated by DRPE can be effectively compressed. However, this compression may lead to some loss of information in the decrypted data. To mitigate this loss, we employ volumetric computational reconstruction based on integral imaging. As a result, the proposed method enhances the visual quality, compression ratio, and security of DRPE simultaneously. To validate the effectiveness of the proposed method, we conduct both computer simulations and optical experiments. The performance is evaluated quantitatively using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and peak sidelobe ratio (PSR) as evaluation metrics. Full article
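The rank-truncation idea behind SVD compression can be sketched in a few lines of NumPy; this is a generic illustration of the decomposition property the abstract relies on, not the authors' DRPE pipeline:

```python
import numpy as np

def svd_compress(matrix, k):
    """Rank-k approximation: keep only the k largest singular values.

    Storage drops from m*n values to k*(m + n + 1), which is the
    kind of saving the abstract exploits for DRPE-encrypted data.
    """
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

# A rank-1 matrix is reconstructed exactly from a single singular value
a = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
assert np.allclose(svd_compress(a, 1), a)
```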

17 pages, 920 KiB  
Article
Enhancing Early GI Disease Detection with Spectral Visualization and Deep Learning
by Tsung-Jung Tsai, Kun-Hua Lee, Chu-Kuang Chou, Riya Karmakar, Arvind Mukundan, Tsung-Hsien Chen, Devansh Gupta, Gargi Ghosh, Tao-Yuan Liu and Hsiang-Chen Wang
Bioengineering 2025, 12(8), 828; https://doi.org/10.3390/bioengineering12080828 - 30 Jul 2025
Abstract
Timely and accurate diagnosis of gastrointestinal diseases (GIDs) remains a critical bottleneck in clinical endoscopy, particularly due to the limited contrast and sensitivity of conventional white light imaging (WLI) in detecting early-stage mucosal abnormalities. To overcome this, this research presents the Spectrum Aided Vision Enhancer (SAVE), an innovative, software-driven framework that transforms standard WLI into high-fidelity hyperspectral imaging (HSI) and simulated narrow-band imaging (NBI) without any hardware modification. SAVE leverages advanced spectral reconstruction techniques, including Macbeth Color Checker-based calibration, principal component analysis (PCA), and multivariate polynomial regression, achieving a root mean square error (RMSE) of 0.056 and a structural similarity index (SSIM) exceeding 90%. Deep learning models (ResNet-50, ResNet-101, EfficientNet-B2, EfficientNet-B5, and EfficientNetV2-B0) were trained and validated on the Kvasir v2 dataset (n = 6490) to assess diagnostic performance across six key GI conditions. Results demonstrated that SAVE-enhanced imagery consistently outperformed raw WLI across precision, recall, and F1-score metrics, with EfficientNet-B2 and EfficientNetV2-B0 achieving the highest classification accuracy. Notably, this performance gain was achieved without the need for specialized imaging hardware. These findings highlight SAVE as a transformative solution for augmenting GI diagnostics, with the potential to significantly improve early detection, streamline clinical workflows, and broaden access to advanced imaging, especially in resource-constrained settings. Full article
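The multivariate polynomial regression step (mapping an RGB patch to a spectrum, calibrated on a 24-patch Macbeth chart) can be sketched as below; the quadratic feature basis and band count are assumptions for illustration, not the authors' exact design:

```python
import numpy as np

def poly_features(rgb):
    # Second-order polynomial basis over (R, G, B) -- an assumed design
    r, g, b = rgb
    return np.array([1.0, r, g, b, r * g, r * b, g * b, r * r, g * g, b * b])

def fit_rgb_to_spectrum(rgb_patches, spectra):
    # Least-squares fit from polynomial RGB features to per-band reflectance
    x = np.stack([poly_features(p) for p in rgb_patches])
    coeffs, *_ = np.linalg.lstsq(x, spectra, rcond=None)
    return coeffs

def predict_spectrum(rgb, coeffs):
    return poly_features(rgb) @ coeffs
```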

19 pages, 7161 KiB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications, such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information, from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. To improve feature extraction, the core innovation is a feature extraction and enhancement module with dynamic snake convolution that dynamically adjusts the convolution kernel’s shape and position to better fit the image’s geometric structures, significantly improving feature extraction. To optimize the network’s structure, DSCNN employs an enhanced residual network framework that utilizes parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure. This multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. In summary, the proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., it achieved optimal PSNR and SSIM results on the Set5 ×4 dataset). Full article
(This article belongs to the Special Issue Structural Networks for Image Application)
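PSNR, one of the two metrics the DSCNN results are reported in, is simple enough to define inline; a generic sketch (SSIM is considerably more involved and omitted here):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```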

21 pages, 711 KiB  
Systematic Review
Recent Developments in Image-Based 3D Reconstruction Using Deep Learning: Methodologies and Applications
by Diana-Carmen Rodríguez-Lira, Diana-Margarita Córdova-Esparza, Juan Terven, Julio-Alejandro Romero-González, José Manuel Alvarez-Alvarado, José-Joel González-Barbosa and Alfonso Ramírez-Pedraza
Electronics 2025, 14(15), 3032; https://doi.org/10.3390/electronics14153032 - 30 Jul 2025
Abstract
Three-dimensional (3D) reconstruction from images has significantly advanced due to recent developments in deep learning, yet methodological variations and diverse application contexts pose ongoing challenges. This systematic review examines the state-of-the-art deep learning techniques employed for image-based 3D reconstruction from 2019 to 2025. Through an extensive analysis of peer-reviewed studies, predominant methodologies, performance metrics, sensor types, and application domains are identified and assessed. Results indicate multi-view stereo and monocular depth estimation as prevailing methods, while hybrid architectures integrating classical and deep learning techniques demonstrate enhanced performance, especially in complex scenarios. Critical challenges remain, particularly in handling occlusions, low-texture areas, and varying lighting conditions, highlighting the importance of developing robust, adaptable models. Principal conclusions highlight the efficacy of integrated quantitative and qualitative evaluations, the advantages of hybrid methods, and the pressing need for computationally efficient and generalizable solutions suitable for real-world applications. Full article
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)

20 pages, 2776 KiB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning—particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS)—as state-of-the-art techniques in the domain. The study advocates for replacing point cloud data in heritage building information modeling (HBIM) workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins—specifically, Romanesque–Mudéjar churches—to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The study applied GS with Mip-splatting, which proved superior in noise reduction, and extracted meshes via surface-aligned Gaussian splatting for efficient 3D reconstruction. This photo-to-mesh pipeline signifies a viable step towards HBIM. Full article

28 pages, 3794 KiB  
Article
A Robust System for Super-Resolution Imaging in Remote Sensing via Attention-Based Residual Learning
by Rogelio Reyes-Reyes, Yeredith G. Mora-Martinez, Beatriz P. Garcia-Salgado, Volodymyr Ponomaryov, Jose A. Almaraz-Damian, Clara Cruz-Ramos and Sergiy Sadovnychiy
Mathematics 2025, 13(15), 2400; https://doi.org/10.3390/math13152400 - 25 Jul 2025
Abstract
Deep learning-based super-resolution (SR) frameworks are widely used in remote sensing applications. However, existing SR models still face limitations, particularly in recovering contours, fine features, and textures, as well as in effectively integrating channel information. To address these challenges, this study introduces a novel residual model named OARN (Optimized Attention Residual Network), specifically designed to enhance the visual quality of low-resolution images. The network operates on the Y channel of the YCbCr color space and integrates LKA (Large Kernel Attention) and OCM (Optimized Convolutional Module) blocks. These components can restore large-scale spatial relationships and refine textures and contours, improving feature reconstruction without significantly increasing computational complexity. The performance of OARN was evaluated using satellite images from WorldView-2, GaoFen-2, and Microsoft Virtual Earth. Evaluation was conducted using objective quality metrics, such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Edge Preservation Index (EPI), and Learned Perceptual Image Patch Similarity (LPIPS), demonstrating superior results compared to state-of-the-art methods in both objective measurements and subjective visual perception. Moreover, OARN achieves this performance while maintaining computational efficiency, offering a balanced trade-off between processing time and reconstruction quality. Full article
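OARN operates on the Y (luma) channel of YCbCr; the conversion from RGB is a fixed linear combination. A sketch using the common BT.601 weights (an assumption — the abstract does not state which variant is used):

```python
def rgb_to_y(r, g, b):
    # BT.601 luma weights; assumed, since the YCbCr variant is unspecified
    return 0.299 * r + 0.587 * g + 0.114 * b

# White maps to full luma, black to zero
print(round(rgb_to_y(255, 255, 255), 6))  # 255.0
print(rgb_to_y(0, 0, 0))                  # 0.0
```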

27 pages, 30210 KiB  
Article
Research on a Rapid Three-Dimensional Compressor Flow Field Prediction Method Integrating U-Net and Physics-Informed Neural Networks
by Chen Wang and Hongbing Ma
Mathematics 2025, 13(15), 2396; https://doi.org/10.3390/math13152396 - 25 Jul 2025
Abstract
This paper presents a neural network model, PINN-AeroFlow-U, for reconstructing full-field aerodynamic quantities around three-dimensional compressor blades, including regions near the wall. This model is based on structured CFD training data and physics-informed loss functions and is proposed for direct 3D compressor flow prediction. It maps flow data from the physical domain to a uniform computational domain and employs a U-Net-based neural network capable of capturing the sharp local transitions induced by fluid acceleration near the blade leading edge, as well as learning flow features associated with internal boundaries (e.g., the wall boundary). The inputs to PINN-AeroFlow-U are the flow-field coordinate data from high-fidelity multi-geometry blade solutions, the 3D blade geometry, and the first-order metric coefficients obtained via mesh transformation. Its outputs include the pressure field, temperature field, and velocity vector field within the blade passage. To enhance physical interpretability, the network’s loss function incorporates both the Euler equations and gradient constraints. PINN-AeroFlow-U achieves prediction errors of 1.063% for the pressure field and 2.02% for the velocity field, demonstrating high accuracy. Full article
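The 1.063% and 2.02% figures are field-level relative errors; one common way to compute such a number is a relative L2 norm, shown here as an assumption about the metric rather than the paper's exact definition:

```python
import numpy as np

def relative_l2_error(pred, true):
    # ||pred - true|| / ||true||, expressed as a percentage
    return 100.0 * np.linalg.norm(pred - true) / np.linalg.norm(true)

# A field uniformly 1% off yields a 1% relative error
true = np.array([1.0, 2.0, 3.0])
print(relative_l2_error(true * 1.01, true))  # ~1.0
```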

54 pages, 1242 KiB  
Review
Optical Sensor-Based Approaches in Obesity Detection: A Literature Review of Gait Analysis, Pose Estimation, and Human Voxel Modeling
by Sabrine Dhaouadi, Mohamed Moncef Ben Khelifa, Ala Balti and Pascale Duché
Sensors 2025, 25(15), 4612; https://doi.org/10.3390/s25154612 - 25 Jul 2025
Abstract
Optical sensor technologies are reshaping obesity detection by enabling non-invasive, dynamic analysis of biomechanical and morphological biomarkers. This review synthesizes recent advances in three key areas: optical gait analysis, vision-based pose estimation, and depth-sensing voxel modeling. Gait analysis leverages optical sensor arrays and video systems to identify obesity-specific deviations, such as reduced stride length and asymmetric movement patterns. Pose estimation algorithms—including markerless frameworks like OpenPose and MediaPipe—track kinematic patterns indicative of postural imbalance and altered locomotor control. Human voxel modeling reconstructs 3D body composition metrics, such as waist–hip ratio, through infrared-depth sensing, offering precise, contactless anthropometry. Despite their potential, challenges persist in sensor robustness under uncontrolled environments, algorithmic biases in diverse populations, and scalability for widespread deployment in existing health workflows. Emerging solutions such as federated learning and edge computing aim to address these limitations by enabling multimodal data harmonization and portable, real-time analytics. Future priorities involve standardizing validation protocols to ensure reproducibility, optimizing cost-efficacy for scalable deployment, and integrating optical systems with wearable technologies for holistic health monitoring. By shifting obesity diagnostics from static metrics to dynamic, multidimensional profiling, optical sensing paves the way for scalable public health interventions and personalized care strategies. Full article

22 pages, 16961 KiB  
Article
Highly Accelerated Dual-Pose Medical Image Registration via Improved Differential Evolution
by Dibin Zhou, Fengyuan Xing, Wenhao Liu and Fuchang Liu
Sensors 2025, 25(15), 4604; https://doi.org/10.3390/s25154604 - 25 Jul 2025
Abstract
Medical image registration is an indispensable preprocessing step to align medical images to a common coordinate system before in-depth analysis. The registration precision is critical to the following analysis. In addition to representative image features, the initial pose settings and multiple poses in images will significantly affect the registration precision, which is largely neglected in state-of-the-art works. To address this, the paper proposes a dual-pose medical image registration algorithm based on improved differential evolution. More specifically, the proposed algorithm defines a composite similarity measurement based on contour points and utilizes this measurement to calculate the similarity between frontal–lateral positional DRR (Digitally Reconstructed Radiograph) images and X-ray images. In order to ensure the accuracy of the registration algorithm in particular dimensions, the algorithm implements a dual-pose registration strategy. A PDE (Phased Differential Evolution) algorithm is proposed for iterative optimization, enhancing the optimization algorithm’s ability to globally search in low-dimensional space, aiding in the discovery of global optimal solutions. Extensive experimental results demonstrate that the proposed algorithm provides more accurate similarity metrics compared to conventional registration algorithms; the dual-pose registration strategy largely reduces errors in specific dimensions, resulting in reductions of 67.04% and 71.84%, respectively, in rotation and translation errors. Additionally, the algorithm is more suitable for clinical applications due to its lower complexity. Full article
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)

20 pages, 6563 KiB  
Article
Determining the Structural Characteristics of Farmland Shelterbelts in a Desert Oasis Using LiDAR
by Xiaoxiao Jia, Huijie Xiao, Zhiming Xin, Junran Li and Guangpeng Fan
Forests 2025, 16(8), 1221; https://doi.org/10.3390/f16081221 - 24 Jul 2025
Abstract
The structural analysis of shelterbelts forms the foundation of their planning and management, yet the scientific and effective quantification of shelterbelt structures requires further investigation. This study developed an innovative heterogeneous analytical framework, integrating three key methodologies: the LeWoS algorithm for wood–leaf separation, TreeQSM for structural reconstruction, and 3D alpha-shape spatial quantification, using terrestrial laser scanning (TLS) technology. This framework was applied to three typical farmland shelterbelts in the Ulan Buh Desert oasis, enabling the first precise quantitative characterization of structural components during the leaf-on stage. The results showed the following: (1) The combined three-algorithm method achieved ≥90.774% relative accuracy in extracting structural parameters for all measured traits except leaf surface area. (2) Branch length, diameter, surface area, and volume decreased progressively from first- to fourth-order branches, while branch angles increased with ascending branch order. (3) The trunk, branch, and leaf components exhibited distinct vertical stratification. Trunk volume and surface area decreased linearly with height, while branch and leaf volumes and surface areas followed an inverted U-shaped distribution. (4) Horizontally, both surface area density (Scd) and volume density (Vcd) in each cube unit exhibited pronounced edge effects. Specifically, the Scd and Vcd were greatest between 0.33 and 0.60 times the shelterbelt’s height (H, i.e., mid-canopy). In contrast, the optical porosity (Op) reached its minimum between 0.43 H and 0.67 H, while the volumetric porosity (Vp) reached its minimum between 0.25 H and 0.50 H. (5) The proposed volumetric stratified porosity (Vsp) metric provides a scientific basis for regional farmland shelterbelt management strategies. 
This three-dimensional structural analytical framework enables precision silviculture, with particular relevance to strengthening ecological barrier efficacy in arid regions. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

26 pages, 15535 KiB  
Article
BCA-MVSNet: Integrating BIFPN and CA for Enhanced Detail Texture in Multi-View Stereo Reconstruction
by Ning Long, Zhengxu Duan, Xiao Hu and Mingju Chen
Electronics 2025, 14(15), 2958; https://doi.org/10.3390/electronics14152958 - 24 Jul 2025
Abstract
The 3D point cloud generated by MVSNet has good scene integrity but lacks sensitivity to details, causing holes and non-dense areas in flat and weak-texture regions. To address this problem and enhance the point cloud information of weak-texture areas, the BCA-MVSNet network is proposed in this paper. The network integrates the Bidirectional Feature Pyramid Network (BIFPN) into the feature processing of the MVSNet backbone network to accurately extract the features of weak-texture regions. In the feature map fusion stage, the Coordinate Attention (CA) mechanism is introduced into the 3D U-Net to obtain direction-aware position information along the channel dimension, improving detail feature extraction, optimizing the depth map, and improving depth accuracy. The experimental results show that BCA-MVSNet not only improves the accuracy of detail texture reconstruction but also effectively controls the computational overhead. On the DTU dataset, the Overall and Comp metrics of BCA-MVSNet are reduced by 10.2% and 2.6%, respectively; on the Tanks and Temples dataset, the Mean metrics of the eight scenarios are improved by 6.51%. Three scenes were shot with a binocular camera, and reconstruction quality in weak-texture areas was excellent when combining the camera parameters with the BCA-MVSNet model. Full article

25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions—particularly detail oversmoothing, structural ambiguity, and textural incoherence—this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; and (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) a spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling stable adversarial training while enforcing patch-level structural consistency; and (3) a dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Experiments on two benchmark datasets (Places2 and CelebA) demonstrate that the framework achieves more unified textures and structures, bringing the restored images closer to their original semantic content. Full article

25 pages, 2129 KiB  
Article
Zero-Shot 3D Reconstruction of Industrial Assets: A Completion-to-Reconstruction Framework Trained on Synthetic Data
by Yongjie Xu, Haihua Zhu and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 2949; https://doi.org/10.3390/electronics14152949 - 24 Jul 2025
Abstract
Creating high-fidelity digital twins (DTs) for Industry 4.0 applications is fundamentally reliant on the accurate 3D modeling of physical assets, a task complicated by the inherent imperfections of real-world point cloud data. This paper addresses the challenge of reconstructing accurate, watertight, and topologically sound 3D meshes from sparse, noisy, and incomplete point clouds acquired in complex industrial environments. We introduce a robust two-stage completion-to-reconstruction framework, C2R3D-Net, that systematically tackles this problem. The methodology first employs a pretrained, self-supervised point cloud completion network to infer a dense and structurally coherent geometric representation from degraded inputs. Subsequently, a novel adaptive surface reconstruction network generates the final high-fidelity mesh. This network features a hybrid encoder (FKAConv-LSA-DC), which integrates fixed-kernel and deformable convolutions with local self-attention to robustly capture both coarse geometry and fine details, and a boundary-aware multi-head interpolation decoder, which explicitly models sharp edges and thin structures to preserve geometric fidelity. Comprehensive experiments on the large-scale synthetic ShapeNet benchmark demonstrate state-of-the-art performance across all standard metrics. Crucially, we validate the framework’s strong zero-shot generalization capability by deploying the model—trained exclusively on synthetic data—to reconstruct complex assets from a custom-collected industrial dataset without any additional fine-tuning. The results confirm the method’s suitability as a robust and scalable approach for 3D asset modeling, a critical enabling step for creating high-fidelity DTs in demanding, unseen industrial settings. Full article
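On ShapeNet, completion and reconstruction quality is conventionally scored with Chamfer distance; since the abstract says only "standard metrics", the following generic implementation is an assumption about what was measured:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    # Pairwise Euclidean distances, then nearest-neighbor average both ways
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0]])
b = np.array([[1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))  # 0.0
print(chamfer_distance(a, b))  # 2.0
```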
