Search Results (300)

Search Parameters:
Keywords = reconstruction registration

30 pages, 6195 KiB  
Article
Digital Inspection Technology for Sheet Metal Parts Using 3D Point Clouds
by Jian Guo, Dingzhong Tan, Shizhe Guo, Zheng Chen and Rang Liu
Sensors 2025, 25(15), 4827; https://doi.org/10.3390/s25154827 - 6 Aug 2025
Abstract
To address the low efficiency of traditional sheet metal measurement, this paper proposes a digital inspection method for sheet metal parts based on 3D point clouds. The 3D point cloud data of sheet metal parts are collected using a 3D laser scanner, and the topological relationship is established using a K-dimensional tree (KD tree). The pass-through filtering method is adopted to denoise the point cloud data. To preserve the fine features of the parts, an improved voxel grid method is proposed for downsampling the point cloud data. Feature points are extracted via the intrinsic shape signatures (ISS) algorithm and described using the fast point feature histograms (FPFH) algorithm. Rough registration with the sample consensus initial alignment (SAC-IA) algorithm provides an initial position for fine registration, which is performed with an improved iterative closest point (ICP) algorithm to enhance registration accuracy and efficiency. The greedy projection triangulation algorithm, optimized with moving least squares (MLS) smoothing, ensures surface smoothness and geometric accuracy. The reconstructed 3D model is projected onto a 2D plane, and the actual dimensions of the parts are calculated from the pixel values of the sheet metal parts and the conversion scale. Experimental results show that the measurement error of this inspection system for three sheet metal workpieces ranges from 0.1416 mm to 0.2684 mm, meeting the accuracy requirement of ±0.3 mm. This method provides a reliable digital inspection solution for sheet metal parts.
(This article belongs to the Section Industrial Sensors)
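
As a rough illustration of the coarse-to-fine registration stage described above (feature-based SAC-IA-style alignment followed by ICP refinement), a minimal Open3D sketch is shown below. The file names, voxel size, and thresholds are illustrative assumptions, and the paper's improved voxel grid and improved ICP variants are not reproduced here.

```python
# Minimal coarse-to-fine registration sketch with Open3D (not the authors' code).
# "scan.ply" / "cad_model.ply" and all parameter values are illustrative assumptions.
import open3d as o3d

VOXEL = 2.0  # mm, assumed downsampling resolution

def preprocess(pcd, voxel):
    # Voxel-grid downsampling (the paper uses an improved variant to keep fine features)
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    # FPFH descriptors used for the coarse (SAC-IA-style) alignment
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

scan = o3d.io.read_point_cloud("scan.ply")        # measured sheet metal part
model = o3d.io.read_point_cloud("cad_model.ply")  # reference model
scan_d, scan_f = preprocess(scan, VOXEL)
model_d, model_f = preprocess(model, VOXEL)

# Coarse registration: RANSAC over FPFH correspondences (analogous to SAC-IA)
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    scan_d, model_d, scan_f, model_f, True, VOXEL * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(VOXEL * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine registration: point-to-plane ICP seeded with the coarse transform
fine = o3d.pipelines.registration.registration_icp(
    scan_d, model_d, VOXEL, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", fine.fitness, "\n", fine.transformation)
```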

20 pages, 8574 KiB  
Article
FPCR-Net: Front Point Cloud Regression Network for End-to-End SMPL Parameter Estimation
by Xihang Li, Xianguo Cheng, Fang Chen, Furui Shi and Ming Li
Sensors 2025, 25(15), 4808; https://doi.org/10.3390/s25154808 - 5 Aug 2025
Abstract
Due to the challenges in obtaining full-body point clouds and the time-consuming registration of parametric body models, we propose an end-to-end Front Point Cloud Parametric Body Regression Network (FPCR-Net). This network directly regresses the pose and shape parameters of a parametric body model from a single front point cloud of the human body. The network first predicts the label probabilities of corresponding body parts and the back point cloud from the input front point cloud. Then, it extracts equivariant features from both the front and predicted back point clouds, which are concatenated into global point cloud equivariant features. For pose prediction, part-level equivariant feature aggregation is performed using the predicted part label probabilities, and the rotations of each joint in the parametric body model are predicted via a self-attention layer. Shape prediction is achieved by applying mean pooling to part-invariant features and estimating the shape parameters using a self-attention mechanism. Experimental results, both qualitative and quantitative, demonstrate that our method achieves comparable accuracy in reconstructing body models from front point clouds when compared to implicit representation-based methods. Moreover, compared to previous regression-based methods, vertex and joint position errors are reduced by 43.2% and 45.0%, respectively, relative to the baseline. Full article
(This article belongs to the Section Intelligent Sensors)
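
For readers unfamiliar with part-level feature aggregation, the sketch below shows one plausible way to pool per-point features with predicted part-label probabilities and regress per-joint rotations through self-attention in PyTorch. The tensor shapes, layer sizes, and 6D rotation output are assumptions, not the authors' FPCR-Net implementation.

```python
# Hedged PyTorch sketch of a part-level aggregation + self-attention pose head;
# dimensions, layer sizes, and the 6D rotation parameterization are assumptions.
import torch
import torch.nn as nn

class PartAttentionPoseHead(nn.Module):
    def __init__(self, feat_dim=128, num_parts=24):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.rot_head = nn.Linear(feat_dim, 6)  # 6D rotation per joint (assumed)

    def forward(self, point_feats, part_probs):
        # point_feats: (B, N, C) per-point features from front + predicted back clouds
        # part_probs:  (B, N, K) predicted part label probabilities
        weights = part_probs / (part_probs.sum(dim=1, keepdim=True) + 1e-8)
        part_feats = torch.einsum("bnk,bnc->bkc", weights, point_feats)  # weighted pooling -> (B, K, C)
        part_feats, _ = self.attn(part_feats, part_feats, part_feats)    # self-attention over parts
        return self.rot_head(part_feats)                                 # (B, K, 6) joint rotations

# Usage with random tensors standing in for network features
head = PartAttentionPoseHead()
rots = head(torch.randn(2, 1024, 128), torch.softmax(torch.randn(2, 1024, 24), dim=-1))
print(rots.shape)  # torch.Size([2, 24, 6])
```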

22 pages, 16961 KiB  
Article
Highly Accelerated Dual-Pose Medical Image Registration via Improved Differential Evolution
by Dibin Zhou, Fengyuan Xing, Wenhao Liu and Fuchang Liu
Sensors 2025, 25(15), 4604; https://doi.org/10.3390/s25154604 - 25 Jul 2025
Viewed by 206
Abstract
Medical image registration is an indispensable preprocessing step to align medical images to a common coordinate system before in-depth analysis. The registration precision is critical to the following analysis. In addition to representative image features, the initial pose settings and multiple poses in images will significantly affect the registration precision, which is largely neglected in state-of-the-art works. To address this, the paper proposes a dual-pose medical image registration algorithm based on improved differential evolution. More specifically, the proposed algorithm defines a composite similarity measurement based on contour points and utilizes this measurement to calculate the similarity between frontal–lateral positional DRR (Digitally Reconstructed Radiograph) images and X-ray images. In order to ensure the accuracy of the registration algorithm in particular dimensions, the algorithm implements a dual-pose registration strategy. A PDE (Phased Differential Evolution) algorithm is proposed for iterative optimization, enhancing the optimization algorithm’s ability to globally search in low-dimensional space, aiding in the discovery of global optimal solutions. Extensive experimental results demonstrate that the proposed algorithm provides more accurate similarity metrics compared to conventional registration algorithms; the dual-pose registration strategy largely reduces errors in specific dimensions, resulting in reductions of 67.04% and 71.84%, respectively, in rotation and translation errors. Additionally, the algorithm is more suitable for clinical applications due to its lower complexity. Full article
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
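
The sketch below illustrates the general idea of searching a 6-DoF pose with differential evolution over a dual-view cost, using SciPy's standard `differential_evolution` rather than the paper's PDE variant; `render_drr` and `contour_similarity` are hypothetical placeholders for DRR generation and the composite contour-based measure.

```python
# Illustrative 6-DoF pose search with differential evolution over a frontal + lateral cost.
# render_drr() and contour_similarity() are hypothetical stand-ins, not the paper's code.
import numpy as np
from scipy.optimize import differential_evolution

def render_drr(pose, view_offset):
    """Hypothetical stand-in: produce an image whose content depends smoothly on the pose."""
    yy, xx = np.mgrid[0:128, 0:128]
    rx, ry, rz, tx, ty, tz = pose
    return (np.sin(0.05 * (xx + tx + 3 * rx + view_offset)) * np.cos(0.05 * (yy + ty + 3 * ry))
            + 0.02 * np.sin(0.05 * (xx - yy + tz + 3 * rz)))

def contour_similarity(a, b):
    """Hypothetical stand-in for the composite contour-point similarity measure."""
    return float(np.mean((a - b) ** 2))

true_pose = np.array([5.0, -3.0, 2.0, 10.0, -6.0, 4.0])
xray_frontal = render_drr(true_pose, 0.0)    # placeholder frontal X-ray
xray_lateral = render_drr(true_pose, 40.0)   # placeholder lateral X-ray

def cost(pose):
    # Dual-pose strategy: accumulate dissimilarity over frontal and lateral views
    return (contour_similarity(render_drr(pose, 0.0), xray_frontal) +
            contour_similarity(render_drr(pose, 40.0), xray_lateral))

bounds = [(-15, 15)] * 3 + [(-30, 30)] * 3   # rotations (deg) and translations (mm), assumed
result = differential_evolution(cost, bounds, maxiter=60, popsize=20, seed=0)
print("best cost:", result.fun, "best pose:", np.round(result.x, 2))
```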

21 pages, 2469 KiB  
Article
Robust Low-Overlap Point Cloud Registration via Displacement-Corrected Geometric Consistency for Enhanced 3D Sensing
by Xin Wang and Qingguang Li
Sensors 2025, 25(14), 4332; https://doi.org/10.3390/s25144332 - 11 Jul 2025
Viewed by 399
Abstract
Accurate alignment of 3D point clouds, achieved by ubiquitous sensors such as LiDAR and depth cameras, is critical for enhancing perception capabilities in robotics, autonomous navigation, and environmental reconstruction. However, low-overlap scenarios—common due to limited sensor field-of-view or occlusions—severely degrade registration robustness and sensing reliability. To address this challenge, this paper proposes a novel geometric consistency optimization and rectification deep learning network named GeoCORNet. By synergistically designing a geometric consistency enhancement module, a bidirectional cross-attention mechanism, a predictive displacement rectification strategy, and joint optimization of overlap loss with displacement loss, GeoCORNet significantly improves registration accuracy and robustness in complex scenarios. The Attentive Cross-Consistency module of GeoCORNet integrates distance and angular consistency constraints with bidirectional cross-attention to significantly suppress noise from non-overlapping regions while reinforcing geometric coherence in overlapping areas. The predictive displacement rectification strategy dynamically rectifies erroneous correspondences through predicted 3D displacements instead of discarding them, maximizing the utility of sparse sensor data. Furthermore, a novel displacement loss function was developed to effectively constrain the geometric distribution of corrected point-pairs. Experimental results demonstrate that our method outperformed existing approaches in the aspects of registration recall, rotation error, and algorithm robustness under low-overlap conditions. These advances establish a new paradigm for robust 3D sensing in real-world applications where partial sensor data is prevalent. Full article
(This article belongs to the Section Sensing and Imaging)
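
The distance-consistency constraint mentioned above can be illustrated in a few lines of NumPy: rigid motions preserve inter-point distances, so candidate correspondences whose pairwise distances disagree between the two clouds are likely wrong. The toy data and threshold below are assumptions, and the paper's learned attention and displacement rectification are not reproduced.

```python
# Minimal NumPy sketch of distance-consistency scoring for candidate correspondences.
import numpy as np

def distance_consistency(src_pts, tgt_pts, tau=0.05):
    """For correspondences (src_pts[i] <-> tgt_pts[i]), count how many other pairs
    preserve the inter-point distance within tau (rigid motions preserve distances)."""
    d_src = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(tgt_pts[:, None, :] - tgt_pts[None, :, :], axis=-1)
    consistent = np.abs(d_src - d_tgt) < tau
    np.fill_diagonal(consistent, False)
    return consistent.sum(axis=1)  # higher score = more geometrically consistent

rng = np.random.default_rng(0)
src = rng.random((100, 3))
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90-degree rotation
tgt = src @ R.T + np.array([0.5, -0.2, 0.1])
tgt[:20] = rng.random((20, 3))  # corrupt 20 correspondences to mimic non-overlap noise

scores = distance_consistency(src, tgt)
inliers = scores > np.median(scores)
print("kept", int(inliers.sum()), "of", len(src), "correspondences")
```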

17 pages, 740 KiB  
Systematic Review
Accompanying Titanium Meshes and Titanium-Reinforced Membranes with Collagen Membranes in Vertical Alveolar Ridge Augmentations: A Systematic Review
by Amir-Ali Yousefi-Koma, Reza Amid, Anahita Moscowchi, Hanieh Nokhbatolfoghahaei and Mahdi Kadkhodazadeh
J. Funct. Biomater. 2025, 16(7), 246; https://doi.org/10.3390/jfb16070246 - 4 Jul 2025
Viewed by 752
Abstract
Background: Vertical ridge augmentations (VRAs), including guided bone regeneration (GBR) techniques, have long been utilized in the reconstruction of deficient alveolar ridges. GBR-based VRA procedures are technique-sensitive and operator-dependent and often lead to complications detected during or after treatment. The main objective of this systematic review was to assess, across randomized and non-randomized human studies, the differences in regenerative outcomes, as well as the incidence rates of healing and surgical complications, of titanium meshes and/or titanium-reinforced membranes used with and without collagen membranes in GBR-based VRA. Methods: This systematic review was prepared and organized according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines and is registered at PROSPERO (Registration ID: CRD420251002615). Medline via PubMed, Scopus, Web of Science, Embase, and the Cochrane Library were searched for eligible studies up to 5 June 2025. Randomized and non-randomized human clinical studies, except for case reports, focused on applying titanium meshes or titanium-reinforced membranes with or without collagen membranes in GBR-based VRA were eligible. Results: A total of 119 patients from three human randomized clinical trials (RCTs) and one case series, reported across nine articles, were included. The addition of collagen membranes resulted in no significant differences in vertical bone gain or in surgical/healing complication rates. Conclusions: The addition of collagen membranes on top of titanium meshes and titanium-reinforced membranes might not be necessary in GBR-based VRA. Further human RCTs are required to reach a reliable conclusion.
(This article belongs to the Section Dental Biomaterials)

15 pages, 1590 KiB  
Article
A User-Friendly Software for Automated Knowledge-Based Virtual Surgical Planning in Mandibular Reconstruction
by Niclas Hagen, Christian Freudlsperger, Reinald Peter Kühle, Frederic Bouffleur, Petra Knaup, Jürgen Hoffmann and Urs Eisenmann
J. Clin. Med. 2025, 14(13), 4508; https://doi.org/10.3390/jcm14134508 - 25 Jun 2025
Viewed by 381
Abstract
Background/Objectives: Virtual surgical planning (VSP) has become the gold standard in mandibular reconstructions with autografts. While commercial services are available, efforts are under way to address their shortcomings, which may include inefficiency, inconvenience, and susceptibility to error. We developed a novel approach to calculate knowledge-based reconstruction proposals. The objective of our work is to implement software for automated VSP and to evaluate it on retrospective clinical cases. Methods: We developed software, which incorporates registration of a naturally shaped mandible, tumor resection planning, knowledge-based calculation of reconstruction proposals, and manual refinement of proposals. Three surgeons planned 21 retrospective clinical cases utilizing our software. They rated its usability via the System Usability Scale (SUS) and rated the quality of the proposed reconstructions and the final surgical plan via a five-point Likert scale (1: totally disagree–5: totally agree). Results: Surgeons rated the usability with an average SUS score of 76.7. Times for VSP were consistently less than 20 min. The surgeons agreed with the proposals with a mean value of 4.7 ± 0.4. In 15 cases they made minor refinements. Finally, they agreed with the final surgical plan in twenty cases (score of 5) and with minor discrepancies in one case (score of 4). Conclusions: We developed an easy-to-use software for the automated VSP of mandibular reconstructions with autografts. The results demonstrate that reconstruction proposals can be calculated efficiently based on standardized rules. Our system allows surgeons to autonomously derive, compare, and rapidly refine high-quality reconstruction proposals based on key decisions. Full article
(This article belongs to the Special Issue State-of-the-Art Innovations in Oral and Maxillofacial Surgery)
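
Since the usability evaluation relies on the System Usability Scale, a short sketch of the standard SUS scoring formula is included below; the sample responses are invented.

```python
# Hedged sketch of how a System Usability Scale (SUS) score is computed from the
# ten questionnaire items (1-5 Likert); the sample responses are invented.
def sus_score(responses):
    """responses: list of 10 integers in 1..5, alternating positive/negative items."""
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: score - 1
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: 5 - score
    return (odd + even) * 2.5                   # scale the sum to 0..100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # e.g. 82.5
```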

24 pages, 1151 KiB  
Article
EKNet: Graph Structure Feature Extraction and Registration for Collaborative 3D Reconstruction in Architectural Scenes
by Changyu Qian, Hanqiang Deng, Xiangrong Ni, Dong Wang, Bangqi Wei, Hao Chen and Jian Huang
Appl. Sci. 2025, 15(13), 7133; https://doi.org/10.3390/app15137133 - 25 Jun 2025
Viewed by 289
Abstract
Collaborative geometric reconstruction of building structures can significantly reduce communication consumption for data sharing, protect privacy, and provide support for large-scale robot application management. In recent years, geometric reconstruction of building structures has been partially studied, but there is a lack of alignment fusion studies for multi-UAV (Unmanned Aerial Vehicle)-reconstructed geometric structure models. The vertices and edges of geometric structure models are sparse, and existing methods face challenges such as low feature extraction efficiency and substantial data requirements when processing sparse graph structures after geometrization. To address these challenges, this paper proposes an efficient deep graph matching registration framework that effectively integrates interpretable feature extraction with network training. Specifically, we first extract multidimensional local properties of nodes by combining geometric features with complex network features. Next, we construct a lightweight graph neural network, named EKNet, to enhance feature representation capabilities, enabling improved performance in low-overlap registration scenarios. Finally, through feature matching and discrimination modules, we effectively eliminate incorrect pairings and enhance accuracy. Experiments demonstrate that the proposed method achieves a 27.28% improvement in registration speed compared to traditional GCN (Graph Convolutional Neural Networks) and an 80.66% increase in registration accuracy over the suboptimal method. The method exhibits strong robustness in registration for scenes with high noise and low overlap rates. Additionally, we construct a standardized geometric point cloud registration dataset. Full article
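
The per-node feature-extraction step (geometric plus complex-network properties) can be sketched with NetworkX as follows; the toy building graph and the particular statistics chosen are assumptions rather than the paper's exact feature set.

```python
# Illustrative per-node descriptors mixing geometry with complex-network statistics.
import numpy as np
import networkx as nx

# Toy geometric structure model: nodes with 3D coordinates, edges as structural links
coords = {0: (0, 0, 0), 1: (4, 0, 0), 2: (4, 3, 0), 3: (0, 3, 0), 4: (2, 1.5, 3)}
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)])

def node_descriptor(G, n):
    p = np.array(coords[n], dtype=float)
    edge_len = [np.linalg.norm(p - np.array(coords[m], dtype=float)) for m in G.neighbors(n)]
    return np.array([
        G.degree[n],          # network feature: degree
        nx.clustering(G, n),  # network feature: local clustering coefficient
        np.mean(edge_len),    # geometric feature: mean incident edge length
        np.std(edge_len),     # geometric feature: edge-length spread
    ])

features = np.stack([node_descriptor(G, n) for n in G.nodes])
print(features.shape)  # (5, 4) node-by-feature matrix fed to the matching network
```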

28 pages, 11793 KiB  
Article
Unsupervised Multimodal UAV Image Registration via Style Transfer and Cascade Network
by Xiaoye Bi, Rongkai Qie, Chengyang Tao, Zhaoxiang Zhang and Yuelei Xu
Remote Sens. 2025, 17(13), 2160; https://doi.org/10.3390/rs17132160 - 24 Jun 2025
Cited by 1 | Viewed by 409
Abstract
Cross-modal image registration for unmanned aerial vehicle (UAV) platforms presents significant challenges due to large-scale deformations, distinct imaging mechanisms, and pronounced modality discrepancies. This paper proposes a novel multi-scale cascaded registration network based on style transfer that achieves superior performance: up to 67% reduction in mean squared error (from 0.0106 to 0.0068), 9.27% enhancement in normalized cross-correlation, 26% improvement in local normalized cross-correlation, and 8% increase in mutual information compared to state-of-the-art methods. The architecture integrates a cross-modal style transfer network (CSTNet) that transforms visible images into pseudo-infrared representations to unify modality characteristics, and a multi-scale cascaded registration network (MCRNet) that performs progressive spatial alignment across multiple resolution scales using diffeomorphic deformation modeling to ensure smooth and invertible transformations. A self-supervised learning paradigm based on image reconstruction eliminates reliance on manually annotated data while maintaining registration accuracy through synthetic deformation generation. Extensive experiments on the LLVIP dataset demonstrate the method’s robustness under challenging conditions involving large-scale transformations, with ablation studies confirming that style transfer contributes 28% MSE improvement and diffeomorphic registration prevents 10.6% performance degradation. The proposed approach provides a robust solution for cross-modal image registration in dynamic UAV environments, offering significant implications for downstream applications such as target detection, tracking, and surveillance. Full article
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)
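
The similarity metrics reported above (MSE, normalized cross-correlation, mutual information) can be computed generically as in the sketch below; these are textbook implementations, not the paper's evaluation code, and the image pair is a random stand-in.

```python
# Generic implementations of MSE, NCC, and histogram-based mutual information.
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def ncc(a, b):
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() / (np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-12))

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ir = rng.random((128, 128))
pseudo_ir = 0.8 * ir + 0.2 * rng.random((128, 128))  # stand-in for a registered image pair
print(mse(ir, pseudo_ir), ncc(ir, pseudo_ir), mutual_information(ir, pseudo_ir))
```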

22 pages, 4943 KiB  
Article
Towards MR-Only Radiotherapy in Head and Neck: Generation of Synthetic CT from Zero-TE MRI Using Deep Learning
by Souha Aouadi, Mojtaba Barzegar, Alla Al-Sabahi, Tarraf Torfeh, Satheesh Paloor, Mohamed Riyas, Palmira Caparrotti, Rabih Hammoud and Noora Al-Hammadi
Information 2025, 16(6), 477; https://doi.org/10.3390/info16060477 - 6 Jun 2025
Viewed by 1189
Abstract
This study investigates the generation of synthetic CT (sCT) images from zero echo time (ZTE) MRI to support MR-only radiotherapy, which can reduce image registration errors and lower treatment planning costs. Since MRI lacks the electron density data required for accurate dose calculations, generating reliable sCTs is essential. ZTE MRI, offering high bone contrast, was used with two deep learning models: attention deep residual U-Net (ADR-Unet) and derived conditional generative adversarial network (cGAN). Data from 17 head and neck cancer patients were used to train and evaluate the models. ADR-Unet was enhanced with deep residual blocks and attention mechanisms to improve learning and reconstruction quality. Both models were implemented in-house and compared to standard U-Net and Unet++ architectures using image quality metrics, visual inspection, and dosimetric analysis. Volumetric modulated arc therapy (VMAT) planning was performed on both planning CT and generated sCTs. ADR-Unet achieved a mean absolute error of 55.49 HU and a Dice score of 0.86 for bone structures. All the models demonstrated Gamma pass rates above 99.4% and dose deviations within 2–3%, confirming clinical acceptability. These results highlight ADR-Unet and cGAN as promising solutions for accurate sCT generation, enabling effective MR-only radiotherapy. Full article
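
A brief sketch of the image-quality metrics quoted above (mean absolute error in HU and the Dice coefficient for bone) follows; the volumes and the HU threshold used to derive bone masks are stand-in assumptions.

```python
# Hedged sketch of MAE (HU) and Dice evaluation between synthetic and planning CT.
# Random arrays stand in for real co-registered volumes.
import numpy as np

def mae_hu(sct, ct, mask=None):
    diff = np.abs(sct - ct)
    return float(diff[mask].mean() if mask is not None else diff.mean())

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return float(2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8))

rng = np.random.default_rng(0)
ct = rng.normal(0, 300, (64, 64, 64))       # planning CT (HU), stand-in
sct = ct + rng.normal(0, 50, ct.shape)      # synthetic CT from the network, stand-in
bone_ct, bone_sct = ct > 200, sct > 200     # naive HU-threshold bone masks (assumption)
print("MAE (HU):", mae_hu(sct, ct), " bone Dice:", dice(bone_sct, bone_ct))
```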

11 pages, 2032 KiB  
Communication
Super-Resolution Reconstruction of LiDAR Images Based on an Adaptive Contour Closure Algorithm over 10 km
by Liang Shi, Xinyuan Zhang, Fei Han, Yicheng Wang, Shilong Xu, Xing Yang and Yihua Hu
Photonics 2025, 12(6), 569; https://doi.org/10.3390/photonics12060569 - 5 Jun 2025
Viewed by 435
Abstract
Reflective Tomography LiDAR (RTL) imaging, an innovative LiDAR technology, offers the significant advantage of an imaging resolution independent of detection distance and receiving optical aperture, evolving from Computed Tomography (CT) principles. However, distinct from transmissive imaging, RTL requires precise alignment of multi-angle echo data around the target’s rotation center before image reconstruction. This paper presents an adaptive contour closure algorithm for automated multi-angle echo data registration in RTL. A 10.38 km remote RTL imaging experiment validates the algorithm’s efficacy, showing that it improves the quality factor of reconstructed images by over 23% and effectively suppresses interference from target/detector jitter, laser pulse transmission/reception fluctuations, and atmospheric turbulence. These results support the development of advanced space target perception capabilities and drive the transition of space-based LiDAR from “point” measurements to “volumetric” perception, marking a crucial advancement in space exploration and surveillance. Full article
(This article belongs to the Special Issue Technologies and Applications of Optical Imaging)
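
To illustrate why multi-angle data must be aligned to the rotation centre before reconstruction, the sketch below jitters the projections of a test phantom, re-centres them with a crude centroid heuristic, and applies filtered back-projection with scikit-image. This is a generic CT-style illustration, not the paper's adaptive contour closure algorithm.

```python
# Generic tomography illustration: projection jitter, centroid re-centring, back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)        # small test target
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = radon(image, theta=theta)                    # columns = 1-D projections per angle

# Simulate per-angle jitter: random shifts of each projection along the detector axis
rng = np.random.default_rng(0)
jittered = np.stack([np.roll(sino[:, i], rng.integers(-5, 6))
                     for i in range(sino.shape[1])], axis=1)

# Re-centre each projection on its intensity centroid (crude stand-in for registration)
centre = (sino.shape[0] - 1) / 2.0
coords = np.arange(sino.shape[0])
aligned = np.empty_like(jittered)
for i in range(jittered.shape[1]):
    c = (coords * jittered[:, i]).sum() / (jittered[:, i].sum() + 1e-12)
    aligned[:, i] = np.roll(jittered[:, i], int(round(centre - c)))

recon = iradon(aligned, theta=theta)
print("mean absolute reconstruction error:", float(np.abs(recon - image).mean()))
```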

16 pages, 3004 KiB  
Article
Unveiling Species Diversity Within Early-Diverging Fungi from China VI: Four Absidia sp. nov. (Mucorales) in Guizhou and Hainan
by Yi-Xin Wang, Zi-Ying Ding, Xin-Yu Ji, Zhe Meng and Xiao-Yong Liu
Microorganisms 2025, 13(6), 1315; https://doi.org/10.3390/microorganisms13061315 - 5 Jun 2025
Cited by 1 | Viewed by 479
Abstract
Absidia is the most species-rich genus within the family Cunninghamellaceae, with its members commonly isolated from diverse substrates, particularly rhizosphere soil. In this study, four novel Absidia species, A. irregularis sp. nov., A. multiformis sp. nov., A. ovoidospora sp. nov., and A. verticilliformis sp. nov., were discovered from soil samples collected in southern and southwestern China, using integrated morphological and molecular analyses. Phylogenetic analyses based on concatenated ITS, SSU, LSU, Act, and TEF1α sequence data reconstructed trees that strongly supported the monophyly of each of these four new taxa. Key diagnostic features include A. irregularis (closely related to A. oblongispora) exhibiting irregular colony morphology, A. multiformis (sister to A. heterospora) demonstrating polymorphic sporangiospores, A. ovoidospora (forming a clade with A. panacisoli and A. abundans) producing distinctive ovoid sporangiospores, and A. verticilliformis (next to A. edaphica) displaying verticillately branched sporangiophores. Each novel species is formally described with comprehensive documentation, including morphological descriptions, illustrations, Fungal Names registration identifiers, designated type specimens, etymological explanations, maximum growth temperatures, and taxonomic comparisons. This work constitutes the sixth instalment in a series investigating early-diverging fungal diversity in China aiming to enhance our understanding of the diversity of fungi in tropical and subtropical ecosystems in Asia. In this paper, the known species of Absidia are expanded to 71. Full article

17 pages, 3256 KiB  
Article
Research on the Forming Detection Technology of Shell Plates Based on Laser Scanning
by Ji Wang, Baichen Wang, Yujun Liu, Rui Li, Shilin Huo, Jiawei Shi and Lin Pang
J. Mar. Sci. Eng. 2025, 13(6), 1057; https://doi.org/10.3390/jmse13061057 - 27 May 2025
Viewed by 351
Abstract
To address the low efficiency and insufficient accuracy of the traditional manual template method in the forming detection of shell plates, a digital solution based on a laser scanning detection system is proposed. A three-dimensional detection platform is built by combining a six-degree-of-freedom robotic arm with a high-precision line laser sensor, and a digital template method framework covering data acquisition, point cloud registration, surface reconstruction, and deviation analysis is constructed. A point cloud non-penetration registration algorithm that fuses boundary geometric information is proposed, and the surface is reconstructed and the digital template extracted using an improved Delaunay triangulation algorithm. Experimental verification shows that the method achieves a detection error of less than 1 mm for outer plates, shortens the single detection time to less than 10 min, and improves detection efficiency by more than 75% compared with the traditional method.
(This article belongs to the Section Ocean Engineering)
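
The deviation-analysis step can be sketched as a nearest-neighbour query against the design surface, as below; the synthetic clouds stand in for a registered scan and the sampled design geometry, and the paper's non-penetration registration is not reproduced.

```python
# Minimal deviation-analysis sketch: distance from each measured point to the design surface.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
design = rng.random((5000, 3)) * [1000.0, 800.0, 50.0]       # sampled design surface (mm)
measured = design[:3000] + rng.normal(0.0, 0.4, (3000, 3))   # registered scan with noise

tree = cKDTree(design)
dist, _ = tree.query(measured)           # point-to-nearest-reference distance (mm)
print("mean deviation %.3f mm, 95th percentile %.3f mm, max %.3f mm"
      % (dist.mean(), np.percentile(dist, 95), dist.max()))
```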

30 pages, 705 KiB  
Review
Mathematics and Machine Learning for Visual Computing in Medicine: Acquisition, Processing, Analysis, Visualization, and Interpretation of Visual Information
by Bin Li, Shixiang Feng, Jinhong Zhang, Guangbin Chen, Shiyang Huang, Sibei Li and Yuxin Zhang
Mathematics 2025, 13(11), 1723; https://doi.org/10.3390/math13111723 - 23 May 2025
Viewed by 394
Abstract
Visual computing in medicine involves handling the generation, acquisition, processing, analysis, exploration, visualization, and interpretation of medical visual information. Machine learning has become a prominent tool for data analytics and problem-solving: the process of enabling computers to automatically learn from data and obtain certain knowledge, patterns, or input–output relationships. Tasks in medical visual computing can often be transformed into machine learning tasks. In recent years, there has been a surge in research focusing on machine-learning-based visual computing. However, few reviews comprehensively introduce and survey the systematic implementation of machine-learning-based visual computing in medicine, and in the relevant reviews, little attention has been paid to using machine learning methods to transform medical visual computing tasks into data-driven learning problems with high-level feature representations, or to exploring their effectiveness in key medical applications such as image-guided surgery. This review addresses that gap and comprehensively and systematically surveys the recent advancements, challenges, and future directions of machine-learning-based medical visual computing with high-level features. The paper is organized as follows. The fundamentals and paradigm of visual computing in medicine are first concisely introduced. Four aspects of visual computing in medicine are then examined: (1) acquisition of visual information; (2) processing and analysis of visual information; (3) exploration and interpretation of visual information; and (4) image-guided surgery. In particular, the paper explores machine-learning-based methods and factors for visual computing tasks. Finally, future prospects are discussed. In conclusion, this literature review on machine learning for visual computing in medicine showcases the diverse applications and advancements in the field.

17 pages, 12183 KiB  
Article
Triplanar Point Cloud Reconstruction of Head Skin Surface from Computed Tomography Images in Markerless Image-Guided Surgery
by Jurica Cvetić, Bojan Šekoranja, Marko Švaco and Filip Šuligoj
Bioengineering 2025, 12(5), 498; https://doi.org/10.3390/bioengineering12050498 - 8 May 2025
Viewed by 626
Abstract
Accurate preoperative image processing in markerless image-guided surgeries is an important task. However, preoperative planning highly depends on the quality of medical imaging data. In this study, a novel algorithm for outer skin layer extraction from head computed tomography (CT) scans is presented and evaluated. Axial, sagittal, and coronal slices are processed separately to generate spatial data. Each slice is binarized using manually defined Hounsfield unit (HU) range thresholding to create binary images from which valid contours are extracted. The individual points of each contour are then projected into three-dimensional (3D) space using slice spacing and origin information, resulting in uniplanar point clouds. These point clouds are then fused through geometric addition into a single enriched triplanar point cloud. A two-step downsampling process is applied, first at the uniplanar level and then after merging, using a voxel size of 1 mm. Across two independent datasets with a total of 83 individuals, the merged cloud approach yielded an average of 11.61% more unique points compared to the axial cloud. The validity of the triplanar point cloud reconstruction was confirmed by a root mean square (RMS) registration error of 0.848 ± 0.035 mm relative to the ground truth models. These results establish the proposed algorithm as robust and accurate across different CT scanners and acquisition parameters, supporting its potential integration into patient registration for markerless image-guided surgeries. Full article
(This article belongs to the Special Issue Advancements in Medical Imaging Technology)
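
A hedged sketch of the per-slice processing described above (HU-range thresholding, contour extraction, projection into 3D using slice spacing and origin, and 1 mm voxel downsampling) is given below; the synthetic slice, HU window, and coordinate conventions are assumptions.

```python
# Illustrative per-slice contour-to-3D sketch with scikit-image and NumPy.
import numpy as np
from skimage import measure

def slice_to_points(slice_hu, origin, spacing, axis_index, hu_range=(-200.0, 3000.0)):
    """Return 3D points of contours found in one slice (assumed HU window for skin/tissue)."""
    binary = (slice_hu >= hu_range[0]) & (slice_hu <= hu_range[1])
    pts = []
    for contour in measure.find_contours(binary.astype(float), 0.5):
        rows, cols = contour[:, 0], contour[:, 1]
        in_plane = np.stack([cols * spacing[0], rows * spacing[1]], axis=1)
        third = np.full((len(contour), 1), axis_index * spacing[2])
        pts.append(np.hstack([in_plane, third]) + origin)
    return np.vstack(pts) if pts else np.empty((0, 3))

def voxel_downsample(points, voxel=1.0):
    # Keep one point per occupied 1 mm voxel (simple stand-in for a library downsampler)
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

rng = np.random.default_rng(0)
fake_slice = rng.normal(-1000, 50, (128, 128))   # air background (HU), synthetic
fake_slice[30:100, 40:90] = 40.0                 # soft-tissue block standing in for the head
cloud = slice_to_points(fake_slice, origin=np.array([0.0, 0.0, 0.0]),
                        spacing=(0.5, 0.5, 1.0), axis_index=12)
print(voxel_downsample(cloud).shape)
```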

26 pages, 9328 KiB  
Article
Global Optical and SAR Image Registration Method Based on Local Distortion Division
by Bangjie Li, Dongdong Guan, Yuzhen Xie, Xiaolong Zheng, Zhengsheng Chen, Lefei Pan, Weiheng Zhao and Deliang Xiang
Remote Sens. 2025, 17(9), 1642; https://doi.org/10.3390/rs17091642 - 6 May 2025
Viewed by 601
Abstract
Variations in terrain elevation cause images acquired under different imaging modalities to deviate from a linear mapping relationship. This effect is particularly pronounced between optical and SAR images, where the range-based imaging mechanism of SAR sensors leads to significant local geometric distortions, such as perspective shrinkage and occlusion. As a result, it becomes difficult to represent the spatial correspondence between optical and SAR images using a single geometric model. To address this challenge, we propose a global optical-SAR image registration method that leverages local distortion characteristics. Specifically, we introduce a Superpixel-based Local Distortion Division (SLDD) method, which defines superpixel region features and segments the image into local distortion and normal regions by computing the Mahalanobis distance between superpixel features. We further design a Multi-Feature Fusion Capsule Network (MFFCN) that integrates shallow salient features with deep structural details, reconstructing the dimensions of digital capsules to generate feature descriptors encompassing texture, phase, structure, and amplitude information. This design effectively mitigates the information loss and feature degradation problems caused by pooling operations in conventional convolutional neural networks (CNNs). Additionally, a hard negative mining loss is incorporated to further enhance feature discriminability. Feature descriptors are extracted separately from regions with different distortion levels, and corresponding transformation models are built for local registration. Finally, the local registration results are fused to generate a globally aligned image. Experimental results on public datasets demonstrate that the proposed method achieves superior performance over state-of-the-art (SOTA) approaches in terms of Root Mean Squared Error (RMSE), Correct Match Number (CMN), Distribution of Matched Points (Scat), Edge Fidelity (EF), and overall visual quality. Full article
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
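
The Mahalanobis-distance test used by SLDD to separate locally distorted from normal superpixel regions can be sketched as follows; the random feature vectors and the percentile threshold are assumptions, not the paper's region features or cut-off.

```python
# Minimal sketch of a superpixel-level Mahalanobis test for distortion division.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(0.0, 1.0, (500, 6))   # one feature vector per superpixel (stand-in)
features[:40] += 4.0                        # a minority of distorted superpixels

mean = features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
diff = features - mean
d_mahal = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

threshold = np.percentile(d_mahal, 90)      # assumed cut-off between region types
distorted = d_mahal > threshold
print("flagged", int(distorted.sum()), "of", len(features), "superpixels as locally distorted")
```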