Search Results (159)

Search Parameters:
Keywords = deformable image registration

16 pages, 15962 KB  
Article
SKUF Protocol: Slice, Keep, Unwrap, Fuse—A Pilot Multimodal Approach to Cardiac Innervation Mapping
by Igor Makarov, Olga Solovyova, Anna Starshinova, Dmitry Kudlay and Lubov Mitrofanova
Diagnostics 2026, 16(8), 1178; https://doi.org/10.3390/diagnostics16081178 - 16 Apr 2026
Viewed by 92
Abstract
Background/Objective: Cardiac innervation plays a critical role in regulating myocardial function and enabling the heart to adapt to physiological and pathological conditions. Although the general features of sympathetic and parasympathetic innervation of the myocardium are well described, the spatial organisation of nerve fibres within the cardiac muscle remains incompletely characterised. This study aimed to develop and validate the SKUF (Slice–Keep–Unwrap–Fuse) protocol, a multimodal framework for mapping myocardial innervation through the integration of histological data and magnetic resonance imaging (MRI). Methods: The study was performed on the heart of a 7-year-old patient who died from rupture of a cerebral vascular malformation without evidence of cardiovascular disease. Prior to histological processing, post-mortem MRI was performed to provide a precise anatomical reference. The heart was sectioned into sequential transverse rings of 4 mm thickness, yielding 71 paraffin blocks. Histological sections (3 μm) were immunostained with antibodies against UCHL-1 to visualise nerve fibres and scanned using an Aperio AT2 system (20× magnification). Automated image analysis was conducted using the SVSSlide Processor module, which included tissue segmentation, colour-based nerve fibre detection, and sliding-window density mapping. Heatmaps were assembled into ring-based myocardial reconstructions and co-registered with MRI slices using combined rigid and deformable registration, followed by three-dimensional reconstruction of innervation patterns. Results: A higher density of nerve fibres was observed in the right ventricular myocardium compared with the left ventricle, whereas larger nerve trunks were identified in the epicardium of the left ventricle. 
Quantitative analysis revealed a pronounced longitudinal gradient of innervation, with minimal density in the apical region and progressive increases towards the mid-ventricular segments, where maximal density and spatial organisation of neural structures were observed. The atrioventricular groove exhibited the greatest heterogeneity of innervation due to the presence of large nerve trunks and ganglionated plexuses. Integration of histological maps with MRI enabled three-dimensional visualisation of spatial clusters of nerve fibres. Conclusions: The SKUF protocol provides a robust framework for integrating histological and MRI data to generate three-dimensional maps of myocardial innervation. This approach may facilitate the development of high-resolution anatomical atlases of cardiac innervation and support future studies of neurocardiac mechanisms of arrhythmogenesis and targeted neuromodulation. Full article
(This article belongs to the Special Issue Advances in Cardiovascular Diseases: Diagnosis and Management)
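The sliding-window density mapping step mentioned in this abstract is not specified in detail. A minimal sketch of one plausible form, assuming a binary nerve-fibre mask and a square mean-filter window (the function name, summed-area-table approach, and border handling are illustrative choices, not taken from the paper):

```python
import numpy as np

def density_map(mask: np.ndarray, win: int) -> np.ndarray:
    """Sliding-window density of a binary nerve-fibre mask.

    Each output pixel is the fraction of positive pixels inside a
    win x win window centred on it, computed via a summed-area table.
    Windows are clipped at the image border.
    """
    m = mask.astype(np.float64)
    # Summed-area table with a zero border so window sums are 4 lookups.
    sat = np.zeros((m.shape[0] + 1, m.shape[1] + 1))
    sat[1:, 1:] = m.cumsum(0).cumsum(1)
    r = win // 2
    h, w = m.shape
    out = np.empty_like(m)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - r, 0), min(i + r + 1, h)
            j0, j1 = max(j - r, 0), min(j + r + 1, w)
            s = sat[i1, j1] - sat[i0, j1] - sat[i1, j0] + sat[i0, j0]
            out[i, j] = s / ((i1 - i0) * (j1 - j0))
    return out
```

The resulting per-pixel densities can then be rendered as a heatmap and assembled ring by ring, as the protocol describes.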

20 pages, 3700 KB  
Article
Infrared Small Target Detection Method Fusing Accurate Registration and Weighted Difference
by Quan Liang, Teng Wang, Kefang Wang, Lixing Zhao, Xiaoyan Li and Fansheng Chen
Sensors 2026, 26(8), 2406; https://doi.org/10.3390/s26082406 - 14 Apr 2026
Viewed by 210
Abstract
Low-orbit thermal infrared bidirectional whisk-broom imaging offers wide-swath coverage and high spatial resolution for monitoring moving targets such as aircraft, but large scan angles and terrain undulation cause non-rigid geometric distortion and radiometric inconsistency between forward and backward scans. These effects generate strong clutter in difference images and degrade small and weak target detection. To address this problem, we propose an infrared small target detection method that fuses accurate registration and weighted difference. First, we propose a hybrid multi-scale registration algorithm that achieves coarse affine registration through sparse feature–point matching and then iteratively corrects nonlinear deformations by integrating a global grayscale-driven force with a local sparse-feature-guided force, yielding a registration error of 0.3281 pixels. On this basis, a multi-scale weighted convolutional morphological difference algorithm is proposed. A novel dual-structure hollow top-hat transform is constructed to accurately estimate the background, and a multi-directional convolution mechanism is introduced to effectively suppress anisotropic edge clutter and enhance target saliency. Experiments on SDGSAT-1 thermal infrared bidirectional whisk-broom data show an SCRG of 18.27, and a detection rate of 91.2% when the false alarm rate is below 0.15%. The method outperforms representative competing algorithms and provides a useful reference for space-based aerial moving target detection. Full article
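The paper's dual-structure hollow top-hat transform is not detailed in the abstract. As a rough sketch of the operator family it builds on, the classical white top-hat (image minus its grey opening) suppresses smooth background while preserving small bright targets. This uses `scipy.ndimage` and is an illustration of the baseline technique, not the authors' operator:

```python
import numpy as np
from scipy.ndimage import grey_opening

def white_top_hat(img: np.ndarray, size: int = 5) -> np.ndarray:
    """Classical white top-hat: image minus its grey opening.

    Bright structures narrower than `size` are removed by the opening
    and therefore survive the subtraction; smooth background cancels.
    """
    return img - grey_opening(img, size=(size, size))
```

A hollow or dual-structure variant replaces the flat structuring element with a ring-shaped one so that background is estimated from an annulus around each pixel rather than the full window.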
17 pages, 1639 KB  
Article
Cascade Registration and Fusion for Unaligned Infrared and Visible Images in Autonomous Driving
by Long Xiao, Yidong Xie and Chengda Yao
Electronics 2026, 15(7), 1427; https://doi.org/10.3390/electronics15071427 - 30 Mar 2026
Viewed by 283
Abstract
Infrared and visible image fusion is a critical technology for enhancing the all-weather perception capabilities of autonomous driving systems. However, the inherent physical parallax of vehicle-mounted sensors combined with motion-induced vibrations makes it difficult to achieve strict alignment between the source images. Direct fusion of such misaligned pairs leads to ghosting artifacts, which significantly compromises driving safety. To address this challenge, this paper proposes a cascaded deep fusion framework tailored for autonomous driving scenarios. A dual-modal perception dataset is first constructed, incorporating realistic physical parallax and non-rigid deformations. Subsequently, a decoupled strategy is established, characterized by geometric correction followed by semantic fusion: the Static-Feature Recursive Registration (SFRR) network is utilized to explicitly correct the spatial misalignments caused by parallax, thereby establishing geometric consistency; then, the Hierarchical Invertible Block Fusion (HIBF) network achieves lossless integration of cross-modal features by combining spatial frequency separation with invertible interaction techniques. Experimental results demonstrate that the proposed method outperforms representative algorithms across several metrics, including Mutual Information (MI), Visual Information Fidelity (VIF), Structural Similarity (SSIM), and Correlation Coefficient (CC), producing high-quality fused images with clear structural definitions. Full article
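Of the fusion metrics listed (MI, VIF, SSIM, CC), mutual information has the simplest histogram-based estimate. A minimal sketch, with the bin count an illustrative choice rather than anything specified by the paper:

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information between two images, in nats.

    Estimates the joint distribution with a 2-D histogram and applies
    MI = sum p(x,y) * log(p(x,y) / (p(x) * p(y))) over non-empty bins.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over rows
    py = p.sum(axis=0, keepdims=True)   # marginal over columns
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Higher MI between the fused image and each source indicates that more source information was carried into the fusion result.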

30 pages, 1965 KB  
Article
Joint Denoising and Motion-Correction for Low-Dose CT Myocardial Perfusion Imaging Using Deep Learning
by Mahmud Hasan, Aaron So and Mahmoud R. El-Sakka
Electronics 2026, 15(6), 1286; https://doi.org/10.3390/electronics15061286 - 19 Mar 2026
Viewed by 358
Abstract
Computed Tomography (CT) is a widely used imaging modality that employs X-rays and computational reconstruction to visualize internal anatomy. Although higher radiation doses produce higher-quality images, they also increase long-term cancer risk, motivating the use of low-dose protocols. However, low-dose CT data inherently suffer from elevated Poisson–Gaussian noise, necessitating effective denoising strategies. In myocardial CT perfusion (CTP) imaging, this challenge is compounded by residual cardiac motion, which misaligns consecutive time points and impairs accurate estimation of perfusion maps for diagnosing coronary artery disease. Traditional approaches typically treat these two problems, noise and motion, separately, denoising the reconstructed images first or applying the registration first. Such serial pipelines often degrade clinically significant features; e.g., denoising may destroy structural details essential for registration, while motion correction can distort subtle intensity cues needed for noise modelling. To overcome these limitations, we propose a unified deep learning framework that performs noise suppression and motion correction jointly for low-dose myocardial CTP. The method integrates two complementary components through a parallel ensemble strategy: (i) a modified Fast and Flexible Denoising Network (FFDNet) that incorporates noise-level maps to mitigate blended noise effectively, and (ii) a CNN-based registration model, extended with Time Enhancement Curve (TEC) correction and 4D physiological consistency constraints to estimate temporally coherent and anatomically plausible motion fields. By combining their outputs without iterative dependencies, the proposed framework produces motion-corrected and denoised CTP sequences in a single unified processing step, thereby better preserving myocardial structure and perfusion dynamics than conventional serial pipelines. 
The model has been evaluated using both reference-based (MSE, PSNR, SSIM, PCC, Noise Variance, TRE) and no-reference (NIQE, FID, KID, AUC) image quality metrics, supplemented by expert human assessment. Results demonstrate that jointly learning noise characteristics and motion patterns enables restoration of low-dose CTP images while minimizing feature corruption, thereby advancing the clinical utility of low-dose myocardial CTP imaging. Full article
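Among the reference-based metrics listed, PSNR has a standard closed form. A minimal sketch, where `data_range` is the assumed peak signal value (e.g. 255 for 8-bit images):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```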

21 pages, 2878 KB  
Article
NMLoNet: An End-to-End Intelligent Vehicle Localization Network Using Navigation Maps
by Qingtong Yuan and Yicheng Li
World Electr. Veh. J. 2026, 17(3), 150; https://doi.org/10.3390/wevj17030150 - 17 Mar 2026
Viewed by 279
Abstract
Accurate and reliable localization is crucial for advanced autonomous driving systems. Traditional high-precision localization approaches rely on meticulously annotated high-definition (HD) maps and employ visual-geometric methods to derive accurate pose information. However, the construction, maintenance, and updating of HD maps are costly and time-consuming. In contrast, localization using publicly available navigation maps provides a low-cost and scalable alternative. Existing methods typically align BEV (Bird’s-Eye-View) features extracted from surround-view images with navigation maps to obtain localization results. Although such approaches can achieve high accuracy, they often neglect the inherent modality gap between BEV features and navigation maps, leading to localization errors. To address this issue, we propose NMLoNet: An End-to-End Intelligent Vehicle Localization Network Using Navigation Maps. The proposed method exploits road semantic elements to effectively bridge the modality gap between BEV representations and navigation maps. Specifically, a Deformable Attention Module is introduced after BEV feature extraction to capture long-range dependencies among BEV features. Furthermore, we innovatively incorporate vector map constraints to minimize the discrepancy between BEV and navigation map features. In addition, a multi-level cross-modal feature registration mechanism is designed to achieve more precise alignment between BEV and map representations. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that NMLoNet achieves state-of-the-art performance, improving localization accuracy by approximately 11% under monocular settings and 24% under surround-view configurations. Moreover, the proposed network maintains robust localization performance in complex and highly dynamic driving environments. Full article
(This article belongs to the Section Automated and Connected Vehicles)

30 pages, 29830 KB  
Article
From Hematoxylin and Eosin to Masson’s Trichrome: A Comprehensive Framework for Virtual Stain Transformation in Chronic Liver Disease Diagnosis
by Hossam Magdy Balaha, Khadiga M. Ali, Ali Mahmoud, Ahmed Aboudessouki, Mohamed T. Azam, Guruprasad A. Giridharan, Dibson Gondim and Ayman El-Baz
Diagnostics 2026, 16(5), 764; https://doi.org/10.3390/diagnostics16050764 - 4 Mar 2026
Viewed by 620
Abstract
Background/Objectives: Virtual histological staining offers a rapid, cost-effective alternative to physical reprocessing but faces challenges related to spatial misalignment and staining heterogeneity between Hematoxylin and Eosin (H&E) and Masson’s Trichrome (MT) domains. This study develops a robust framework for H&E-to-MT virtual staining to enable accurate fibrosis assessment without additional tissue consumption. Methods: We propose a transformer-based generative adversarial network (TbGAN) supported by a multi-stage alignment pipeline (SIFT (scale-invariant feature transform) coarse alignment, ORB/homography patch registration, and B-spline free-form deformation) and a weighted fusion mechanism combining four configuration outputs (O/10/3, O/3/10, R/10/3, and R/3/10). The framework was validated on 27 whole-slide images (>100,000 aligned patches) through 24 independent experiments. Results: The fused approach achieved state-of-the-art performance: MI = 0.9815 ± 0.0934, SSIM = 0.7474 ± 0.0597, NCC = 0.9320 ± 0.0220, and CS = 0.9946 ± 0.0014. Statistical analysis confirmed enhanced stability through narrower interquartile ranges, fewer outliers, and tighter 95% confidence intervals compared to individual configurations. Qualitative assessment demonstrated preserved collagen morphology critical for fibrosis staging. Conclusions: Our framework provides a reliable, IRB-compliant solution for virtual MT staining that maintains high structural fidelity suitable for diagnostic support. It enables resource-efficient fibrosis quantification and supports integration into clinical digital pathology workflows without patient-specific recalibration. Full article

28 pages, 11762 KB  
Article
A Coarse-to-Fine Optical-SAR Image Registration Algorithm for UAV-Based Multi-Sensor Systems Using Geographic Information Constraints and Cross-Modal Feature Consistency Mapping
by Xiaoyong Sun, Zhen Zuo, Xiaojun Guo, Xuan Li, Peida Zhou, Runze Guo and Shaojing Su
Remote Sens. 2026, 18(5), 683; https://doi.org/10.3390/rs18050683 - 25 Feb 2026
Viewed by 407
Abstract
Optical and synthetic aperture radar (SAR) image registration faces challenges from nonlinear radiometric distortions and geometric deformations caused by different imaging mechanisms. This paper proposes a coarse-to-fine registration algorithm integrating geographic information constraints with cross-modal feature consistency mapping. The coarse stage employs imaging geometry-based coordinate transformation with airborne navigation data to eliminate scale and rotation differences. The fine stage constructs a multi-scale phase congruency-based feature response aggregation model combined with rotation-invariant descriptors and global-to-local search for sub-pixel alignment. Experiments on integrated airborne optical/SAR datasets demonstrate superior performance with an average RMSE of 2.00 pixels, outperforming both traditional handcrafted methods (3MRS, OS-SIFT, POS-GIFT, GLS-MIFT) and state-of-the-art deep learning approaches (SuperGlue, LoFTR, ReDFeat, SAROptNet) while reducing execution time by 37.0% compared with the best-performing baseline. The proposed coarse registration also serves as an effective preprocessing module that improves SuperGlue’s matching rate by 167% and LoFTR’s by 109%, with a hybrid refinement strategy achieving 1.95 pixels RMSE. The method demonstrates robust performance under challenging conditions, enabling real-time UAV-based multi-sensor fusion applications. Full article

22 pages, 4598 KB  
Article
Deep Learning Based Correction Algorithms for 3D Medical Reconstruction in Computed Tomography and Macroscopic Imaging
by Tomasz Les, Tomasz Markiewicz, Malgorzata Lorent, Miroslaw Dziekiewicz and Krzysztof Siwek
Appl. Sci. 2026, 16(4), 1954; https://doi.org/10.3390/app16041954 - 15 Feb 2026
Viewed by 494
Abstract
This paper introduces a hybrid two-stage registration framework for reconstructing three-dimensional (3D) kidney anatomy from macroscopic slices, using CT-derived models as the geometric reference standard. The approach addresses the data-scarcity and high-distortion challenges typical of macroscopic imaging, where fully learning-based registration (e.g., VoxelMorph) often fails to generalize due to limited training diversity and large nonrigid deformations that exceed the capture range of unconstrained convolutional filters. In the proposed pipeline, the Optimal Cross-section Matching (OCM) algorithm first performs constrained global alignment—translation, rotation, and uniform scaling—to establish anatomically consistent slice initialization. Next, a lightweight deep-learning refinement network, inspired by VoxelMorph, predicts residual local deformations between consecutive slices. The core novelty of this architecture lies in its hierarchical decomposition of the registration manifold: the OCM acts as a deterministic geometric anchor that neutralizes high-amplitude variance, thereby constraining the learning task to a low-dimensional residual manifold. This hybrid OCM + DL design integrates explicit geometric priors with the flexible learning capacity of neural networks, ensuring stable optimization and plausible deformation fields even with few training examples. Experiments on an original dataset of 40 kidneys demonstrated that the OCM + DL method achieved the highest registration accuracy across all evaluated metrics: NCC = 0.91, SSIM = 0.81, Dice = 0.90, IoU = 0.81, HD95 = 1.9 mm, and volumetric agreement DCVol = 0.89. Compared to single-stage baselines, this represents an average improvement of approximately 17% over DL-only and 14% over OCM-only, validating the synergistic contribution of the proposed hybrid strategy over standalone iterative or data-driven methods. 
The pipeline maintains physical calibration via Hough-based grid detection and employs Bézier-based contour smoothing for robust meshing and volume estimation. Although validated on kidney data, the proposed framework generalizes to other soft-tissue organs reconstructed from optical or photographic cross-sections. By decoupling interpretable global optimization from data-efficient deep refinement, the method advances the precision, reproducibility, and anatomical realism of multimodal 3D reconstructions for surgical planning, morphological assessment, and medical education. Full article
(This article belongs to the Special Issue Engineering Applications of Hybrid Artificial Intelligence Tools)
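The overlap metrics reported above (Dice = 0.90, IoU = 0.81) have standard definitions on binary masks. A minimal sketch:

```python
import numpy as np

def dice_iou(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    """Dice coefficient and IoU (Jaccard index) of two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    return float(dice), float(inter / union)
```

As a consistency check, Dice = 2·IoU / (1 + IoU), so the reported IoU of 0.81 implies Dice ≈ 0.895, matching the reported 0.90.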

17 pages, 1423 KB  
Article
Residual Motion Correction in Low-Dose Myocardial CT Perfusion Using CNN-Based Deformable Registration
by Mahmud Hasan, Aaron So and Mahmoud R. El-Sakka
Electronics 2026, 15(2), 450; https://doi.org/10.3390/electronics15020450 - 20 Jan 2026
Cited by 1 | Viewed by 410
Abstract
Dynamic myocardial CT perfusion imaging enables functional assessment of coronary artery stenosis and myocardial microvascular disease. However, it is susceptible to residual motion artifacts arising from cardiac and respiratory activity. These artifacts introduce temporal misalignments, distorting Time-Enhancement Curves (TECs) and leading to inaccurate myocardial perfusion measurements. Traditional nonrigid registration methods can address such motion but are often computationally expensive and less effective when applied to low-dose images, which are prone to increased noise and structural degradation. In this work, we present a CNN-based motion-correction framework specifically trained for low-dose cardiac CT perfusion imaging. The model leverages spatiotemporal patterns to estimate and correct residual motion between time frames, aligning anatomical structures while preserving dynamic contrast behaviour. Unlike conventional methods, our approach avoids iterative optimization and manually defined similarity metrics, enabling faster, more robust corrections. Quantitative evaluation demonstrates significant improvements in temporal alignment, with reduced Target Registration Error (TRE) and increased correlation between voxel-wise TECs and reference curves. These enhancements enable more accurate myocardial perfusion measurements. Noise from low-dose scans affects registration performance, but this remains an open challenge. This work emphasizes the potential of learning-based methods to perform effective residual motion correction under challenging acquisition conditions, thereby improving the reliability of myocardial perfusion assessment. Full article
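Target Registration Error, used above to quantify temporal alignment, is conventionally the distance between corresponding landmarks after registration. A minimal sketch of the mean form (whether the paper reports mean or per-landmark TRE is not stated in the abstract):

```python
import numpy as np

def tre(fixed_pts: np.ndarray, warped_pts: np.ndarray) -> float:
    """Mean target registration error.

    Average Euclidean distance between corresponding landmarks in the
    fixed image and the registered (warped) moving image; both inputs
    are (N, d) arrays in the same physical units (e.g. mm).
    """
    return float(np.mean(np.linalg.norm(fixed_pts - warped_pts, axis=1)))
```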

16 pages, 321 KB  
Systematic Review
Quantifying In Vivo Arterial Deformation from CT and MRI: A Systematic Review of Segmentation, Motion Tracking, and Kinematic Metrics
by Rodrigo Valente, Bernardo Henriques, André Mourato, José Xavier, Moisés Brito, Stéphane Avril, António Tomás and José Fragata
Bioengineering 2026, 13(1), 121; https://doi.org/10.3390/bioengineering13010121 - 20 Jan 2026
Viewed by 562
Abstract
This article presents a systematic review on methods for quantifying three-dimensional, time-resolved (3D+t) deformation and motion of human arteries from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched Scopus, Web of Science, IEEE Xplore, Google Scholar, and PubMed on 19 December 2025 for in vivo, patient-specific CT or MRI studies reporting motion or deformation of large human arteries. We included studies that quantified arterial deformation or motion tracking and excluded non-vascular tissues, in vitro or purely computational work. Thirty-five studies were included in the qualitative synthesis; most were small, single-centre observational cohorts. Articles were analysed qualitatively, and results were synthesised narratively. Across the 35 studies, the most common segmentation approaches are active contours and threshold, while temporal motion is tracked using either voxel registration or surface methods. These kinematic data are used to compute metrics such as circumferential and longitudinal strain, distensibility, and curvature. Several studies also employ inverse methods to estimate wall stiffness. The findings consistently show that arterial strain decreases with age (on the order of 20% per decade in some cases) and in the presence of disease, that stiffness correlates with geometric remodelling, and that deformation is spatially heterogeneous. However, insufficient data prevents meaningful comparison across methods. Full article
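The kinematic metrics named above have simple diameter- and area-based forms, though exact definitions vary across the reviewed studies. The versions below are common conventions, not the review's prescriptions:

```python
def circumferential_strain(d_dia: float, d_sys: float) -> float:
    """Circumferential strain from diastolic and systolic diameters:
    relative diameter change (D_sys - D_dia) / D_dia, dimensionless."""
    return (d_sys - d_dia) / d_dia

def distensibility(a_dia: float, a_sys: float, pulse_pressure: float) -> float:
    """Area distensibility: relative systolic lumen-area change per unit
    pulse pressure, e.g. in 1/mmHg if pulse_pressure is in mmHg."""
    return (a_sys - a_dia) / (a_dia * pulse_pressure)
```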

16 pages, 1633 KB  
Review
A Review on Registration Techniques for Cardiac Computed Tomography and Ultrasound Images
by Zongyang Li, Huijing He, Qi Wang, Luyu Li, Hongjian Gao and Jiehui Li
Bioengineering 2025, 12(12), 1351; https://doi.org/10.3390/bioengineering12121351 - 11 Dec 2025
Viewed by 910
Abstract
With the rapid development of medical imaging technology, the early diagnosis and treatment of heart disease have been significantly improved. Cardiac CT (Computed Tomography) and ultrasound images are often used in combination to provide more comprehensive information on cardiac structure and function due to their respective advantages and limitations. However, due to the significant differences in imaging principles, resolutions, and viewing angles between these two imaging modalities, how to effectively register cardiac CT and ultrasound images has become an important research topic in imaging and clinical applications. This article summarizes the research progress of cardiac CT and ultrasound image registration, and analyzes the existing registration methods and their advantages and disadvantages. Firstly, this article summarizes traditional registration methods based on image intensity, feature points, and regions, and explores the application of rigid and non-rigid registration algorithms. Secondly, in view of common challenges in cardiac CT and ultrasound image registration, such as image noise, deformation, and differences in imaging time, this article discusses the recent advances in multimodal registration technology in cardiac imaging and forecasts the potential of deep learning methods in registration. In addition, this article also evaluates the application effects and limitations of these methods in clinical practice, and finally looks forward to the future development direction of cardiac image registration technology, especially its potential applications in personalized medicine and real-time monitoring. Through a comprehensive review of the current research status of cardiac CT and ultrasound image registration, this article provides a systematic theoretical framework for researchers in related fields and provides a reference for future technological breakthroughs and clinical translation. Full article
(This article belongs to the Section Biosignal Processing)

28 pages, 4896 KB  
Article
Development and Validation of an Openable Spherical Target System for High-Precision Registration and Georeferencing of Terrestrial Laser Scanning Point Clouds
by Maria Makuch and Pelagia Gawronek
Sensors 2025, 25(24), 7512; https://doi.org/10.3390/s25247512 - 10 Dec 2025
Viewed by 808
Abstract
Terrestrial laser scanning (TLS) point clouds require high-precision registration and georeferencing to be used effectively. Only then can data from multiple stations be integrated and transformed from the instrument’s local coordinate system into a common, stable reference frame that ensures temporal consistency for further analyses of displacement and deformation. The article demonstrates the validation of an innovative referencing system devised to improve the reliability and accuracy of registering and georeferencing TLS point clouds. The primary component of the system is openable reference spheres, whose centroids can be directly and precisely determined using surveying methods. It also includes dedicated adapters: tripods and adjustable F-clamps with which the spheres can be securely mounted on various structural components, facilitating the optimal distribution of the reference markers. Laboratory tests with four modern laser scanners (Z+F Imager 5010C, Riegl VZ-400, Leica ScanStation P40, and Trimble TX8) revealed sub-millimetre accuracy of sphere fit and form errors, along with the sphere distance error within the acceptance threshold. This confirms that there are no significant systematic errors and that the system is fully compatible with various TLS technologies. The registration and georeferencing quality parameters demonstrate the system’s stability and repeatability. They were additionally verified with independent control points and geodetic levelling of the centres of the spheres. The system overcomes the critical limitations of traditional reference spheres because their centres can be measured directly using surveying methods. This facilitates registration and georeferencing accuracy on par with, or even better than, that of commercial targets. The proposed system serves as a stable and repeatable reference frame suitable for high-precision engineering applications, deformation monitoring, and longitudinal analyses. Full article
(This article belongs to the Section Remote Sensors)

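The sphere-based referencing above hinges on recovering each reference sphere's centre from scanned points. As an illustration only (this is not the authors' processing chain), the standard algebraic least-squares sphere fit can be sketched in Python with NumPy:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    ||p - c||^2 = r^2 rewritten as the linear system
    2 p.c + (r^2 - |c|^2) = |p|^2, solved for centre c and radius r.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic check: noisy points on a sphere of radius 72.5 centred at (1, 2, 3)
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 72.5 * d + rng.normal(scale=0.1, size=(500, 3))
c, r = fit_sphere(pts)
```

With many points covering the sphere, the centre estimate is far more precise than the per-point noise, which is what makes sphere targets attractive as reference markers.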
31 pages, 5390 KB  
Article
Artificial Intelligence-Driven Mobile Platform for Thermographic Imaging to Support Maternal Health Care
by Lucas Miguel Iturriago-Salas, Jeison Andres Mesa-Sarmiento, Paola Alexandra Castro-Cabrera, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(11), 466; https://doi.org/10.3390/computers14110466 - 1 Nov 2025
Cited by 1 | Viewed by 1313
Abstract
Maternal health care during labor requires the continuous and reliable monitoring of analgesic procedures, yet conventional systems are often subjective, indirect, and operator-dependent. Infrared thermography (IRT) offers a promising non-invasive approach for labor epidural analgesia (LEA) monitoring, but its practical implementation is hindered by clinical and hardware limitations. This work presents a novel artificial intelligence-driven mobile platform to overcome these hurdles. The proposed solution integrates a lightweight deep learning model for semantic segmentation, a B-spline-based free-form deformation (FFD) approach for non-rigid dermatome registration, and efficient on-device inference. Our analysis identified a U-Net with a MobileNetV3 backbone as the optimal architecture, achieving a high Dice score of 0.97 and a 4.5% intersection over union (IoU) gain over heavier backbones while being 73% more parameter-efficient. The entire AI pipeline is deployed on a commercial smartphone via TensorFlow Lite, achieving an on-device inference time of approximately two seconds per image. Deployed within a user-friendly interface, our approach provides straightforward feedback to support decision making in labor management. By integrating thermal imaging with deep learning and mobile deployment, the proposed system provides a practical solution to enhance maternal care. By offering a quantitative, automated tool, this work demonstrates a viable pathway to augment or replace subjective clinical assessments with objective, data-driven monitoring, bridging the gap between advanced AI research and point-of-care practice in obstetric anesthesia. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)

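The segmentation figures quoted above (a Dice score of 0.97, IoU gains) are standard overlap metrics on binary masks. A minimal sketch of how they are typically computed, for illustration only rather than the authors' evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over union: |A∩B| / |A∪B|."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [1, 0]])
d = dice_score(p, t)   # ≈ 0.5
i = iou_score(p, t)    # ≈ 1/3
```

Dice is always at least as large as IoU for the same pair of masks, which is why the two are reported together but never coincide except at 0 and 1.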
22 pages, 6682 KB  
Article
Multimodal Fire Salient Object Detection for Unregistered Data in Real-World Scenarios
by Ning Sun, Jianmeng Zhou, Kai Hu, Chen Wei, Zihao Wang and Lipeng Song
Fire 2025, 8(11), 415; https://doi.org/10.3390/fire8110415 - 26 Oct 2025
Viewed by 1828
Abstract
In real-world fire scenarios, complex lighting conditions and smoke interference significantly challenge the accuracy and robustness of traditional fire detection systems. Fusion of complementary modalities, such as visible light (RGB) and infrared (IR), is essential to enhance detection robustness. However, spatial shifts and geometric distortions occur in multi-modal image pairs collected by multi-source sensors due to installation deviations and inconsistent intrinsic parameters. Existing multi-modal fire detection frameworks typically depend on pre-registered data, an assumption that breaks down under modal misalignment in practical deployment. To overcome this limitation, we propose an end-to-end multi-modal Fire Salient Object Detection framework capable of dynamically fusing cross-modal features without pre-registration. Specifically, the Channel Cross-enhancement Module (CCM) facilitates semantic interaction across modalities in salient regions, suppressing noise from spatial misalignment. The Deformable Alignment Module (DAM) achieves adaptive correction of geometric deviations through cascaded deformation compensation and dynamic offset learning. For validation, we constructed an unregistered indoor fire dataset (Indoor-Fire) covering common fire scenarios. Generalizability was further evaluated on an outdoor dataset (RGB-T Wildfire). To fully validate the effectiveness of the method in complex building fire scenarios, we also conducted experiments on a Fire in Historic Buildings dataset. Experimental results demonstrate that the F1-score reaches 83% on both datasets, with the IoU maintained above 70%. Notably, while maintaining high accuracy, the parameter count (91.91 M) is only 28.1% of that of the second-best SACNet (327 M). This method provides a robust solution for unaligned or weakly aligned modal fusion caused by sensor differences and is highly suitable for deployment in intelligent firefighting systems. Full article

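The Deformable Alignment Module described above learns per-pixel offsets to warp one modality onto the other. The core warping step, resampling an image through a dense offset field, can be sketched in plain NumPy; this is a hypothetical minimal version, whereas the paper's module learns the offsets with a network:

```python
import numpy as np

def warp_with_offsets(image, offsets):
    """Bilinearly resample a 2-D image through a per-pixel offset field.

    offsets has shape (2, H, W): out[y, x] = image[y + dy, x + dx],
    with source coordinates clamped to the image border.
    """
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(yy + offsets[0], 0, h - 1)
    sx = np.clip(xx + offsets[1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy, wx = sy - y0, sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
identity = warp_with_offsets(img, np.zeros((2, 4, 4)))  # zero offsets leave the image unchanged
```

Because the interpolation is differentiable in the offsets, gradients can flow back into whatever network predicts them, which is what makes this kind of alignment trainable end to end.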
15 pages, 2039 KB  
Article
Optimising Multimodal Image Registration Techniques: A Comprehensive Study of Non-Rigid and Affine Methods for PET/CT Integration
by Babar Ali, Mansour M. Alqahtani, Essam M. Alkhybari, Ali H. D. Alshehri, Mohammad Sayed and Tamoor Ali
Diagnostics 2025, 15(19), 2484; https://doi.org/10.3390/diagnostics15192484 - 28 Sep 2025
Cited by 1 | Viewed by 1752
Abstract
Background/Objective: Multimodal image registration plays a critical role in modern medical imaging, enabling the integration of complementary modalities such as positron emission tomography (PET) and computed tomography (CT). This study compares the performance of three widely used image registration techniques—Demons Image Registration with Modality Transformation, Free-Form Deformation using the Medical Image Registration Toolbox (MIRT), and MATLAB Intensity-Based Registration—for improving PET/CT image alignment. Methods: A total of 100 matched PET/CT image slices from a clinical scanner were analysed. Preprocessing techniques, including histogram equalisation and contrast enhancement (via imadjust and adapthisteq), were applied to minimise intensity discrepancies. Each registration method was evaluated under varying parameter conditions with regard to sigma fluid (range 4–8), histogram bins (range 100–256), and interpolation methods (linear and cubic). Performance was assessed using quantitative metrics: root mean square error (RMSE), mean squared error (MSE), mean absolute error (MAE), the Pearson correlation coefficient (PCC), and standard deviation (STD). Results: Demons registration achieved optimal performance at a sigma fluid value of 6, with an RMSE of 0.1529, and demonstrated superior computational efficiency. The MIRT showed better adaptability to complex anatomical deformations, with an RMSE of 0.1725. MATLAB Intensity-Based Registration, when combined with contrast enhancement, yielded the highest accuracy (RMSE = 0.1317 at alpha = 6). Preprocessing improved registration accuracy, reducing the RMSE by up to 16%. Conclusions: Each registration technique has distinct advantages: the Demons algorithm is ideal for time-sensitive tasks, the MIRT is suited to precision-driven applications, and MATLAB-based methods offer flexible processing for large datasets. 
This study provides a foundational framework for optimising PET/CT image registration in both research and clinical environments. Full article
(This article belongs to the Special Issue Diagnostics in Oncology Research)

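The error metrics used in the comparison above (RMSE, MSE, MAE, PCC, STD) are straightforward to compute between a reference slice and its registered counterpart. A minimal NumPy sketch, for illustration only (the study itself used MATLAB and the MIRT):

```python
import numpy as np

def registration_metrics(ref, reg):
    """Similarity metrics between a reference image and a registered image."""
    ref = np.asarray(ref, dtype=float).ravel()
    reg = np.asarray(reg, dtype=float).ravel()
    err = ref - reg
    mse = np.mean(err ** 2)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "MAE": np.mean(np.abs(err)),
        "PCC": np.corrcoef(ref, reg)[0, 1],
        "STD": np.std(err),
    }

ref = np.array([0.0, 1.0, 2.0, 3.0])
reg = ref + 1.0  # a pure intensity offset
m = registration_metrics(ref, reg)
# constant 1-unit offset: RMSE = MAE = 1, STD of the error = 0, PCC = 1
```

Note that PCC is invariant to a constant intensity offset while RMSE is not, which is why studies like the one above report both alongside the error statistics.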