Search Results (16,336)

Search Parameters:
Keywords = 3D image

15 pages, 1610 KB  
Article
In Silico Investigation of an Innovative Cone-Beam CT Configuration for Quantitative Imaging
by Antonio Sarno, Ivan Veronese, Paolo Mauriello, Immacolata Vanore, Antonio Minopoli, Carlos Maximiliano Mollo, Silvio Pardi, Gianfranco Paternò, Mariagabriella Pugliese, Riccardo de Asmundis and Paolo Cardarelli
Appl. Sci. 2026, 16(3), 1404; https://doi.org/10.3390/app16031404 - 29 Jan 2026
Abstract
Quantitative evaluations in 3D images acquired via Cone-Beam Computed Tomography (CBCT) are limited by the scatter abundance and cone-beam artifacts. This work investigates benefits in using an innovative scanning geometry in CBCT (eCT), which replaces each projection of the conventional scanning protocol with a series of collimated projections (Np) acquired over an oscillating trajectory, realized either with an oscillating source or a multi-spot array. In silico tests employed a cylindrical water phantom embodying inserts of four biological materials. 1 mm-thick bone slabs were sandwiched between 9 mm water slabs to evaluate the image conspicuity. eCT improved the Hounsfield Unit (HU) accuracy, with a direct relation with Np. eCT with Np = 10 reduced the bias of the estimated HU more than two times when compared to CBCT. Increasing the Np presented a large impact on the image conspicuity for portions of the FOV distant from the central axial plane, with the signal-to-noise ratio between water and bone slabs increasing by a factor of 18 for Np = 10 compared to CBCT. The proposed eCT configuration is expected to be adopted in applications without strict demand for scanning time and projection number, such as dentomaxillofacial and intrasurgical imaging, imaging of the extremities, and image-guided radiotherapy. Full article
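As a point of reference for the HU-bias and water-to-bone SNR figures quoted above, such quantities are typically derived from ROI statistics of the reconstructed volume. Below is a minimal NumPy sketch under assumed definitions (mean-minus-nominal bias; mean difference over pooled standard deviation for SNR); the ROI arrays are synthetic stand-ins and the paper's exact formulas may differ.

```python
import numpy as np

def hu_bias(roi_hu: np.ndarray, nominal_hu: float) -> float:
    """Signed bias between the mean reconstructed HU in an ROI and its nominal value."""
    return float(roi_hu.mean() - nominal_hu)

def roi_snr(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """SNR between two regions (e.g., water vs. bone slabs):
    absolute mean difference divided by the pooled standard deviation."""
    pooled_std = np.sqrt(0.5 * (roi_a.var() + roi_b.var()))
    return float(abs(roi_a.mean() - roi_b.mean()) / pooled_std)

# Hypothetical example: a water ROI nominally at 0 HU and a bone ROI near 1000 HU.
rng = np.random.default_rng(0)
water = rng.normal(12.0, 35.0, size=5000)   # biased, noisy water voxels
bone = rng.normal(980.0, 60.0, size=5000)
print(f"water HU bias = {hu_bias(water, 0.0):.1f}, water/bone SNR = {roi_snr(water, bone):.1f}")
```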
14 pages, 2934 KB  
Article
Toward Wide-Field, Extended-Range 3D Vision: A Biomimetic Curved Compound-Eye Imaging System
by Songchang Zhang, Xibin Zhang, Yingsong Zhao, Xiangbo Ren, Weixing Yu and Huangrong Xu
Sensors 2026, 26(3), 901; https://doi.org/10.3390/s26030901 - 29 Jan 2026
Abstract
This work presents a biomimetic curved compound-eye imaging system (BCCEIS) engineered for extended-range depth mapping. The system is designed to emulate an apposition-type compound eye and comprises three key components: a hemispherical array of lenslets forming a curved multi-aperture imaging surface, an optical relay subsystem that transforms the curved focal plane into a flat image plane compatible with a commercial CMOS sensor, and a high-resolution CMOS detector. Comprehensive optical analysis confirms effective aberration correction, with the root-mean-square (RMS) spot radii across the field of view (FOV) remaining smaller than the radius of the Airy disk. The fabricated prototype achieves an angular resolution of 2.5 mrad within an ultra-wide 97.4° FOV. Furthermore, the system demonstrates accurate depth reconstruction within the entire FOV at distances up to approximately 2 m, exhibiting errors below 2%. Owing to its compact form, wide FOV, and robust depth-sensing performance, the BCCEIS shows strong potential as a payload for unmanned aerial vehicles in applications such as security surveillance and obstacle avoidance. Full article
(This article belongs to the Special Issue Advanced Optical and Optomechanical Sensors)
18 pages, 4933 KB  
Article
6DoF Pose Estimation of Transparent Objects: Dataset and Method
by Yunhe Wang, Ting Wu and Qin Zou
Sensors 2026, 26(3), 898; https://doi.org/10.3390/s26030898 - 29 Jan 2026
Abstract
6DoF pose estimation is one of the key technologies for robotic grasping. Due to the lack of texture, most existing 6DoF pose estimation methods perform poorly on transparent objects. In this work, a hierarchical feature fusion network, HFF6DoF, is proposed for 6DoF pose estimation of transparent objects. In HFF6DoF, appearance and geometry features are extracted from RGB-D images with a dual-branch network, and are hierarchically fused for information aggregation. A decoding module is introduced for semantic segmentation and keypoint vector-field prediction. Based on the results of semantic segmentation and keypoint prediction, 6DoF poses of transparent objects are calculated by using Random Sample Consensus (RANSAC) and Least-Squares Fitting. In addition, a new transparent-object 6DoF pose estimation dataset, TDoF20, is constructed, which consists of 61,886 pairs of RGB and depth images covering 20 types of objects. The experimental results show that the proposed HFF6DoF outperforms state-of-the-art approaches on the TDoF20 dataset by a large margin, achieving an average ADD of 50.5%. Full article
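The final stage described above, recovering a 6DoF pose from predicted keypoints with RANSAC and least-squares fitting, corresponds to a standard PnP step. Below is a minimal sketch using OpenCV's solvePnPRansac on hypothetical 2D-3D keypoint correspondences; the object model, camera intrinsics, and "detections" are all made up (the detections are synthesized by projecting the model with a known pose), and the authors' own solver details are not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical 3D keypoints: corners of a 5 cm cube on the object model (metres).
object_pts = np.array([[0, 0, 0], [0.05, 0, 0], [0, 0.05, 0], [0, 0, 0.05],
                       [0.05, 0.05, 0], [0.05, 0, 0.05], [0, 0.05, 0.05],
                       [0.05, 0.05, 0.05]], dtype=np.float32)

# Assumed pinhole intrinsics; a real pipeline would use the calibrated camera matrix.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)

# Synthesize 2D "keypoint detections" by projecting the model with a known pose.
rvec_gt = np.array([[0.1], [0.2], [0.3]], dtype=np.float32)
tvec_gt = np.array([[0.0], [0.0], [0.5]], dtype=np.float32)
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)
image_pts = image_pts.reshape(-1, 2).astype(np.float32)

# RANSAC-robust PnP recovers the 6DoF pose from the 2D-3D correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None,
                                             iterationsCount=100, reprojectionError=3.0)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the Rodrigues vector
print(ok, tvec.ravel(), len(inliers) if inliers is not None else 0)
```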
18 pages, 52908 KB  
Article
M2UNet: A Segmentation-Guided GAN with Attention-Enhanced U2-Net for Face Unmasking
by Mohamed Mahmoud, Mostafa Farouk Senussi, Mahmoud Abdalla, Mahmoud SalahEldin Kasem and Hyun-Soo Kang
Mathematics 2026, 14(3), 477; https://doi.org/10.3390/math14030477 - 29 Jan 2026
Abstract
Face unmasking is a critical task in image restoration, as masks conceal essential facial features like the mouth, nose, and chin. Current inpainting methods often struggle with structural fidelity when handling large-area occlusions, leading to blurred or inconsistent results. To address this gap, we propose the Masked-to-Unmasked Network (M2UNet), a segmentation-guided generative framework. M2UNet leverages a segmentation-derived mask prior to accurately localize occluded regions and employs a multi-scale, attention-enhanced generator to restore fine-grained facial textures. The framework focuses on producing visually and semantically plausible reconstructions that preserve the structural logic of the face. Evaluated on a synthetic masked-face dataset derived from CelebA, M2UNet achieves state-of-the-art performance with a PSNR of 31.3375 dB and an SSIM of 0.9576. These results significantly outperform recent inpainting methods while maintaining high computational efficiency. Full article
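PSNR and SSIM values of the kind reported for M2UNet can be computed with scikit-image once restored and ground-truth images are in hand. Below is a minimal sketch on synthetic stand-in arrays (it assumes scikit-image >= 0.19 for the channel_axis argument); it illustrates only the metrics, not the paper's pipeline.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
# Stand-in "restoration": the ground truth perturbed by mild additive noise.
restored = np.clip(ground_truth.astype(np.int16) + rng.normal(0, 5, ground_truth.shape),
                   0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
ssim = structural_similarity(ground_truth, restored, data_range=255, channel_axis=-1)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```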
20 pages, 1953 KB  
Article
A Monocular Depth Estimation Method for Autonomous Driving Vehicles Based on Gaussian Neural Radiance Fields
by Ziqin Nie, Zhouxing Zhao, Jieying Pan, Yilong Ren, Haiyang Yu and Liang Xu
Sensors 2026, 26(3), 896; https://doi.org/10.3390/s26030896 - 29 Jan 2026
Abstract
Monocular depth estimation is one of the key tasks in autonomous driving, which derives depth information of the scene from a single image. And it is a fundamental component for vehicle decision-making and perception. However, approaches currently face challenges such as visual artifacts, scale ambiguity and occlusion handling. These limitations lead to suboptimal performance in complex environments, reducing model efficiency and generalization and hindering their broader use in autonomous driving and other applications. To solve these challenges, this paper introduces a Neural Radiance Field (NeRF)-based monocular depth estimation method for autonomous driving. It introduces a Gaussian probability-based ray sampling strategy to effectively solve the problem of massive sampling points in large complex scenes and reduce computational costs. To improve generalization, a lightweight spherical network incorporating a fine-grained adaptive channel attention mechanism is designed to capture detailed pixel-level features. These features are subsequently mapped to 3D spatial sampling locations, resulting in diverse and expressive point representations for improving the generalizability of the NeRF model. Our approach exhibits remarkable performance on the KITTI benchmark, surpassing traditional methods in depth estimation tasks. This work contributes significant technical advancements for practical monocular depth estimation in autonomous driving applications. Full article
17 pages, 1874 KB  
Article
A Large-Kernel and Scale-Aware 2D CNN with Boundary Refinement for Multimodal Ischemic Stroke Lesion Segmentation
by Omar Ibrahim Alirr
Eng 2026, 7(2), 59; https://doi.org/10.3390/eng7020059 - 29 Jan 2026
Abstract
Accurate segmentation of ischemic stroke lesions from multimodal magnetic resonance imaging (MRI) is fundamental for quantitative assessment, treatment planning, and outcome prediction; yet, it remains challenging due to highly heterogeneous lesion morphology, low lesion–background contrast, and substantial variability across scanners and protocols. This work introduces Tri-UNetX-2D, a large-kernel and scale-aware 2D convolutional network with explicit boundary refinement for automated ischemic stroke lesion segmentation from DWI, ADC, and FLAIR MRI. The architecture is built on a compact U-shaped encoder–decoder backbone and integrates three key components: first, a Large-Kernel Inception (LKI) module that employs factorized depthwise separable convolutions and dilation to emulate very large receptive fields, enabling efficient long-range context modeling; second, a Scale-Aware Fusion (SAF) unit that learns adaptive weights to fuse encoder and decoder features, dynamically balancing coarse semantic context and fine structural detail; and third, a Boundary Refinement Head (BRH) that provides explicit contour supervision to sharpen lesion borders and reduce boundary error. Squeeze-and-Excitation (SE) attention is embedded within LKI and decoder stages to recalibrate channel responses and emphasize modality-relevant cues, such as DWI-dominant acute core and FLAIR-dominant subacute changes. On the ISLES 2022 multi-center benchmark, Tri-UNetX-2D improves Dice Similarity Coefficient from 0.78 to 0.86, reduces the 95th-percentile Hausdorff distance from 12.4 mm to 8.3 mm, and increases the lesion-wise F1-score from 0.71 to 0.81 compared with a plain 2D U-Net trained under identical conditions. These results demonstrate that the proposed framework achieves competitive performance with substantially lower complexity than typical 3D or ensemble-based models, highlighting its potential for scalable, clinically deployable stroke lesion segmentation. Full article
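Of the three metrics reported above, the Dice Similarity Coefficient has the simplest closed form, 2|A∩B| / (|A| + |B|) for binary masks. Below is a minimal sketch on hypothetical lesion masks; the lesion-wise F1 and 95th-percentile Hausdorff distance require connected-component and surface-distance machinery that is not shown here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of any shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Hypothetical 2D slice masks: two overlapping discs standing in for lesion masks.
yy, xx = np.mgrid[:128, :128]
pred = (yy - 60) ** 2 + (xx - 60) ** 2 < 20 ** 2
target = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
print(f"Dice = {dice_coefficient(pred, target):.3f}")
```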
23 pages, 4501 KB  
Article
Complexity-Driven Adversarial Validation for Corrupted Medical Imaging Data
by Diego Renza, Jorge Brieva and Ernesto Moya-Albor
Information 2026, 17(2), 125; https://doi.org/10.3390/info17020125 - 29 Jan 2026
Abstract
Distribution shifts commonly arise in real-world machine learning scenarios in which the fundamental assumption that training and test data are drawn from independent and identically distributed samples is violated. In the case of medical data, such distribution shifts often occur during data acquisition and pose a significant challenge to the robustness and reliability of artificial intelligence systems in clinical practice. Additionally, quantifying these shifts without training a model remains a key open problem. This paper proposes a comprehensive methodological framework for evaluating the impact of such shifts on medical image datasets under artificial transformations that simulate acquisition variations, leveraging the Cumulative Spectral Gradient (CSG) score as a measure of multiclass classification complexity induced by distributional changes. Building on prior work, the proposed approach is meaningfully extended to twelve 2D medical imaging benchmarks from the MedMNIST collection, covering both binary and multiclass tasks, as well as grayscale and RGB modalities. We evaluate the metric analyzing its robustness to clinically inspired distribution shifts that are systematically simulated through motion blur, additive noise, brightness and contrast variation, and sharpness variation, each applied at three severity levels. This results in a large-scale benchmark that enables a detailed analysis of how dataset characteristics, transformation types, and distortion severity influence distribution shifts. Thus, the findings show that while the metric remains generally stable under noise and focus distortions, it is highly sensitive to variations in brightness and contrast. On the other hand, the proposed methodology is compared against Cleanlab’s widely used Non-IID score on the RetinaMNIST dataset using a pre-trained ResNet-50 model, including both class-wise analysis and correlation assessment between metrics. Finally, interpretability is incorporated through class activation map analysis on BloodMNIST and its corrupted variants to support and contextualize the quantitative findings. Full article
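The clinically inspired shifts named above (motion blur, additive noise, brightness and contrast variation at graded severities) can be simulated with a few array operations. Below is a rough sketch with assumed severity parameters; the paper's exact corruption settings and the CSG complexity score itself are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def corrupt(image: np.ndarray, kind: str, severity: int) -> np.ndarray:
    """Apply one simulated acquisition shift at severity 1-3 to a float image in [0, 1]."""
    rng = np.random.default_rng(severity)
    if kind == "motion_blur":   # horizontal streaking via a 1D box filter
        return uniform_filter1d(image, size=3 * severity, axis=1)
    if kind == "noise":         # additive Gaussian noise
        return np.clip(image + rng.normal(0, 0.02 * severity, image.shape), 0, 1)
    if kind == "brightness":    # additive offset
        return np.clip(image + 0.1 * severity, 0, 1)
    if kind == "contrast":      # rescale around the mean
        return np.clip((image - image.mean()) * (1 + 0.25 * severity) + image.mean(), 0, 1)
    raise ValueError(kind)

img = np.random.default_rng(0).random((64, 64))
shifted = {k: corrupt(img, k, severity=2)
           for k in ["motion_blur", "noise", "brightness", "contrast"]}
```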
21 pages, 3193 KB  
Article
Osteogenic Potential of 3D Bioprinted Collagen Scaffolds Enriched with Bone Marrow Stromal Cells, BMP-2, and Hydroxyapatite in a Rabbit Calvarial Defect Model
by Diyana Vladova, Yordan Sbirkov, Elena Stoyanova, Tsvetan Chaprazov, Kiril K. Dimitrov, Hristo Hristov, Dimitar Kostov, Petya Veleva, Daniela Stoeva and Victoria Sarafian
J. Funct. Biomater. 2026, 17(2), 68; https://doi.org/10.3390/jfb17020068 - 29 Jan 2026
Abstract
This study investigates the effect of three-dimensional (3D) bioprinted collagen (Col) scaffolds (2% w/v collagen) loaded with autologous bone marrow stromal cells (BMSCs) and enriched with bone morphogenetic protein-2 (BMP-2) and hydroxyapatite-based particles (HAPPs) on bone regeneration in calvarial defects in rabbits. Three implant formulations, Col-(BMP-2) (at a concentration of 80 ng/mL), Col-HAPP (1% w/v) and a mixture of the two—Col-(BMP-2)-HAPP (40 ng/mL final concentration and 0.5% HAPP), were compared with a control group C-Per containing only periosteum to assess the influence of material structure, biochemical signals and cell component on osteogenesis. Histological analysis and quantitative computed tomography (CT) imaging parameters (HU values and residual defect diameter) showed significant differences between the groups, highlighting the role of combined strategies for optimal bone repair. The control group demonstrated the weakest regeneration, expressed by minimal lamellar bone and the largest residual defect. Col-(BMP-2) stimulated moderate osteoinduction with active osteoblasts but without a fully organised lamellar structure. Col-HAPP provided more advanced regeneration, with histologically observed thick osteoid lamellae, early calcification, and structured lamellar architecture, emphasising the osteoconductive role of HAPPs. The strongest regeneration was reported with Col-(BMP-2)-HAPP, where the synergy between BMP-2, HAPPs and BMSCs resulted in formed osteons, well-developed cancellous bone and minimal residual defects. The established negative correlation between bone density and residual calvarial defects emphasises the relationship between mineralisation and the degree of defect filling. The new data presented demonstrate that the combination of the abovementioned structural, biochemical and cellular factors in 3D bioprinted scaffolds offers a promising strategy for osteoregeneration of complex bone defects. Full article
(This article belongs to the Section Bone Biomaterials)
18 pages, 4545 KB  
Article
3D Medical Image Segmentation with 3D Modelling
by Mária Ždímalová, Kristína Boratková, Viliam Sitár, Ľudovít Sebö, Viera Lehotská and Michal Trnka
Bioengineering 2026, 13(2), 160; https://doi.org/10.3390/bioengineering13020160 - 29 Jan 2026
Abstract
Background/Objectives: The segmentation of three-dimensional radiological images constitutes a fundamental task in medical image processing for isolating tumors from complex datasets in computed tomography or magnetic resonance imaging. Precise visualization, volumetry, and treatment monitoring are enabled, which are critical for oncology diagnostics and planning. Volumetric analysis surpasses standard criteria by detecting subtle tumor changes, thereby aiding adaptive therapies. The objective of this study was to develop an enhanced, interactive Graphcut algorithm for 3D DICOM segmentation, specifically designed to improve boundary accuracy and 3D modeling of breast and brain tumors in datasets with heterogeneous tissue intensities. Methods: The standard Graphcut algorithm was augmented with a clustering mechanism (utilizing k = 2–5 clusters) to refine boundary detection in tissues with varying intensities. DICOM datasets were processed into 3D volumes using pixel spacing and slice thickness metadata. User-defined seeds were utilized for tumor and background initialization, constrained by bounding boxes. The method was implemented in Python 3.13 using the PyMaxflow library for graph optimization and pydicom for data transformation. Results: The proposed segmentation method outperformed standard thresholding and region growing techniques, demonstrating reduced noise sensitivity and improved boundary definition. An average Dice Similarity Coefficient (DSC) of 0.92 ± 0.07 was achieved for brain tumors and 0.90 ± 0.05 for breast tumors. These results were found to be comparable to state-of-the-art deep learning benchmarks (typically ranging from 0.84 to 0.95), achieved without the need for extensive pre-training. Boundary edge errors were reduced by a mean of 7.5% through the integration of clustering. Therapeutic changes were quantified accurately (e.g., a reduction from 22,106 mm3 to 14,270 mm3 post-treatment) with an average processing time of 12–15 s per stack. Conclusions: An efficient, precise 3D tumor segmentation tool suitable for diagnostics and planning is presented. This approach is demonstrated to be a robust, data-efficient alternative to deep learning, particularly advantageous in clinical settings where the large annotated datasets required for training neural networks are unavailable. Full article
(This article belongs to the Section Biosignal Processing)
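Since the abstract names PyMaxflow and pydicom as the implementation libraries, the core interactive step (stack the DICOM series into a 3D volume, attach seed-derived unary costs, solve a min-cut over a grid graph) can be sketched directly. This is a simplified, assumption-laden version: the user-scribble seeding, bounding boxes, and k-means boundary refinement described in the Methods are replaced by fixed Gaussian intensity models, and series_dir is a hypothetical path.

```python
import glob
import numpy as np
import pydicom
import maxflow

# Load a DICOM series into a 3D volume (hypothetical directory; sorted by slice position).
series_dir = "path/to/dicom/series"
slices = [pydicom.dcmread(f) for f in glob.glob(f"{series_dir}/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])

# Seed-derived intensity models (in practice estimated from user scribbles / bounding box).
mu_obj, mu_bkg, sigma = 1200.0, 300.0, 150.0
cost_obj = (volume - mu_obj) ** 2 / (2 * sigma ** 2)   # unary cost of the "tumor" label
cost_bkg = (volume - mu_bkg) ** 2 / (2 * sigma ** 2)   # unary cost of the "background" label

# Build the 3D grid graph and solve the min-cut with PyMaxflow.
g = maxflow.Graph[float]()
node_ids = g.add_grid_nodes(volume.shape)
g.add_grid_edges(node_ids, weights=1.0, symmetric=True)  # pairwise smoothness term
g.add_grid_tedges(node_ids, cost_bkg, cost_obj)          # t-links carry the two unary costs
g.maxflow()
segmentation = g.get_grid_segments(node_ids)  # boolean voxel labels given by the cut
```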
20 pages, 6183 KB  
Article
Assessing the Ecological Benefits of Urban Green Spaces Based on 3D Green Quantity: A Case Study of Xi’an, China
by Fengxia Li, Chao Wu, Xiaogang Feng and Meng Li
Sustainability 2026, 18(3), 1331; https://doi.org/10.3390/su18031331 - 28 Jan 2026
Abstract
The ecological benefits of urban green spaces depend on their structure and ecological service function. Evaluation systems used to monitor these characteristics show distinct regional variations. This study analyzed China’s urban green spaces, developed a quantitative ecological benefit evaluation system, and comprehensively evaluated the ecological benefits of green spaces in Xi’an city. Suitable evaluation indexes for Xi’an were selected based on field survey data with large-scale samples and high-resolution remote sensing image data. The results showed that the ecological service function of urban green spaces in Xi’an has been substantially improved by ecological planning. Therefore, it is important to evaluate this function as part of the urban planning and design process. Furthermore, increasing the 3D Green Quantity through urban forests can effectively improve the ecological service function. Full article
20 pages, 9147 KB  
Article
Model Test Study on Group Under-Reamed Anchors Under Cyclic Loading
by Chen Chen, Zhe Liu and Junchao Yang
Buildings 2026, 16(3), 540; https://doi.org/10.3390/buildings16030540 - 28 Jan 2026
Abstract
This study conducted laboratory model tests, integrated with Particle Image Velocimetry (PIV) technology, to investigate the evolution of the uplift bearing capacity of an under-reamed anchor group subjected to cyclic loading. The tests considered various working conditions, including different spacing ratios (S/D = 4, 5, 6, where S was the center-to-center spacing and D was the diameter of the under-reamed body), varying cyclic amplitude ratios (λ = 0.3, 0.5, 0.6, 0.7, 0.8) and different cycle times (M = 1, 5, 10, 30). PIV was utilized to observe the displacement field of the surrounding soil, revealing the group effect of the anchors and the variation in their uplift capacity under diverse cyclic amplitudes and cyclic times. The results indicated that the load–displacement curves could be delineated into three distinct stages: elastic, elastoplastic, and plastic. Notably, the group effect primarily initiated during the elastoplastic stage and developed significantly within the plastic stage. The cyclic amplitude ratio was identified as a key factor influencing the uplift capacity. Furthermore, compared to results from single pull-out tests, both the vertical displacement of the surrounding soil and the shear strength of the sidewall adjacent to the under-reamed body decreased following cyclic loading. Finally, the influence of the cyclic times depended on the occurrence of anchor failure; in the absence of failure, the anchor maintained satisfactory performance even after multiple cycles. Full article
(This article belongs to the Special Issue Advanced Applications of AI-Driven Structural Control)
37 pages, 24380 KB  
Article
Denoising of CT and MRI Images Using Decomposition-Based Curvelet Thresholding and Classical Filtering Techniques
by Mahmoud Nasr, Krzysztof Brzostowski, Rafał Obuchowicz and Adam Piórkowski
Appl. Sci. 2026, 16(3), 1335; https://doi.org/10.3390/app16031335 - 28 Jan 2026
Abstract
Medical image denoising is crucial for enhancing the diagnostic accuracy of CT and MRI images. This paper presents a modular hybrid framework that combines multiscale decomposition techniques (Empirical Mode Decomposition, Variational Mode Decomposition, Bidimensional EMD, and Multivariate EMD) with curvelet transform thresholding and traditional spatial filters. The methodology was assessed using a phantom dataset containing regulated Rician noise, clinical CT images rebuilt with sharp (B50f) and medium (B46f) kernels, and MRI scans obtained at various GRAPPA acceleration factors. In phantom trials, MEMD–Curvelet attained the highest SSIM (0.964) and PSNR (28.35 dB), while preserving commendable perceptual scores (NIQE approximately 7.55, BRISQUE around 38.8). In CT images, VMD–Curvelet and MEMD–Curvelet consistently outperformed classical filters, achieving SSIM values over 0.95 and PSNR values above 28 dB, even with sharp-kernel reconstructions. In MRI datasets, MEMD–Curvelet and BEMD–Curvelet reduced perceptual distortion, decreasing NIQE by up to 15% and BRISQUE by 20% compared to Gaussian and median filtering. Deep learning baselines validated the framework’s competitiveness: BM3D attained high fidelity but necessitated 6.65 s per slice, while DnCNN delivered equivalent SSIM (0.958) with a diminished runtime of 2.33 s. The results indicate that the proposed framework excels at noise reduction and structure preservation across various imaging settings, surpassing independent filtering and transform-only methods. Its versatility and efficiency underscore its potential for therapeutic integration in situations necessitating high-quality denoising under limited acquisition conditions. Full article
19 pages, 1691 KB  
Article
Development of a Framework for Echocardiographic Image Quality Assessment and Its Application in CRT-D/ICD Patients
by Wojciech Nazar, Damian Kaufmann, Elżbieta Wabich, Justyna Rohun and Ludmiła Daniłowicz-Szymanowicz
J. Clin. Med. 2026, 15(3), 1055; https://doi.org/10.3390/jcm15031055 - 28 Jan 2026
Abstract
Background/Objectives: Low image quality reduces diagnostic accuracy. We wanted to develop a framework for assessing transthoracic echocardiography (TTE) image quality in apical 2-, 3-, and 4-chamber views, and to use this framework to characterise segment-level visualisation patterns in patients with heart failure (HF). Methods: In this cross-sectional study, 268 TTE examinations from 230 patients qualified for ICD/CRT implantation in primary prevention of sudden cardiac death were analysed. Patient demographic, electrocardiographic, echocardiographic, and clinical characteristics were collected, and apical 2-, 3-, and 4-chamber views were extracted for image quality evaluation. Mean scores for each segment were calculated. The proportion of well-visualised segments per view was also evaluated. Risk factors for poor image quality were assessed. Results: We internally assessed the reliability of the framework (intra-class correlation coefficient > 0.9). The anterior and anterolateral walls consistently demonstrated the poorest quality, and the inferior segments the best. Clear inner-edge-to-outer-edge delineation of ≥5 segmental borders was achieved in only 30% of studies, while ≥5 endocardial border segments were visualised in 65% of cases. Reduced quality was frequently observed in patients with higher BMI and BSA, presence of HF risk factors (diabetes, prior myocardial infarction, and atrial fibrillation), and heart abnormalities (increased left ventricular end-diastolic value and hypokinesis). Conclusions: The prevalence of imaging challenges in TTE examinations performed in patients qualified for CRT-D/ICD implantation is high. These findings underscore the need for thorough training of echocardiographers and for sustained attention to technical details affecting image quality to achieve consistently high-quality images in routine practice. Full article
(This article belongs to the Section Cardiology)
49 pages, 7642 KB  
Article
Neuro-Geometric Graph Transformers with Differentiable Radiographic Geometry for Spinal X-Ray Image Analysis
by Vuth Kaveevorayan, Rapeepan Pitakaso, Thanatkij Srichok, Natthapong Nanthasamroeng, Chutchai Kaewta and Peerawat Luesak
J. Imaging 2026, 12(2), 59; https://doi.org/10.3390/jimaging12020059 - 28 Jan 2026
Abstract
Radiographic imaging remains a cornerstone of diagnostic practice. However, accurate interpretation faces challenges from subtle visual signatures, anatomical variability, and inter-observer inconsistency. Conventional deep learning approaches, such as convolutional neural networks and vision transformers, deliver strong predictive performance but often lack anatomical grounding and interpretability, limiting their trustworthiness in imaging applications. To address these challenges, we present SpineNeuroSym, a neuro-geometric imaging framework that unifies geometry-aware learning and symbolic reasoning for explainable medical image analysis. The framework integrates weakly supervised keypoint and region-of-interest discovery, a dual-stream graph–transformer backbone, and a Differentiable Radiographic Geometry Module (dRGM) that computes clinically relevant indices (e.g., slip ratio, disc asymmetry, sacroiliac spacing, and curvature measures). A Neuro-Symbolic Constraint Layer (NSCL) enforces monotonic logic in image-derived predictions, while a Counterfactual Geometry Diffusion (CGD) module generates rare imaging phenotypes and provides diagnostic auditing through counterfactual validation. Evaluated on a comprehensive dataset of 1613 spinal radiographs from Sunpasitthiprasong Hospital encompassing six diagnostic categories—spondylolisthesis (n = 496), infection (n = 322), spondyloarthropathy (n = 275), normal cervical (n = 192), normal thoracic (n = 70), and normal lumbar spine (n = 258)—SpineNeuroSym achieved 89.4% classification accuracy, a macro-F1 of 0.872, and an AUROC of 0.941, outperforming eight state-of-the-art imaging baselines. These results highlight how integrating neuro-geometric modeling, symbolic constraints, and counterfactual validation advances explainable, trustworthy, and reproducible medical imaging AI, establishing a pathway toward transparent image analysis systems. Full article
(This article belongs to the Special Issue Advances in Machine Learning for Medical Imaging Applications)
21 pages, 2449 KB  
Article
Few-Shot 6D Object Pose Estimation via Decoupled Rotation and Translation with Viewpoint Encoding
by Lei Lu, Peng Cao, Wei Pan, Zhilong Su, Haojun Zhang, Wangxing Zheng, Ge Gao and Peng Li
Electronics 2026, 15(3), 561; https://doi.org/10.3390/electronics15030561 - 28 Jan 2026
Abstract
Estimating 6D object pose from monocular RGB images remains a critical yet data-intensive challenge in computer vision. In this work, we propose a novel few-shot 6D pose estimation framework that explicitly decouples rotation and translation estimation, significantly reducing dependence on large-scale annotated real-world data. Our method employs a viewpoint encoder trained solely on synthetic data to generate a codebook for rotation retrieval, complemented by an in-plane rotation regression module. For translation, we adopt a geometry-aware regression network based on dense 2D–3D correspondences. Experimental results on LINEMOD, LM-O, and YCB-V datasets demonstrate that our approach achieves state-of-the-art performance (97.6%, 65.3%, and 65.9% ADD(-S), respectively), using only 600 real images per object—cutting real data requirements by 80% compared to typical fully-supervised 6D pose estimation methods. These findings highlight the effectiveness and generalization ability of our method under limited supervision. Full article
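The ADD(-S) figures quoted above average, over the object's model points, the distance between each point under the estimated pose and under the ground-truth pose; a pose is conventionally counted as correct when that average falls below 10% of the model diameter. Below is a minimal NumPy sketch of plain ADD on a hypothetical point cloud (the symmetric-object ADD-S variant, which uses closest-point distances, is not shown).

```python
import numpy as np

def add_metric(R_est, t_est, R_gt, t_gt, model_pts):
    """Average distance between model points under the estimated and ground-truth poses."""
    pts_est = model_pts @ R_est.T + t_est
    pts_gt = model_pts @ R_gt.T + t_gt
    return float(np.linalg.norm(pts_est - pts_gt, axis=1).mean())

# Hypothetical model point cloud (metres) and a pair of poses with a small error.
rng = np.random.default_rng(0)
model_pts = rng.uniform(-0.05, 0.05, size=(500, 3))
R_gt, t_gt = np.eye(3), np.array([0.0, 0.0, 0.5])
theta = np.deg2rad(2.0)  # 2-degree rotation error about z
R_est = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
t_est = t_gt + np.array([0.002, 0.0, 0.001])  # 2 mm / 1 mm translation error

# Bounding-box diagonal used here as a stand-in for the model diameter.
diameter = np.linalg.norm(model_pts.max(0) - model_pts.min(0))
add = add_metric(R_est, t_est, R_gt, t_gt, model_pts)
print(f"ADD = {add * 1000:.2f} mm, correct (<10% of diameter): {add < 0.1 * diameter}")
```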