Search Results (142)

Search Parameters:
Keywords = image sharpness assessment

19 pages, 17084 KB  
Article
SPADE: Superpixel Adjacency Driven Embedding for Three-Class Melanoma Segmentation
by Pablo Ordóñez, Ying Xie, Xinyue Zhang, Chloe Yixin Xie, Santiago Acosta and Issac Guitierrez
Algorithms 2025, 18(9), 551; https://doi.org/10.3390/a18090551 - 2 Sep 2025
Abstract
The accurate segmentation of pigmented skin lesions is a critical prerequisite for reliable melanoma detection, yet approximately 30% of lesions exhibit fuzzy or poorly defined borders. This ambiguity makes the definition of a single contour unreliable and limits the effectiveness of computer-assisted diagnosis (CAD) systems. While clinical assessment based on the ABCDE criteria (asymmetry, border, color, diameter, and evolution), dermoscopic imaging, and scoring systems remains the standard, these methods are inherently subjective and vary with clinician experience. We address this challenge by reframing segmentation into three distinct regions: background, border, and lesion core. These regions are delineated using superpixels generated via the Simple Linear Iterative Clustering (SLIC) algorithm, which provides meaningful structural units for analysis. Our contributions are fourfold: (1) redefining lesion borders as regions, rather than sharp lines; (2) generating superpixel-level embeddings with a transformer-based autoencoder; (3) incorporating these embeddings as features for superpixel classification; and (4) integrating neighborhood information to construct enhanced feature vectors. Unlike pixel-level algorithms that often overlook boundary context, our pipeline fuses global class information with local spatial relationships, significantly improving precision and recall in challenging border regions. An evaluation on the HAM10000 melanoma dataset demonstrates that our superpixel–RAG (region adjacency graph)–transformer pipeline achieves exceptional performance (100% F1 score, accuracy, and precision) in classifying background, border, and lesion core superpixels. By transforming raw dermoscopic images into region-based structured representations, the proposed method generates more informative inputs for downstream deep learning models. 
This strategy not only advances melanoma analysis but also provides a generalizable framework for other medical image segmentation and classification tasks. Full article
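The neighborhood-integration step described above (contribution 4) can be sketched in a few lines of numpy: build a region adjacency graph from a superpixel label map, then concatenate each superpixel's feature vector with the mean of its neighbors' vectors. This is only an illustrative sketch under my own simplifying assumptions (4-connectivity, one-hot toy features), not the paper's SLIC + transformer-embedding pipeline.

```python
import numpy as np

def region_adjacency(labels):
    """Collect pairs of distinct superpixel labels that touch
    horizontally or vertically in the label image (4-connectivity)."""
    pairs = set()
    h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    for a, b in np.vstack([h, v]):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    return pairs

def neighborhood_features(features, pairs):
    """Concatenate each region's feature vector with the mean of its
    adjacent regions' vectors (zeros if the region is isolated)."""
    n, d = features.shape
    neigh = [[] for _ in range(n)]
    for a, b in pairs:
        neigh[a].append(b)
        neigh[b].append(a)
    out = np.zeros((n, 2 * d))
    for i in range(n):
        out[i, :d] = features[i]
        if neigh[i]:
            out[i, d:] = features[neigh[i]].mean(axis=0)
    return out

# Toy 2x2 block label map: regions 0,1 on top, 2,3 below.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
pairs = region_adjacency(labels)
feats = np.eye(4)                      # one-hot feature per region
enhanced = neighborhood_features(feats, pairs)
```

In a real pipeline the one-hot rows would be replaced by learned superpixel embeddings; the adjacency and aggregation logic stays the same.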

37 pages, 3806 KB  
Article
Comparative Evaluation of CNN and Transformer Architectures for Flowering Phase Classification of Tilia cordata Mill. with Automated Image Quality Filtering
by Bogdan Arct, Bartosz Świderski, Monika A. Różańska, Bogdan H. Chojnicki, Tomasz Wojciechowski, Gniewko Niedbała, Michał Kruk, Krzysztof Bobran and Jarosław Kurek
Sensors 2025, 25(17), 5326; https://doi.org/10.3390/s25175326 - 27 Aug 2025
Abstract
Understanding and monitoring the phenological phases of trees is essential for ecological research and climate change studies. In this work, we present a comprehensive evaluation of state-of-the-art convolutional neural networks (CNNs) and transformer architectures for the automated classification of the flowering phase of Tilia cordata Mill. (small-leaved lime) based on a large set of real-world images acquired under natural field conditions. The study introduces a novel, automated image quality filtering approach using an XGBoost classifier trained on diverse exposure and sharpness features to ensure robust input data for subsequent deep learning models. Seven modern neural network architectures, including VGG16, ResNet50, EfficientNetB3, MobileNetV3 Large, ConvNeXt Tiny, Vision Transformer (ViT-B/16), and Swin Transformer Tiny, were fine-tuned and evaluated under a rigorous cross-validation protocol. All models achieved excellent performance, with cross-validated F1-scores exceeding 0.97 and balanced accuracy up to 0.993. The best results were obtained for ResNet50 and ConvNeXt Tiny (F1-score: 0.9879 ± 0.0077 and 0.9860 ± 0.0073, balanced accuracy: 0.9922 ± 0.0054 and 0.9927 ± 0.0042, respectively), indicating outstanding sensitivity and specificity for both flowering and non-flowering classes. Classical CNNs (VGG16, ResNet50, and ConvNeXt Tiny) demonstrated slightly superior robustness compared to transformer-based models, though all architectures maintained high generalization and minimal variance across folds. The integrated quality assessment and classification pipeline enables scalable, high-throughput monitoring of flowering phases in natural environments. The proposed methodology is adaptable to other plant species and locations, supporting future ecological monitoring and climate studies. 
Our key contributions are as follows: (i) introducing an automated exposure-quality filtering stage for field imagery; (ii) publishing a curated, season-long dataset of Tilia cordata images; and (iii) providing the first systematic cross-validated benchmark that contrasts classical CNNs with transformer architectures for phenological phase recognition. Full article
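The automated quality-filtering stage rests on "exposure and sharpness features" fed to an XGBoost classifier. A minimal numpy sketch of the kind of features such a filter might use is below; the exact feature set (clipping fractions, Laplacian variance) is my assumption, not the paper's published list.

```python
import numpy as np

def quality_features(img):
    """Simple exposure and sharpness descriptors for a grayscale
    image in [0, 255] (a hypothetical subset of a filter's inputs)."""
    img = img.astype(float)
    # Exposure: mean level plus fractions of clipped dark/bright pixels.
    mean_level = img.mean() / 255.0
    frac_dark = (img <= 5).mean()
    frac_bright = (img >= 250).mean()
    # Sharpness: variance of a discrete 5-point Laplacian response.
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    sharpness = lap.var()
    return np.array([mean_level, frac_dark, frac_bright, sharpness])

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64))   # high-frequency content
blurry = np.full((64, 64), 128)          # flat image: no detail at all
f_sharp, f_blur = quality_features(sharp), quality_features(blurry)
```

These per-image vectors would then be the training rows for the gradient-boosted filter, with "usable / unusable" labels.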
(This article belongs to the Special Issue Application of UAV and Sensing in Precision Agriculture)

24 pages, 625 KB  
Article
Quantitative Ultrasound-Based Precision Diagnosis of Papillary, Follicular, and Medullary Thyroid Carcinomas Using Morphological, Structural, and Textural Features
by Hanna Piotrzkowska Wróblewska, Piotr Karwat, Agnieszka Żyłka, Katarzyna Dobruch Sobczak, Marek Dedecjus and Jerzy Litniewski
Cancers 2025, 17(17), 2761; https://doi.org/10.3390/cancers17172761 - 24 Aug 2025
Abstract
Background/Objectives: Thyroid cancer encompasses distinct histological subtypes with varying biological behavior and treatment implications. Accurate preoperative subtype differentiation remains challenging. Although ultrasound (US) is widely used for thyroid nodule evaluation, qualitative assessment alone is often insufficient to distinguish between papillary (PTC), follicular (FTC), and medullary thyroid carcinoma (MTC). Methods: A retrospective analysis was performed on patients with histologically confirmed PTC, FTC, or MTC. A total of 224 standardized B-mode ultrasound images were analyzed. A set of fully quantitative features was extracted, including morphological characteristics (aspect ratio and perimeter-to-area ratio), internal echotexture (echogenicity and local entropy), boundary sharpness (gradient measures and KL divergence), and structural components (calcifications and cystic areas). Feature extraction was conducted using semi-automatic algorithms implemented in MATLAB. Statistical differences were assessed using the Kruskal–Wallis and Dunn–Šidák tests. A Random Forest classifier was trained and evaluated to determine the discriminatory performance of individual and combined features. Results: Significant differences (p < 0.05) were found among subtypes for key features such as perimeter-to-area ratio, normalized echogenicity, and calcification pattern. The full-feature Random Forest model achieved an overall classification accuracy of 89.3%, with F1-scores of 93.4% for PTC, 85.7% for MTC, and 69.1% for FTC. A reduced model using the top 10 features yielded an even higher accuracy of 91.8%, confirming the robustness and clinical relevance of the selected parameters. Conclusions: Subtype classification of thyroid cancer was effectively performed using quantitative ultrasound features and machine learning. 
The results suggest that biologically interpretable image-derived metrics may assist in preoperative decision-making and potentially reduce the reliance on invasive diagnostic procedures. Full article
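Two of the morphological features named above, the perimeter-to-area ratio and an aspect ratio, can be computed directly from a binary lesion mask. The sketch below uses a 4-connectivity perimeter and a bounding-box aspect ratio; these are my simplified definitions, and the study's MATLAB implementation may differ in detail.

```python
import numpy as np

def mask_features(mask):
    """Perimeter-to-area ratio and bounding-box aspect ratio of a
    binary lesion mask (4-connectivity perimeter; a simplification)."""
    mask = mask.astype(bool)
    area = mask.sum()
    # Count exposed edges: each lesion pixel contributes one unit of
    # perimeter per background (or out-of-image) 4-neighbour.
    padded = np.pad(mask, 1)
    exposed = 0
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
        exposed += (padded & ~shifted).sum()
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return exposed / area, height / width

square = np.zeros((10, 10), bool)
square[2:6, 2:6] = True          # 4x4 square: area 16, perimeter 16
p2a, aspect = mask_features(square)
```

Irregular, spiculated contours raise the perimeter-to-area ratio relative to compact nodules, which is why such ratios discriminate between subtypes.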
(This article belongs to the Special Issue Thyroid Cancer: New Advances from Diagnosis to Therapy: 2nd Edition)

13 pages, 1996 KB  
Article
Deep Learning-Enhanced T1-Weighted Imaging for Breast MRI at 1.5T
by Susann-Cathrin Olthof, Marcel Dominik Nickel, Elisabeth Weiland, Daniel Leyhr, Saif Afat, Konstantin Nikolaou and Heike Preibsch
Diagnostics 2025, 15(13), 1681; https://doi.org/10.3390/diagnostics15131681 - 1 Jul 2025
Abstract
Background/Objectives: Assessment of a novel deep-learning (DL)-based T1w volumetric interpolated breath-hold (VIBE-DL) sequence in breast MRI in comparison with standard VIBE (VIBE-Std) for image quality evaluation. Methods: Prospective study of 52 breast cancer patients examined with 1.5T breast MRI using T1w VIBE-Std and T1w VIBE-DL sequences. T1w VIBE-DL was integrated as an additional early non-contrast and a delayed post-contrast scan. Two radiologists independently scored the T1w VIBE-Std/DL sequences both pre- and post-contrast, as well as their calculated subtractions (SUBs), for image quality, sharpness, (motion) artifacts, perceived signal-to-noise ratio, and diagnostic confidence on a Likert scale from 1 (non-diagnostic) to 5 (excellent). Lesion diameter was evaluated on the SUB for T1w VIBE-Std/DL. All lesions were visually evaluated in T1w VIBE-Std/DL pre- and post-contrast and their subtractions. Statistics included correlation analyses and paired t-tests. Results: Significantly higher Likert scores were found for pre-contrast T1w VIBE-DL compared with T1w VIBE-Std for image quality (p < 0.001), image sharpness (p < 0.001), SNR (p < 0.001), and diagnostic confidence (p < 0.010). Significantly higher values for image quality (p < 0.001 in each case), image sharpness (p < 0.001), SNR (p < 0.001), and artifacts (p < 0.001) were also detected in the post-contrast T1w VIBE-DL and in the SUB. SUB-DL provided superior diagnostic certainty compared with SUB-Std for one reader (p = 0.004; p = 0.083 for the other). Conclusions: Deep learning-enhanced T1w VIBE-DL at 1.5T breast MRI offers superior image quality compared to T1w VIBE-Std. Full article
(This article belongs to the Special Issue Advances in Diagnosis and Prognosis of Breast Cancer)

11 pages, 3678 KB  
Article
Plug-and-Play Self-Supervised Denoising for Pulmonary Perfusion MRI
by Changyu Sun, Yu Wang, Cody Thornburgh, Ai-Ling Lin, Kun Qing, John P. Mugler and Talissa A. Altes
Bioengineering 2025, 12(7), 724; https://doi.org/10.3390/bioengineering12070724 - 1 Jul 2025
Abstract
Pulmonary dynamic contrast-enhanced (DCE) MRI is clinically useful for assessing pulmonary perfusion, but its signal-to-noise ratio (SNR) is limited. A self-supervised learning network-based plug-and-play (PnP) denoising model was developed to improve the image quality of pulmonary perfusion MRI. A dataset of patients with suspected pulmonary diseases was used. Asymmetric pixel-shuffle downsampling blind-spot network (AP-BSN) training inputs were two-dimensional background-subtracted perfusion images without clean ground truth. The AP-BSN is incorporated into a PnP model (PnP-BSN) for balancing noise control and image fidelity. Model performance was evaluated by SNR, sharpness, and overall image quality from two radiologists. The fractal dimension and k-means segmentation of the pulmonary perfusion images were calculated for comparing denoising performance. The model was trained on 29 patients and tested on 8 patients. The performance of PnP-BSN was compared to denoising convolutional neural network (DnCNN) and a Gaussian filter. PnP-BSN showed the highest reader scores in terms of SNR, sharpness, and overall image quality as scored by two radiologists. The expert scoring results for DnCNN, Gaussian, and PnP-BSN were 2.25 ± 0.65, 2.44 ± 0.73, and 3.56 ± 0.73 for SNR; 2.62 ± 0.52, 2.62 ± 0.52, and 3.38 ± 0.64 for sharpness; and 2.16 ± 0.33, 2.34 ± 0.42, and 3.53 ± 0.51 for overall image quality (p < 0.05 for all). PnP-BSN outperformed DnCNN and a Gaussian filter for denoising pulmonary perfusion MRI, which led to improved quantitative fractal analysis. Full article
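The plug-and-play structure described above alternates a data-fidelity step with an off-the-shelf denoiser. The sketch below is a minimal half-quadratic-splitting PnP loop in numpy, with a 3x3 box filter standing in for the paper's learned AP-BSN denoiser; the denoiser choice, step weight, and iteration count are all illustrative assumptions.

```python
import numpy as np

def box_denoise(x):
    """Cheap 3x3 box-filter denoiser standing in for a learned
    blind-spot network in this sketch."""
    p = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / 9.0

def pnp_denoise(y, rho=0.5, iters=20):
    """Half-quadratic-splitting PnP: alternate a proximal step on the
    L2 data term ||x - y||^2 with the plug-in denoiser as the prior."""
    x = y.copy()
    for _ in range(iters):
        z = box_denoise(x)                 # denoiser acts as implicit prior
        x = (y + rho * z) / (1.0 + rho)    # closed-form data-fit prox
    return x

rng = np.random.default_rng(1)
clean = np.outer(np.hanning(32), np.hanning(32))   # smooth test image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = pnp_denoise(noisy)
err_noisy = np.mean((noisy - clean) ** 2)
err_restored = np.mean((restored - clean) ** 2)
```

Swapping `box_denoise` for a trained network is the whole point of the PnP formulation: the data-fit step is unchanged regardless of the prior.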

13 pages, 3883 KB  
Article
Optimizing Imaging Parameters for Assessment of Hepatocellular Carcinoma Using Photon-Counting Detector Computed Tomography—Impact of Reconstruction Kernel and Slice Thickness
by Anna Szelenyi, Philipp Stelzer, Christian Wassipaul, Jakob Kittinger, Andreas Strassl, Victor Schmidbauer, Martin Luther Watzenböck, Florian Lindenlaub, Michael Arnoldner, Michael Weber, Matthias Pinter, Ruxandra-Iulia Milos and Dietmar Tamandl
Tomography 2025, 11(7), 77; https://doi.org/10.3390/tomography11070077 - 27 Jun 2025
Abstract
Background: The use of photon-counting detector computed tomography (PCD-CT) has improved image quality in cardiac, pulmonary, and musculoskeletal imaging. Abdominal imaging research, especially about the use of PCD-CT in hepatocellular carcinoma (HCC), is sparse. Objectives: We aimed to compare the image quality of tumors, the liver parenchyma, and the vasculature in patients with HCC using PCD-CT reconstructions at different slice thicknesses and kernels to identify the most appropriate settings for the clinical routine. Methods: CT exams from twenty adult patients with HCC, performed with a clinically approved, first-generation PCD-CT scanner (Naeotom Alpha®, Siemens Healthineers), were retrospectively reviewed. For each patient, images were reconstructed with four kernels of increasing sharpness designed for abdominal imaging (Br40; Br44; Br48; Br56) and at three slice thicknesses (0.4 mm; 1 mm; 3 mm). The reconstruction with the Br40 kernel at 3 mm (Br40/3 mm) was used as the clinical reference. Three readers independently assessed the image quality of different anatomical abdominal structures and hypervascular HCC lesions using a five-point Likert scale. In addition, image sharpness was assessed using line-density profiles. Results: Compared with the clinical reference, the Br44/1 mm and Br48/1 mm reconstructions were rated superior for the assessment of the hepatic vasculature (median difference +0.67 [+0.33 to +1.33], p < 0.001 and +1.00 [+0.67 to +1.67], p < 0.001). The Br40/1 mm (+0.33 [−0.67 to +1.00], p < 0.001) and Br44/3 mm (+0.0 [0.0 to +1.00], p = 0.030) reconstructions were scored superior for overall image quality. Noise increased continuously with sharper kernels and thinner slices than Br40/3 mm (p < 0.001), leading to a decrease in contrast-to-noise ratio. Although the slope analysis showed a trend toward increased image sharpness with sharper kernels, this was not significantly different from the reference standard.
Conclusion: The Br40/1 mm PCD-CT reconstruction was the most suitable setting for overall image quality, while reconstructions with sharper kernels (Br44/1 mm and Br48/1 mm) can be considered for the assessment of the hepatic vasculature in patients with HCC. Full article
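Sharpness from a line-density profile, as used above, is commonly reduced to an edge rise distance: the span over which the profile climbs from 10% to 90% of its range. This is a generic numpy sketch of that measurement, not the study's exact slope analysis.

```python
import numpy as np

def edge_rise_distance(profile, lo=0.1, hi=0.9):
    """10-90% rise distance (in samples) of a line-density profile
    across an edge; a smaller value means a sharper edge."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x = np.arange(p.size)
    # Interpolate where the normalised profile crosses the thresholds
    # (the profile is assumed monotonic across the edge).
    x_lo = np.interp(lo, p, x)
    x_hi = np.interp(hi, p, x)
    return x_hi - x_lo

# Two synthetic edge profiles: a steep and a shallow sigmoid.
sharp = 1 / (1 + np.exp(-2.0 * (np.arange(40) - 20)))
soft = 1 / (1 + np.exp(-0.3 * (np.arange(40) - 20)))
```

Comparing rise distances across kernels gives a scalar, reader-independent counterpart to the Likert sharpness scores.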

26 pages, 5990 KB  
Article
Efficient Image Processing Technique for Detecting Spatio-Temporal Erosion in Boron Nitride Exposed to Iodine Plasma
by Ahmed S. Afifi, Janith Weerasinghe, Karthika Prasad, Igor Levchenko and Katia Alexander
Nanomaterials 2025, 15(13), 961; https://doi.org/10.3390/nano15130961 - 21 Jun 2025
Abstract
Erosion detection in materials exposed to plasma-generated species, such as those used for space propulsion systems, is critical for ensuring their reliability and longevity. This study introduces an efficient image processing technique to monitor the evolution of the erosion depth in boron nitride (BN) subjected to multiple cycles of iodine plasma exposure. Utilising atomic force microscopy (AFM) images from both untreated and treated BN samples, the technique uses a modified semi-automated image registration method that accurately aligns surface profiles—even after substantial erosion—and overcomes challenges related to changes in the eroded surface features. The registered images are then processed through frequency-domain subtraction to visualise and quantify erosion depth. Our technique tracks changes across the BN surface at multiple spatial locations and generates erosion maps at exposure durations of 24, 48, 72 and 84 min using both one-stage and multi-stage registration methods. These maps not only reveal localised material loss (up to 5.5 μm after 84 min) and assess its uniformity but also indicate potential re-deposition of etched material and redistribution across the surface through mechanisms such as diffusion. By analysing areas with higher elevations and observing plasma-treated samples over time, we notice that these elevated regions—initially the most affected—gradually decrease in size and height, while the overall erosion depth increases. Progressive surface smoothing is observed with increasing iodine plasma exposure, as quantified by AFM-based erosion mapping. Notably, up to 89.3% of surface heights were concentrated near the mean after 72–84 min of plasma treatment, indicating a more even distribution of surface features compared to the untreated surface. Iodine plasma exposure was also compared with argon plasma exposure to distinguish the material loss mechanisms of the two treatments. 
Iodine plasma causes more aggressive and spatially selective erosion, strongly influenced by initial surface morphology, whereas argon plasma results in milder and more uniform surface changes. Additional scale-dependent slope and curvature analyses confirm that iodine rapidly smooths fine features, whereas argon better preserves surface sharpness over time. Tracking such sharpness is critical for maintaining the fine structures essential to the fabrication of modern semiconductor components. Overall, this image processing tool offers a powerful and adaptable method for accurately assessing surface degradation and morphological changes in materials used in plasma-facing and space propulsion environments. Full article
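The register-then-subtract idea above can be illustrated with phase correlation, a standard frequency-domain registration method for pure translations: the normalised cross-power spectrum of the two images peaks at their relative shift. This numpy sketch handles integer cyclic shifts only; the paper's semi-automated, multi-stage registration of real AFM scans is considerably more involved.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return (dy, dx) such that b == np.roll(a, (dy, dx), axis=(0, 1))
    for a pure cyclic translation, via the cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = B * np.conj(A)
    cross /= np.abs(cross) + 1e-12      # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
before = rng.standard_normal((64, 64))            # stand-in height map
after = np.roll(before, (3, -5), axis=(0, 1))     # simulated stage drift
dy, dx = phase_correlation_shift(before, after)
registered = np.roll(after, (-dy, -dx), axis=(0, 1))
erosion_map = before - registered   # would hold depth change for AFM data
```

With real scans the difference map holds the local erosion depth rather than zeros, and sub-pixel refinement of the correlation peak is usually needed.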
(This article belongs to the Section Nanoelectronics, Nanosensors and Devices)

29 pages, 7409 KB  
Article
Quality Assessment of High-Speed Motion Blur Images for Mobile Automated Tunnel Inspection
by Chulhee Lee, Donggyou Kim and Dongku Kim
Sensors 2025, 25(12), 3804; https://doi.org/10.3390/s25123804 - 18 Jun 2025
Abstract
This study quantitatively evaluates the impact of motion blur—caused by high-speed movement—on image quality in a mobile tunnel scanning system (MTSS). To simulate movement at speeds of up to 70 km/h, a high-speed translational motion panel was developed. Images were captured under conditions compliant with the ISO 12233 international standard, and image quality was assessed using two metrics: blurred edge width (BEW) and the spatial frequency response at 50% contrast (MTF50). Experiments were conducted under varying shutter speeds, lighting conditions (15,000 lx and 40,000 lx), and motion speeds. The results demonstrated that increased motion speed increased BEW and decreased MTF50, indicating greater blur intensity and reduced image sharpness. Two-way analysis of variance and t-tests confirmed that shutter speed and motion speed significantly affected image quality. Although higher illumination levels partially improved image quality, they also occasionally led to reduced sharpness. Field validation using MTSS in actual tunnel environments demonstrated that BEW and MTF50 effectively captured blur variations by scanning direction. This study proposes BEW and MTF50 as reliable indicators for quantitatively evaluating motion blur in tunnel inspection imagery and suggests their potential to optimize MTSS operation and improve the accuracy of automated defect detection. Full article
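MTF50 is conventionally derived from an edge: differentiate the edge-spread function (ESF) to get the line-spread function, Fourier-transform it, and find where the modulation drops to 50%; BEW is the analogous spatial-domain width of the blurred edge. The sketch below follows that generic recipe on synthetic 1-D edges; windowing details and units are my assumptions, not the ISO 12233 slanted-edge procedure in full.

```python
import numpy as np

def mtf50(esf, dx=1.0):
    """Spatial frequency (cycles/sample) at which the MTF derived from
    an edge-spread function falls to 50% of its zero-frequency value."""
    lsf = np.gradient(esf)                  # line-spread function
    lsf = lsf * np.hanning(lsf.size)        # taper to limit leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # normalise DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    idx = np.argmax(mtf < 0.5)              # first drop below 50%
    return freqs[idx]

# Synthetic sharp and motion-blurred edges (sigmoid ESFs).
x = np.arange(128) - 64
sharp_edge = 1 / (1 + np.exp(-1.5 * x))
blurred_edge = 1 / (1 + np.exp(-0.25 * x))
```

A faster-moving camera widens the ESF, which lowers MTF50 exactly as the study reports.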
(This article belongs to the Section Intelligent Sensors)

20 pages, 3795 KB  
Article
Calcium Determination by Complexometric Titration with Calcein Indicator Using Webcam for Endpoint Detection
by Alexander Shyichuk, Dorota Ziółkowska, Jan Lamkiewicz and Maria Kowalska
Water 2025, 17(12), 1757; https://doi.org/10.3390/w17121757 - 11 Jun 2025
Abstract
Precise and convenient analytical methods are needed for the quantitative determination of calcium in water and food. Complexometric titration remains a reliable technique to determine calcium in milligram amounts. Titrations were performed automatically by detecting color transitions with a webcam. The classical complexometric indicator calcein provided a sharp color transition. In diffuse reflection mode, the color appearance parameter (Hue) provides better precision and is more resistant to ambient light fluctuations than the RGB primaries. In fluorescence mode with LED illumination, the fluorescence brightness of calcein is independent of ambient light, and the primary green color provides the sharpest endpoints. The color change during titration is better in the upper part of the acquired images due to the inner filter effect in calcein solutions. Automatic titration with a digital burette provides a standard deviation as low as 0.1 μmol. An example application is the determination of calcium in commercial mineral waters. Based on the AGREE and ComplexMoGAPI rating scales, the semi-automatic titration showed a better environmental assessment than the standard ASA method. Full article
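Hue-based endpoint detection reduces to two steps: convert each webcam frame's mean RGB to a hue angle, then locate the titrant volume where hue changes fastest. The sketch below uses the standard RGB-to-hue formula and a simulated green-to-reddish transition; the volumes, colors, and steepest-derivative criterion are illustrative assumptions, not the paper's calibrated setup.

```python
import numpy as np

def hue_deg(rgb):
    """Hue angle in degrees from an (R, G, B) triple in [0, 1]."""
    r, g, b = rgb
    mx, mn = max(rgb), min(rgb)
    if mx == mn:
        return 0.0                      # achromatic: hue undefined, use 0
    if mx == r:
        h = (g - b) / (mx - mn) % 6
    elif mx == g:
        h = (b - r) / (mx - mn) + 2
    else:
        h = (r - g) / (mx - mn) + 4
    return 60.0 * h

def endpoint_volume(volumes, hues):
    """Titration endpoint: volume of the steepest hue change."""
    dh = np.abs(np.diff(hues) / np.diff(volumes))
    i = np.argmax(dh)
    return 0.5 * (volumes[i] + volumes[i + 1])

# Simulated titration: greenish fluorescence collapsing near 12.45 mL.
volumes = np.linspace(10.0, 15.0, 51)
hues = np.array([hue_deg((0.2, 0.8, 0.2)) if v < 12.45
                 else hue_deg((0.8, 0.3, 0.5)) for v in volumes])
ep = endpoint_volume(volumes, hues)
```

Because hue is an angle rather than an intensity, a uniform change in illumination leaves it largely unchanged, which is the robustness the abstract reports.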
(This article belongs to the Section Water Quality and Contamination)

18 pages, 7506 KB  
Article
Image Visual Quality: Sharpness Evaluation in the Logarithmic Image Processing Framework
by Arnaud Pauwelyn, Maxime Carré, Michel Jourlin, Dominique Ginhac and Fabrice Meriaudeau
Big Data Cogn. Comput. 2025, 9(6), 154; https://doi.org/10.3390/bdcc9060154 - 9 Jun 2025
Abstract
In image processing, the acquisition step plays a fundamental role because it determines image quality. The present paper focuses on the issue of blur and suggests ways of assessing contrast. The logic of this work consists in evaluating the sharpness of an image by means of objective measures based on mathematical, physical, and optical justifications in connection with the human visual system. This is why the Logarithmic Image Processing (LIP) framework was chosen. The sharpness of an image is usually assessed near objects' boundaries, which encourages the use of gradients, with some major drawbacks. Within the LIP framework, it is possible to overcome such problems using a “contour detector” tool based on the notion of Logarithmic Additive Contrast (LAC). Considering a sequence of increasingly blurred images, we show that the use of LAC enables the images to be re-classified in accordance with their defocus level, demonstrating the relevance of the method. The proposed algorithm has been shown to outperform five conventional methods for assessing image sharpness. Moreover, it is the only method that is insensitive to brightness variations. Finally, various application examples are presented, such as automatic autofocus control or the comparison of two blur removal algorithms applied to the same image, which particularly concerns the field of Super Resolution (SR) algorithms. Such algorithms multiply (×2, ×3, ×4) the resolution of an image using powerful tools (deep learning, neural networks) while correcting the potential defects (blur, noise) that could be generated by the resolution extension itself. We conclude with the prospects for this work, which should be part of a broader approach to estimating image quality, including sharpness and perceived contrast. Full article
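For readers unfamiliar with the LIP framework, its core gray-tone arithmetic is compact enough to state in code. Below are the standard LIP addition and subtraction for gray tones in [0, M), plus a pairwise contrast taken as the LIP difference of the larger and smaller values; the latter is a simplified stand-in for the paper's LAC, whose exact definition I am not reproducing here.

```python
import numpy as np

M = 256.0   # gray-tone range of the LIP model

def lip_add(f, g):
    """LIP addition: models the superposition of two absorbing media."""
    return f + g - f * g / M

def lip_sub(f, g):
    """LIP subtraction f (-) g, the inverse of lip_add in g (g < M)."""
    return M * (f - g) / (M - g)

def lip_contrast(a, b):
    """Contrast of a pixel pair as the LIP difference of the larger
    and smaller gray tones (a simplified stand-in for the LAC)."""
    hi, lo = max(a, b), min(a, b)
    return M * (hi - lo) / (M - lo)

# Equal gray-level differences yield larger LIP contrast at higher
# gray tones, unlike a plain gradient magnitude.
c_high = lip_contrast(210.0, 200.0)
c_low = lip_contrast(60.0, 50.0)
```

Applying such a contrast to neighbouring pixels along an edge gives the contour response that the sharpness measure is then built on.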

11 pages, 1374 KB  
Article
A Preemptive Scan Speed Control Strategy Based on Topographic Data for Optimized Atomic Force Microscopy Imaging
by Thi Thu Nguyen, Oyoo Michael Juma, Luke Oduor Otieno, Thi Ngoc Nguyen and Yong Joong Lee
Actuators 2025, 14(6), 262; https://doi.org/10.3390/act14060262 - 26 May 2025
Abstract
Rapid advancement in the nanotechnology and semiconductor industries has driven the demand for fast, precise measurement systems. Atomic force microscopy (AFM) is a standout metrology technique due to its high precision and wide applicability. However, when operated at high speeds, the quality of AFM images often deteriorates, especially in areas where sharp topographic features are present. This occurs because the feedback speed of the Z-scanner cannot keep up with the sample height changes during raster scanning. This study presents a simple variable scan speed control strategy for improving AFM imaging speed while maintaining the image quality obtained at low scan speeds. The proposed strategy leverages the similarity in the height profiles between successive scan lines. The topographic information collected from the previous line scan is used to assess the surface complexity and to adjust the scan speed for the following line scan. The AFM system with this variable speed control algorithm was found to reduce the scan time needed for one AFM image by over 50% compared to fixed-speed scanning while maintaining a similar level of accuracy. The calculated mean square errors (MSEs) show that the combination of speed adjustments and preemptive surface topography prediction successfully suppresses potential oscillations during the speed adjustment process, thereby enhancing the stability of the adaptive AFM system. Full article
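The core idea, slowing down where the previous scan line showed steep topography, can be sketched as a per-pixel speed schedule derived from the previous line's gradient. The mapping function, gain, and speed limits below are my own illustrative choices, not the paper's controller.

```python
import numpy as np

def speeds_from_previous_line(height_line, v_min=0.2, v_max=1.0, k=5.0):
    """Per-pixel scan speed for the next line, slowed where the
    previous line's topography changes rapidly (heuristic sketch)."""
    slope = np.abs(np.gradient(height_line))
    # Map slope to speed: flat regions -> v_max, steep regions slower.
    v = v_max / (1.0 + k * slope)
    return np.clip(v, v_min, v_max)

x = np.linspace(0, 1, 200)
line = np.where(x > 0.5, 1.0, 0.0)       # one sharp step feature
v = speeds_from_previous_line(line)
```

Because consecutive raster lines are highly correlated, the schedule computed from line n is usually a good preemptive guess for line n+1, which is what allows the Z-feedback to keep up without a global slowdown.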
(This article belongs to the Section Precision Actuators)

12 pages, 6563 KB  
Article
Assessing Image Quality in Multiplexed Sensitivity-Encoding Diffusion-Weighted Imaging with Deep Learning-Based Reconstruction in Bladder MRI
by Seung Ha Cha, Yeo Eun Han, Na Yeon Han, Min Ju Kim, Beom Jin Park, Ki Choon Sim, Deuk Jae Sung, Seulki Yoo, Patricia Lan and Arnaud Guidon
Diagnostics 2025, 15(5), 595; https://doi.org/10.3390/diagnostics15050595 - 28 Feb 2025
Abstract
Background/Objectives: This study compared the image quality of conventional multiplexed sensitivity-encoding diffusion-weighted imaging (MUSE-DWI) with that of MUSE-DWI reconstructed using a vendor-specific deep learning (DL) algorithm (DL MUSE-DWI) in bladder MRI. Methods: This retrospective study included 57 patients with a visible bladder mass. DWI images were reconstructed using a vendor-provided DL algorithm (AIR™ Recon DL; GE Healthcare)—a CNN-based algorithm that reduces noise and enhances image quality—applied here as a prototype for MUSE-DWI. Two radiologists independently assessed qualitative features using a 4-point scale. For the quantitative analysis, the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), signal intensity ratio (SIR), and apparent diffusion coefficient (ADC) of the bladder lesions were recorded by two radiologists. The weighted kappa test and intraclass correlation were used to evaluate interobserver agreement in the qualitative and quantitative analyses, respectively. The Wilcoxon signed-rank test was used to compare the image quality of the two sequences. Results: DL MUSE-DWI demonstrated significantly improved qualitative image quality, with superior sharpness and lesion conspicuity. There were no significant differences in distortion or artifacts. The qualitative analysis of the images by the two radiologists was in good to excellent agreement (κ ≥ 0.61). Quantitative analysis revealed higher SNR, CNR, and SIR in DL MUSE-DWI than in MUSE-DWI. The ADC values were significantly higher in DL MUSE-DWI. Interobserver agreement was poor (ICC ≤ 0.32) for SNR and CNR and excellent (ICC ≥ 0.85) for SIR and ADC values in both DL MUSE-DWI and MUSE-DWI. Conclusions: DL MUSE-DWI significantly enhanced image quality in terms of lesion sharpness, conspicuity, SNR, CNR, and SIR, making it a promising tool for clinical imaging. Full article
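The quantitative metrics above follow common ROI-based conventions: SNR as lesion signal over noise standard deviation, CNR as the lesion-background signal difference over noise, and SIR as the ratio of the two mean signals. The sketch below uses one such convention; the study's exact ROI placement and noise estimation may differ.

```python
import numpy as np

def roi_metrics(lesion, background, noise):
    """SNR, CNR and signal-intensity ratio from mean ROI signals and a
    noise-only ROI's standard deviation (one common convention)."""
    s_l, s_b = lesion.mean(), background.mean()
    sigma = noise.std()
    snr = s_l / sigma
    cnr = abs(s_l - s_b) / sigma
    sir = s_l / s_b
    return snr, cnr, sir

# Simulated ROI samples: bright lesion, darker background, noise ROI.
rng = np.random.default_rng(3)
lesion = 200 + 5 * rng.standard_normal(100)
background = 100 + 5 * rng.standard_normal(100)
noise = 5 * rng.standard_normal(100)
snr, cnr, sir = roi_metrics(lesion, background, noise)
```

The poor interobserver ICC reported for SNR and CNR (but not SIR) is consistent with this formulation: SNR and CNR inherit the variability of where each reader places the noise ROI, while SIR depends only on the two signal ROIs.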
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
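The quantitative metrics reported in this abstract (SNR, CNR, SIR) are typically computed from region-of-interest (ROI) statistics. A minimal sketch under the standard definitions, lesion signal over background noise, lesion-to-reference contrast over background noise, and lesion-to-reference intensity ratio; the study's exact ROI placement and formulas are not given in the abstract, so these definitions are assumptions.

```python
import numpy as np

def roi_metrics(lesion, reference, background):
    """Standard ROI-based image-quality metrics.

    lesion, reference, background: 1-D arrays of pixel intensities
    sampled from the lesion, adjacent reference tissue, and the image
    background (noise region), respectively.
    """
    snr = lesion.mean() / background.std()                       # signal-to-noise ratio
    cnr = (lesion.mean() - reference.mean()) / background.std()  # contrast-to-noise ratio
    sir = lesion.mean() / reference.mean()                       # signal intensity ratio
    return snr, cnr, sir

# Toy example with synthetic ROI samples
rng = np.random.default_rng(0)
lesion = rng.normal(200.0, 10.0, 500)      # bright lesion ROI
reference = rng.normal(100.0, 10.0, 500)   # reference tissue ROI
background = rng.normal(0.0, 5.0, 500)     # background/noise ROI
snr, cnr, sir = roi_metrics(lesion, reference, background)
```

Because all three metrics share the same ROI means, differences between reconstructions reduce mainly to how much the background noise is suppressed, which is why DL denoising raises SNR and CNR together.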

16 pages, 4439 KB  
Article
Qualitative and Quantitative Evaluation of a Deep Learning-Based Reconstruction for Accelerated Cardiac Cine Imaging
by Junjie Ma, Xucheng Zhu, Suryanarayanan Kaushik, Eman Ali, Liangliang Li, Kavitha Manickam, Ke Li and Martin A. Janich
Bioengineering 2025, 12(3), 231; https://doi.org/10.3390/bioengineering12030231 - 24 Feb 2025
Viewed by 1580
Abstract
Two-dimensional (2D) cine imaging is essential in routine clinical cardiac MR (CMR) exams for assessing cardiac structure and function. Traditional cine imaging requires patients to hold their breath for extended periods and maintain consistent heartbeats for optimal image quality, which can be challenging for those with impaired breath-holding capacity or irregular heart rhythms. This study systematically assessed the performance of a deep learning-based reconstruction (Sonic DL Cine, GE HealthCare, Waukesha, WI, USA) for accelerated cardiac cine acquisition. Multiple retrospective experiments were designed and conducted to comprehensively evaluate the technique using data from an MR-dedicated extended cardiac torso anatomical phantom (digital phantom) and from healthy volunteers on different cardiac planes. Image quality, spatiotemporal sharpness, and biventricular cardiac function were qualitatively and quantitatively compared between Sonic DL Cine reconstructions at various accelerations (4-fold to 12-fold) and fully sampled reference images. Both the digital phantom and in vivo experiments demonstrate that Sonic DL Cine can accelerate cine acquisitions by up to 12-fold while preserving SNR, contrast, and spatiotemporal sharpness comparable to the fully sampled references. Cardiac function measurements from Sonic DL Cine-reconstructed images align well with those from the fully sampled reference images. In conclusion, Sonic DL Cine reconstructs highly under-sampled (up to 12-fold accelerated) cine datasets while preserving SNR, contrast, spatiotemporal sharpness, and quantification accuracy for cardiac function measurements. The study also provides a feasible approach for thoroughly evaluating deep learning-based reconstruction methods.
(This article belongs to the Section Biosignal Processing)
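The retrospective experiments described above start from fully sampled data and discard k-space lines before reconstruction. A minimal numpy sketch of one common way to emulate this, Cartesian undersampling at acceleration R with a fully sampled low-frequency center, using zero-filled reconstruction in place of the proprietary Sonic DL network (which is not public); the function name and masking scheme are illustrative assumptions, not the study's actual sampling pattern.

```python
import numpy as np

def undersample_kspace(image, R=4, center_lines=8):
    """Retrospectively undersample a 2-D image at acceleration R.

    Keeps every R-th phase-encode (row) line plus a fully sampled
    low-frequency center, then returns the zero-filled reconstruction
    and the fraction of k-space lines actually sampled.
    """
    k = np.fft.fftshift(np.fft.fft2(image))     # centered k-space
    mask = np.zeros(image.shape[0], dtype=bool)
    mask[::R] = True                            # regular undersampling
    c = image.shape[0] // 2
    mask[c - center_lines // 2 : c + center_lines // 2] = True  # calibration region
    k_under = k * mask[:, None]                 # zero out unsampled rows
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
    return recon, mask.mean()

phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                     # simple square "phantom"
recon, sampled_fraction = undersample_kspace(phantom, R=4)
```

A learned reconstruction would replace the zero-filling step; comparing its output against the fully sampled image at each R is exactly the retrospective evaluation strategy the abstract describes.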

29 pages, 16077 KB  
Article
Traffic Sign Detection and Quality Assessment Using YOLOv8 in Daytime and Nighttime Conditions
by Ziyad N. Aldoski and Csaba Koren
Sensors 2025, 25(4), 1027; https://doi.org/10.3390/s25041027 - 9 Feb 2025
Cited by 2 | Viewed by 1975
Abstract
Traffic safety remains a pressing global concern, with traffic signs playing a vital role in regulating and guiding drivers. However, environmental factors like lighting and weather often compromise their visibility, affecting both human drivers and autonomous vehicle (AV) systems. This study addresses critical gaps in traffic sign detection (TSD) and classification (TSC) by leveraging the YOLOv8 algorithm to evaluate detection accuracy and sign quality under diverse lighting conditions. The model achieved robust performance metrics across day and night scenarios using the novel ZND dataset, comprising 16,500 labeled images sourced from the GTSRB, GitHub repositories, and the authors' own real-world photographs. Complementary retroreflectivity assessments using handheld retroreflectometers revealed correlations between the material properties of the signs and their detection performance, emphasizing the importance of retroreflective quality, especially under night-time conditions. Additionally, video analysis highlighted the influence of sharpness, brightness, and contrast on detection rates. Human evaluations provided further insight into subjective perceptions of visibility and their relationship with algorithmic detection, underscoring areas for potential improvement. The findings emphasize the need for varied assessment methods, advanced algorithms, enhanced sign materials, and regular maintenance to improve detection reliability and road safety. This research bridges the theoretical and practical aspects of TSD, offering recommendations that could advance AV systems and inform future traffic sign design and evaluation standards.
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
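The video analysis in this abstract relates sharpness, brightness, and contrast to detection rates. Below is a sketch of simple per-frame proxies for these three cues, assuming a grayscale frame as a numpy array; the specific choices (mean for brightness, standard deviation for contrast, variance of a 4-neighbour Laplacian for sharpness) are common conventions, not necessarily the measures used in the study.

```python
import numpy as np

def frame_quality(gray):
    """Per-frame quality cues: brightness (mean intensity),
    contrast (intensity standard deviation), and sharpness
    (variance of a 4-neighbour Laplacian response)."""
    brightness = gray.mean()
    contrast = gray.std()
    # Discrete Laplacian via wrapped shifts (no external deps)
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    sharpness = lap.var()
    return brightness, contrast, sharpness

# A high-frequency checkerboard should score much sharper than a smooth ramp
checker = np.indices((64, 64)).sum(axis=0) % 2 * 255.0
ramp = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
b_c, c_c, s_c = frame_quality(checker)
b_r, c_r, s_r = frame_quality(ramp)
```

Tracking these three numbers per frame alongside YOLOv8 confidence scores is one way to quantify the correlation the abstract reports.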

13 pages, 1650 KB  
Technical Note
Pano-GAN: A Deep Generative Model for Panoramic Dental Radiographs
by Søren Pedersen, Sanyam Jain, Mikkel Chavez, Viktor Ladehoff, Bruna Neves de Freitas and Ruben Pauwels
J. Imaging 2025, 11(2), 41; https://doi.org/10.3390/jimaging11020041 - 2 Feb 2025
Cited by 3 | Viewed by 2531
Abstract
This paper presents the development of a generative adversarial network (GAN) for the generation of synthetic dental panoramic radiographs. While this is an exploratory study, the ultimate aim is to address the scarcity of data in dental research and education. A deep convolutional GAN (DCGAN) with the Wasserstein loss and a gradient penalty (WGAN-GP) was trained on a dataset of 2322 radiographs of varying quality. The focus of this study was the dentoalveolar part of the radiographs; other structures were cropped out. Significant data cleaning and preprocessing were conducted to standardize the input formats while maintaining anatomical variability. Four candidate models were identified by varying the critic iterations, the number of features, and the use of denoising prior to training. To assess the quality of the generated images, a clinical expert evaluated a set of generated synthetic radiographs using a ranking system based on visibility and realism, with scores ranging from 1 (very poor) to 5 (excellent). Most generated radiographs showed moderate depictions of dentoalveolar anatomical structures, although they were considerably impaired by artifacts. The mean evaluation scores showed a trade-off between the model trained on non-denoised data, which achieved the highest subjective quality for finer structures such as the mandibular canal and trabecular bone, and one of the models trained on denoised data, which offered better overall image quality, especially in terms of clarity, sharpness, and overall realism. These outcomes serve as a foundation for further research into GAN architectures for dental imaging applications.
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
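The WGAN-GP objective mentioned in this abstract penalizes the critic when the norm of its input gradient deviates from 1. As a self-contained toy illustration, here is a numpy version with a linear critic D(x) = w·x, for which the input gradient is exactly w everywhere, so the penalty has a closed form; a real DCGAN critic would need autograd (e.g., PyTorch) to evaluate the penalty on samples interpolated between real and fake images.

```python
import numpy as np

def wgan_gp_critic_loss(w, real, fake, lam=10.0):
    """WGAN-GP critic loss for a toy linear critic D(x) = x @ w.

    loss = E[D(fake)] - E[D(real)] + lam * E[(||grad_x D|| - 1)^2]
    For a linear critic the input gradient is w for every x, so the
    penalty reduces to lam * (||w|| - 1)^2, independent of the
    interpolated samples used in the full algorithm.
    """
    d_real = real @ w
    d_fake = fake @ w
    penalty = lam * (np.linalg.norm(w) - 1.0) ** 2
    return d_fake.mean() - d_real.mean() + penalty

rng = np.random.default_rng(1)
real = rng.normal(1.0, 0.1, (256, 8))   # stand-in "real" samples
fake = rng.normal(0.0, 0.1, (256, 8))   # stand-in "generated" samples
w_unit = np.ones(8) / np.sqrt(8)        # unit-norm critic: zero penalty
w_big = np.ones(8)                      # norm sqrt(8): heavily penalized
loss_unit = wgan_gp_critic_loss(w_unit, real, fake)
loss_big = wgan_gp_critic_loss(w_big, real, fake)
```

The unit-norm critic separates real from fake without incurring any penalty, which is exactly the 1-Lipschitz behaviour the gradient penalty is designed to encourage.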
