J. Imaging, Volume 10, Issue 9 (September 2024) – 33 articles

Cover Story: This study assessed whether an artificial intelligence (AI) system could enhance the detection of breast cancer (BC), achieving earlier or more accurate diagnoses than radiologists in cases of metachronous contralateral BC. Ten patients who had initially undergone a partial mastectomy and later developed contralateral BC were analyzed. The AI system identified malignancies in six cases (60%); notably, two cases (20%) were diagnosed solely by the AI system, and in these cases the AI system had identified the malignancy a year before the conventional diagnosis. This study highlights the AI system's effectiveness in diagnosing metachronous contralateral BC via mammography (MG); in some cases, it diagnosed cancer earlier than radiological assessment.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
10 pages, 5992 KiB  
Article
Comparison of Visual and Quantra Software Mammographic Density Assessment According to BI-RADS® in 2D and 3D Images
by Francesca Morciano, Cristina Marcazzan, Rossella Rella, Oscar Tommasini, Marco Conti, Paolo Belli, Andrea Spagnolo, Andrea Quaglia, Stefano Tambalo, Andreea Georgiana Trisca, Claudia Rossati, Francesca Fornasa and Giovanna Romanucci
J. Imaging 2024, 10(9), 238; https://doi.org/10.3390/jimaging10090238 - 23 Sep 2024
Viewed by 466
Abstract
Mammographic density (MD) assessment is subject to inter- and intra-observer variability. An automated method, such as Quantra software, could be a useful tool for an objective and reproducible MD assessment. Our purpose was to evaluate the performance of Quantra software in assessing MD, according to BI-RADS® Atlas Fifth Edition recommendations, verifying the degree of agreement with the gold standard, given by the consensus of two breast radiologists. A total of 5009 screening examinations were evaluated by two radiologists and analysed by Quantra software to assess MD. The agreement between the three assigned values was expressed as intraclass correlation coefficients (ICCs). The agreement between the software and the two readers (R1 and R2) was moderate, with ICC values of 0.725 and 0.713, respectively. A better agreement was demonstrated between the software’s assessment and the average score of the values assigned by the two radiologists, with an index of 0.793, which reflects a good correlation. Quantra software appears to be a promising tool for supporting radiologists in MD assessment and could soon become part of a personalised screening protocol. However, some fine-tuning is needed to improve its accuracy, reduce its tendency to overestimate, and ensure it excludes high-density structures from its assessment. Full article
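For readers who want to reproduce the agreement statistic reported in this abstract, a minimal NumPy sketch of a two-way random-effects, single-rater ICC(2,1) follows; the abstract does not state which ICC form was used, so this variant is an assumption:

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is an (n subjects x k raters) matrix of density grades."""
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * np.sum((X.mean(axis=1) - grand) ** 2)    # between subjects
    ss_cols = n * np.sum((X.mean(axis=0) - grand) ** 2)    # between raters
    ss_err = np.sum((X - grand) ** 2) - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect inter-reader agreement yields an ICC of 1; values around 0.7, as reported above, indicate moderate-to-good agreement.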

14 pages, 9058 KiB  
Article
Efficient End-to-End Convolutional Architecture for Point-of-Gaze Estimation
by Casian Miron, George Ciubotariu, Alexandru Păsărică and Radu Timofte
J. Imaging 2024, 10(9), 237; https://doi.org/10.3390/jimaging10090237 - 23 Sep 2024
Viewed by 494
Abstract
Point-of-gaze estimation is part of a larger set of tasks aimed at improving user experience, providing business insights, or facilitating interactions with different devices. There has been a growing interest in this task, particularly due to the need for upgrades in e-meeting platforms during the pandemic when on-site activities were no longer possible for educational institutions, corporations, and other organizations. Current research advancements are focusing on more complex methodologies for data collection and task implementation, creating a gap that we intend to address with our contributions. Thus, we introduce a methodology for data acquisition that shows promise due to its nonrestrictive and straightforward nature, notably increasing the yield of collected data without compromising diversity or quality. Additionally, we present a novel and efficient convolutional neural network specifically tailored for calibration-free point-of-gaze estimation that outperforms current state-of-the-art methods on the MPIIFaceGaze dataset by a substantial margin, and sets a strong baseline on our own data. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

18 pages, 4515 KiB  
Article
Historical Blurry Video-Based Face Recognition
by Lujun Zhai, Suxia Cui, Yonghui Wang, Song Wang, Jun Zhou and Greg Wilsbacher
J. Imaging 2024, 10(9), 236; https://doi.org/10.3390/jimaging10090236 - 20 Sep 2024
Viewed by 439
Abstract
Face recognition is a widely used computer vision technology that plays an increasingly important role in user authentication systems, security systems, and consumer electronics. The models for most current applications are based on high-definition digital cameras. In this paper, we focus on digital images derived from historical motion picture films. Historical motion picture films often have poorer resolution than modern digital imagery, making face detection a more challenging task. To approach this problem, we first propose a trunk–branch concatenated multi-task cascaded convolutional neural network (TB-MTCNN), which efficiently extracts facial features from blurry historical films by combining the trunk with branch networks and employing various sizes of kernels to enrich the multi-scale receptive field. Next, we build a deep neural network-integrated object-tracking algorithm to compensate for failed recognition over one or more video frames. The framework combines simple online and real-time tracking with deep data association (Deep SORT), and TB-MTCNN with the residual neural network (ResNet) model. Finally, a state-of-the-art image restoration method is employed to reduce the effect of noise and blurriness. The experimental results show that our proposed joint face recognition and tracking network can significantly reduce missed recognition in historical motion picture film frames. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

14 pages, 2392 KiB  
Article
Convolutional Neural Network–Machine Learning Model: Hybrid Model for Meningioma Tumour and Healthy Brain Classification
by Simona Moldovanu, Gigi Tăbăcaru and Marian Barbu
J. Imaging 2024, 10(9), 235; https://doi.org/10.3390/jimaging10090235 - 20 Sep 2024
Viewed by 552
Abstract
This paper presents a hybrid study of convolutional neural networks (CNNs), machine learning (ML), and transfer learning (TL) in the context of brain magnetic resonance imaging (MRI). The anatomy of the brain is very complex; inside the skull, a brain tumour can form in any part. With MRI technology, cross-sectional images are generated, and radiologists can detect the abnormalities. When the size of the tumour is very small, it is undetectable to the human visual system, necessitating alternative analysis using AI tools. As is widely known, CNNs explore the structure of an image and provide features to the SoftMax fully connected (SFC) layer, where the classification of items into the input classes is established. Two comparison studies for the classification of meningioma tumours and healthy brains are presented in this paper: (i) classifying MRI images using an original CNN and two pre-trained CNNs, DenseNet169 and EfficientNetV2B0; (ii) determining which CNN and ML combination yields the most accurate classification when SoftMax is replaced with three ML models; in this context, Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) were proposed. In a binary classification of tumours and healthy brains, the EfficientNetB0-SVM combination shows an accuracy of 99.5% on the test dataset. A generalisation of the results was performed, and overfitting was prevented by using the bagging ensemble method. Full article
(This article belongs to the Special Issue Learning and Optimization for Medical Imaging)

15 pages, 3754 KiB  
Article
A Multi-Task Model for Pulmonary Nodule Segmentation and Classification
by Tiequn Tang and Rongfu Zhang
J. Imaging 2024, 10(9), 234; https://doi.org/10.3390/jimaging10090234 - 20 Sep 2024
Viewed by 462
Abstract
In the computer-aided diagnosis of lung cancer, the automatic segmentation of pulmonary nodules and the classification of benign and malignant tumors are two fundamental tasks. However, deep learning models often overlook the potential benefits of task correlations in improving their respective performances, as they are typically designed for a single task only. Therefore, we propose a multi-task network (MT-Net) that integrates shared backbone architecture and a prediction distillation structure for the simultaneous segmentation and classification of pulmonary nodules. The model comprises a coarse segmentation subnetwork (Coarse Seg-net), a cooperative classification subnetwork (Class-net), and a cooperative segmentation subnetwork (Fine Seg-net). Coarse Seg-net and Fine Seg-net share an identical structure, where Coarse Seg-net provides prior location information for the subsequent Fine Seg-net and Class-net, thereby boosting pulmonary nodule segmentation and classification performance. We quantitatively and qualitatively analyzed the performance of the model by using the public dataset LIDC-IDRI. Our results show that the model achieves a Dice similarity coefficient (DI) index of 83.2% for pulmonary nodule segmentation, as well as an accuracy (ACC) of 91.9% for benign and malignant pulmonary nodule classification, which is competitive with other state-of-the-art methods. The experimental results demonstrate that the performance of pulmonary nodule segmentation and classification can be improved by a unified model that leverages the potential correlation between tasks. Full article
(This article belongs to the Section Medical Imaging)
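As a point of reference for the Dice index reported in the abstract above, a minimal NumPy sketch for binary segmentation masks (the empty-mask convention of returning 1.0 is an assumption):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    twice the overlap divided by the total foreground area."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```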

18 pages, 59323 KiB  
Article
Method for Augmenting Side-Scan Sonar Seafloor Sediment Image Dataset Based on BCEL1-CBAM-INGAN
by Haixing Xia, Yang Cui, Shaohua Jin, Gang Bian, Wei Zhang and Chengyang Peng
J. Imaging 2024, 10(9), 233; https://doi.org/10.3390/jimaging10090233 - 20 Sep 2024
Viewed by 375
Abstract
In this paper, a method for augmenting samples of side-scan sonar seafloor sediment images based on CBAM-BCEL1-INGAN is proposed, aiming to address the difficulties in acquiring and labeling datasets, as well as the insufficient diversity and quantity of data samples. Firstly, a Convolutional Block Attention Module (CBAM) is integrated into the residual blocks of the INGAN generator to enhance the learning of specific attributes and improve the quality of the generated images. Secondly, a BCEL1 loss function (combining binary cross-entropy and L1 loss functions) is introduced into the discriminator, enabling it to focus on both global image consistency and finer distinctions for better generation results. Finally, augmented samples are input into an AlexNet classifier to verify their authenticity. Experimental results demonstrate the excellent performance of the method in generating images of coarse sand, gravel, and bedrock, as evidenced by significant improvements in the Frechet Inception Distance (FID) and Inception Score (IS). The introduction of the CBAM and BCEL1 loss function notably enhances the quality and details of the generated images. Moreover, classification experiments using the AlexNet classifier show an increase in the recognition rate from 90.5% using only INGAN-generated images of bedrock to 97.3% using images augmented by our method, marking a 6.8% improvement. Additionally, the classification accuracy of bedrock-type matrices is improved by 5.2% when images enhanced using the method presented in this paper are added to the training set, 2.7% higher than with simple augmentation. This validates the effectiveness of our method in the task of generating seafloor sediment images, partially alleviating the scarcity of side-scan sonar seafloor sediment image data. Full article
(This article belongs to the Section Image and Video Processing)
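The BCEL1 discriminator loss described above combines binary cross-entropy with an L1 term; a minimal NumPy sketch, where the relative weight `lam` is a hypothetical parameter not given in the abstract:

```python
import numpy as np

def bcel1_loss(pred, target, lam=1.0):
    """BCE term for global real/fake consistency plus an L1 term
    for finer per-pixel distinctions; `lam` weights the L1 part."""
    eps = 1e-7
    p = np.clip(pred, eps, 1.0 - eps)        # avoid log(0)
    bce = -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    l1 = np.mean(np.abs(pred - target))
    return bce + lam * l1
```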

23 pages, 5832 KiB  
Article
Enhancing Deep Learning Model Explainability in Brain Tumor Datasets Using Post-Heuristic Approaches
by Konstantinos Pasvantis and Eftychios Protopapadakis
J. Imaging 2024, 10(9), 232; https://doi.org/10.3390/jimaging10090232 - 18 Sep 2024
Viewed by 768
Abstract
The application of deep learning models in medical diagnosis has showcased considerable efficacy in recent years. Nevertheless, a notable limitation involves the inherent lack of explainability during decision-making processes. This study addresses such a constraint by enhancing the interpretability robustness. The primary focus is directed towards refining the explanations generated by the LIME Library and LIME image explainer. This is achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments have been conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis. Full article

20 pages, 4626 KiB  
Article
Three-Dimensional Reconstruction of Indoor Scenes Based on Implicit Neural Representation
by Zhaoji Lin, Yutao Huang and Li Yao
J. Imaging 2024, 10(9), 231; https://doi.org/10.3390/jimaging10090231 - 16 Sep 2024
Viewed by 495
Abstract
Reconstructing 3D indoor scenes from 2D images has always been an important task in computer vision and graphics applications. For indoor scenes, traditional 3D reconstruction methods have problems such as missing surface details, poor reconstruction of large plane textures and uneven illumination areas, and many wrongly reconstructed floating debris noises in the reconstructed models. This paper proposes a 3D reconstruction method for indoor scenes that combines neural radiance fields (NeRFs) and signed distance function (SDF) implicit expressions. The volume density of the NeRF is used to provide geometric information for the SDF field, and the learning of geometric shapes and surfaces is strengthened by adding an adaptive normal prior optimization learning process. It not only preserves the high-quality geometric information of the NeRF, but also uses the SDF to generate an explicit mesh with a smooth surface, significantly improving the reconstruction quality of large plane textures and uneven illumination areas in indoor scenes. At the same time, a new regularization term is designed to constrain the weight distribution, making it an ideal unimodal compact distribution, thereby alleviating the problem of uneven density distribution and achieving the effect of floating debris removal in the final model. Experiments show that the 3D reconstruction quality of our method on the ScanNet, Hypersim, and Replica datasets outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))

20 pages, 5213 KiB  
Review
The Role of Cardiovascular Imaging in the Diagnosis of Athlete’s Heart: Navigating the Shades of Grey
by Nima Baba Ali, Sogol Attaripour Esfahani, Isabel G. Scalia, Juan M. Farina, Milagros Pereyra, Timothy Barry, Steven J. Lester, Said Alsidawi, David E. Steidley, Chadi Ayoub, Stefano Palermi and Reza Arsanjani
J. Imaging 2024, 10(9), 230; https://doi.org/10.3390/jimaging10090230 - 14 Sep 2024
Viewed by 955
Abstract
Athlete’s heart (AH) represents the heart’s remarkable ability to adapt structurally and functionally to prolonged and intensive athletic training. Characterized by increased left ventricular (LV) wall thickness, enlarged cardiac chambers, and augmented cardiac mass, AH typically maintains or enhances systolic and diastolic functions. Despite the positive health implications, these adaptations can obscure the difference between benign physiological changes and early manifestations of cardiac pathologies such as dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), and arrhythmogenic cardiomyopathy (ACM). This article reviews the imaging characteristics of AH across various modalities, emphasizing echocardiography, cardiac magnetic resonance (CMR), and cardiac computed tomography as primary tools for evaluating cardiac function and distinguishing physiological adaptations from pathological conditions. The findings highlight the need for precise diagnostic criteria and advanced imaging techniques to ensure accurate differentiation, preventing misdiagnosis and its associated risks, such as sudden cardiac death (SCD). Understanding these adaptations and employing the appropriate imaging methods are crucial for athletes’ effective management and health optimization. Full article

20 pages, 5653 KiB  
Article
Unleashing the Power of Contrastive Learning for Zero-Shot Video Summarization
by Zongshang Pang, Yuta Nakashima, Mayu Otani and Hajime Nagahara
J. Imaging 2024, 10(9), 229; https://doi.org/10.3390/jimaging10090229 - 14 Sep 2024
Viewed by 437
Abstract
Video summarization aims to select the most informative subset of frames in a video to facilitate efficient video browsing. Past efforts have invariably involved training summarization models with annotated summaries or heuristic objectives. In this work, we reveal that features pre-trained on image-level tasks contain rich semantic information that can be readily leveraged to quantify frame-level importance for zero-shot video summarization. Leveraging pre-trained features and contrastive learning, we propose three metrics featuring a desirable keyframe: local dissimilarity, global consistency, and uniqueness. We show that the metrics capture well the diversity and representativeness of frames commonly used for the unsupervised generation of video summaries, demonstrating competitive or better performance compared to past methods while requiring no training. We further propose a contrastive learning-based pre-training strategy on unlabeled videos to enhance the quality of the proposed metrics and, thus, improve the evaluated performance on the public benchmarks TVSum and SumMe. Full article
(This article belongs to the Special Issue Deep Learning in Computer Vision)
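Two of the keyframe metrics named in the abstract above can be illustrated from pre-trained frame features; this sketch scores local dissimilarity against temporal neighbours and global consistency against the mean video feature. The paper's exact definitions, and its uniqueness metric, are not given in the abstract, so this is only an approximation:

```python
import numpy as np

def frame_scores(features):
    """features: (num_frames x dim) array of pre-trained frame embeddings.
    Returns one importance score per frame."""
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    g = F.mean(axis=0)
    g = g / np.linalg.norm(g)
    global_consistency = F @ g                 # cosine similarity to the video mean
    n = len(F)
    local_dissimilarity = np.empty(n)
    for i in range(n):                         # 1 - mean similarity to neighbours
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        local_dissimilarity[i] = 1.0 - np.mean([F[i] @ F[j] for j in nbrs])
    return local_dissimilarity + global_consistency
```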

31 pages, 23384 KiB  
Article
A Hybrid Approach for Image Acquisition Methods Based on Feature-Based Image Registration
by Anchal Kumawat, Sucheta Panda, Vassilis C. Gerogiannis, Andreas Kanavos, Biswaranjan Acharya and Stella Manika
J. Imaging 2024, 10(9), 228; https://doi.org/10.3390/jimaging10090228 - 14 Sep 2024
Viewed by 744
Abstract
This paper presents a novel hybrid approach to feature detection designed specifically for enhancing Feature-Based Image Registration (FBIR). Through an extensive evaluation involving state-of-the-art feature detectors such as BRISK, FAST, ORB, Harris, MinEigen, and MSER, the proposed hybrid detector demonstrates superior performance in terms of keypoint detection accuracy and computational efficiency. Three image acquisition methods (i.e., rotation, scene-to-model, and scaling transformations) are considered in the comparison. Applied across a diverse set of remote-sensing images, the proposed hybrid approach has shown marked improvements in match points and match rates, proving its effectiveness in handling varied and complex imaging conditions typical in satellite and aerial imagery. The experimental results have consistently indicated that the hybrid detector outperforms conventional methods, establishing it as a valuable tool for advanced image registration tasks. Full article
(This article belongs to the Section Image and Video Processing)

22 pages, 36676 KiB  
Article
Leveraging Perspective Transformation for Enhanced Pothole Detection in Autonomous Vehicles
by Abdalmalek Abu-raddaha, Zaid A. El-Shair and Samir Rawashdeh
J. Imaging 2024, 10(9), 227; https://doi.org/10.3390/jimaging10090227 - 14 Sep 2024
Viewed by 693
Abstract
Road conditions, often degraded by insufficient maintenance or adverse weather, significantly contribute to accidents, exacerbated by the limited human reaction time to sudden hazards like potholes. Early detection of distant potholes is crucial for timely corrective actions, such as reducing speed or avoiding obstacles, to mitigate vehicle damage and accidents. This paper introduces a novel approach that utilizes perspective transformation to enhance pothole detection at different distances, focusing particularly on distant potholes. Perspective transformation improves the visibility and clarity of potholes by virtually bringing them closer and enlarging their features, which is particularly beneficial given the fixed-size input requirement of object detection networks, typically significantly smaller than the raw image resolutions captured by cameras. Our method automatically identifies the region of interest (ROI)—the road area—and calculates the corner points to generate a perspective transformation matrix. This matrix is applied to all images and corresponding bounding box labels, enhancing the representation of potholes in the dataset. This approach significantly boosts detection performance when used with YOLOv5-small, achieving a 43% improvement in the average precision (AP) metric at intersection-over-union thresholds of 0.5 to 0.95 for single class evaluation, and notable improvements of 34%, 63%, and 194% for near, medium, and far potholes, respectively, after categorizing them based on their distance. To the best of our knowledge, this work is the first to employ perspective transformation specifically for enhancing the detection of distant potholes. Full article
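The ROI-corner step described in the abstract above reduces to estimating a homography from four point correspondences (the computation behind OpenCV's `getPerspectiveTransform`); a self-contained NumPy sketch, with the road-corner coordinates in the usage below being hypothetical:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography mapping four src points to four dst
    points via the standard 8x8 linear system (h22 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

For example, mapping a road trapezoid such as `[(300, 720), (980, 720), (700, 400), (580, 400)]` onto a full-frame rectangle virtually brings distant potholes closer and enlarges them before detection.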

15 pages, 2952 KiB  
Article
Morphological Changes of the Pituitary Gland in Patients with Irritable Bowel Syndrome Using Magnetic Resonance Imaging
by Jessica Abou Chaaya, Jennifer Abou Chaaya, Batoul Jaafar, Lea Saab, Jad Abou Chaaya, Elie Al Ahmar and Elias Estephan
J. Imaging 2024, 10(9), 226; https://doi.org/10.3390/jimaging10090226 - 13 Sep 2024
Viewed by 379
Abstract
Irritable bowel syndrome (IBS) is a gastrointestinal functional disorder characterized by unclear underlying mechanisms. Several theories propose that hyperactivation of the hypothalamic–pituitary–adrenal (HPA) axis leads to elevated cortisol levels and increased sensitivity of gut wall receptors. Given the absence of prior literature on this topic, our study aimed to investigate the potential for diagnosing IBS based on morphological changes in the pituitary gland, specifically its volume and grayscale intensity. Additionally, we aimed to assess whether factors such as gender, age, and body mass index influence these parameters. This retrospective study involved 60 patients, examining the volume and grayscale characteristics of their pituitary glands in the presence of IBS. Our findings revealed a positive correlation between pituitary gland volume and IBS diagnosis, although no significant correlation was observed with grayscale intensity. Due to the limited existing research and the small sample size of our study, further investigation with a larger cohort is warranted to validate these results. Full article

11 pages, 5641 KiB  
Communication
Altered Movement Coordination during Functional Reach Tasks in Patients with Chronic Low Back Pain and Its Relationship to Numerical Pain Rating Scores
by Susanne M. van der Veen, Christopher R. France and James S. Thomas
J. Imaging 2024, 10(9), 225; https://doi.org/10.3390/jimaging10090225 - 12 Sep 2024
Viewed by 376
Abstract
Identifying the effects of pain catastrophizing on movement patterns in people with chronic low back pain (CLBP) has important clinical implications for treatment approaches. Prior research has shown people with CLBP have decreased lumbar-hip ratios during trunk flexion movements, indicating a decrease in the contribution of lumbar flexion relative to hip flexion during trunk flexion. In this study, we aim to explore the relationship between pain catastrophizing and movement patterns during trunk flexion in a CLBP population. Participants with CLBP (N = 98, male = 59, age = 39.1 ± 13.0) completed a virtual reality standardized reaching task that necessitated a progressively larger amount of trunk flexion. Specifically, participants reached for four virtual targets to elicit 15°, 30°, 45°, and 60° trunk flexion in the mid-sagittal plane. Lumbar flexion was derived from the motion data. Self-report measures of numerical pain ratings, kinesiophobia, and pain catastrophizing were obtained. Pain catastrophizing leads to decreased lumbar flexion angles during forward reaching. This effect is greater in females than males. Full article

16 pages, 1063 KiB  
Article
Quantitative Evaluation of White Matter Injury by Cranial Ultrasound to Detect the Effects of Parenteral Nutrition in Preterm Babies: An Observational Study
by Gianluigi Laccetta, Maria Chiara De Nardo, Raffaella Cellitti, Maria Di Chiara, Monica Tagliabracci, Pasquale Parisi, Flavia Gloria, Giuseppe Rizzo, Alberto Spalice and Gianluca Terrin
J. Imaging 2024, 10(9), 224; https://doi.org/10.3390/jimaging10090224 - 10 Sep 2024
Viewed by 606
Abstract
Nutrition in early life has an impact on white matter (WM) development in preterm-born babies. Quantitative analysis of pixel brightness intensity (PBI) on cranial ultrasound (CUS) scans has shown great potential in the evaluation of periventricular WM echogenicity in preterm newborns. We aimed to investigate the employment of this technique to objectively verify the effects of parenteral nutrition (PN) on periventricular WM damage in preterm infants. This was a prospective observational study including newborns with gestational age at birth ≤32 weeks and/or birth weight ≤1500 g who underwent CUS examination at term-equivalent age. The echogenicity of parieto–occipital periventricular WM relative to that of homolateral choroid plexus (RECP) was calculated on parasagittal scans by means of quantitative analysis of PBI. Its relationship with nutrient intake through enteral and parenteral routes in the first postnatal week was evaluated. The study included 42 neonates for analysis. We demonstrated that energy and protein intake administered through the parenteral route positively correlated with both right and left RECP values (parenteral energy intake vs. right RECP: r = 0.413, p = 0.007; parenteral energy intake vs. left RECP: r = 0.422, p = 0.005; parenteral amino acid intake vs. right RECP: r = 0.438, p = 0.004; parenteral amino acid intake vs. left RECP: r = 0.446, p = 0.003). Multivariate linear regression analysis confirmed these findings. Quantitative assessment of PBI could be considered a simple, risk-free, and repeatable method to investigate the effects of PN on WM development in preterm neonates. Full article
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis)
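The RECP measure and the Pearson correlations reported in this abstract can be sketched in a few lines of Python. This is an illustration only: the function names are ours, and the real analysis operates on pixel ROIs extracted from the CUS scans.

```python
from statistics import mean

def recp(wm_roi, plexus_roi):
    """Relative echogenicity of periventricular white matter: mean pixel
    brightness of the WM ROI over that of the homolateral choroid plexus ROI."""
    return mean(wm_roi) / mean(plexus_roi)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length series,
    e.g. parenteral energy intake vs. right RECP across neonates."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

Given the study's per-neonate data, a call such as `pearson_r(parenteral_energy, right_recp_values)` would yield an r like the 0.413 reported above.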

11 pages, 3103 KiB  
Article
Peripheral Non-Contrast MR Angiography Using FBI: Scan Time and T2 Blurring Reduction with 2D Parallel Imaging
by Won C. Bae, Lewis Hahn, Vadim Malis, Anya Mesa, Diana Vucevic and Mitsue Miyazaki
J. Imaging 2024, 10(9), 223; https://doi.org/10.3390/jimaging10090223 - 9 Sep 2024
Viewed by 530
Abstract
Non-contrast magnetic resonance angiography (NC-MRA), including fresh blood imaging (FBI), is a suitable choice for evaluating patients with peripheral artery disease (PAD). We evaluated standard FBI (sFBI) and centric ky-kz FBI (cFBI) acquisitions, using 1D and 2D parallel imaging factors (PIFs) to assess the trade-off between scan time and image quality due to blurring. The bilateral legs of four volunteers (mean age 33 years, two females) were imaged in the coronal plane using a body array coil with a posterior spine coil. Both sFBI and cFBI sequences were acquired with two PIF settings: a 1D PIF of 5 in the phase encode (PE) direction (in-plane) and a 2D PIF of 3 (PE) × 2 (slice encode, SE) (in-plane and through-slice). Image quality was evaluated by a radiologist, the vessel’s signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured, and major vessel width was measured on the coronal maximum intensity projection (MIP) and 80-degree MIP. Results showed significant time reductions, from 184 to 206 s on average when using sFBI down to 98 to 162 s when using cFBI (p = 0.003). Similar SNRs (averaging 200 to 370 across all sequences and PIFs) and CNRs (averaging 190 to 360) were found for all techniques (p > 0.08). There was no significant difference in image quality (averaging 4.0 to 4.5; p > 0.2) or vessel width (averaging 4.1 to 4.9 mm; p > 0.1) on coronal MIP due to sequence or PIF. However, vessel width measured using 80-degree MIP demonstrated a significantly wider vessel in cFBI (5.6 to 6.8 mm) compared to sFBI (4.5 to 4.7 mm) (p = 0.022), and in 1D (4.7 to 6.8 mm) compared to 2D (4.5 to 5.6 mm) (p < 0.05) PIF. This demonstrated a trade-off in T2 blurring between 1D and 2D PIF: the 1D PIF of 5 shortened the acquisition window, resulting in sharper arterial blood vessels in coronal images but significant blur in the 80-degree MIP.
Two-dimensional PIF for cFBI provided a good balance between shorter scan time (relative to sFBI) and good sharpness in both in- and through-plane, while no benefit of 2D PIF was seen for sFBI. In conclusion, this study demonstrated the usefulness of FBI-based techniques for peripheral artery imaging and underscored the need to strike a balance between scan time and image quality in different planes through the use of 2D parallel imaging. Full article
(This article belongs to the Section Medical Imaging)
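The SNR and CNR measurements mentioned above follow a standard ROI recipe (mean signal normalized by the standard deviation of a background noise ROI). The paper does not state its exact formula, so the sketch below is one common convention with hypothetical ROI values:

```python
from statistics import mean, pstdev

def roi_snr(vessel_roi, noise_roi):
    """Signal-to-noise ratio: mean vessel signal divided by the standard
    deviation of a signal-free background (noise) ROI."""
    return mean(vessel_roi) / pstdev(noise_roi)

def roi_cnr(vessel_roi, tissue_roi, noise_roi):
    """Contrast-to-noise ratio between vessel and adjacent tissue,
    normalized by the same background noise."""
    return (mean(vessel_roi) - mean(tissue_roi)) / pstdev(noise_roi)
```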

15 pages, 4447 KiB  
Article
Spectral Reflectance Estimation from Camera Response Using Local Optimal Dataset and Neural Networks
by Shoji Tominaga and Hideaki Sakai
J. Imaging 2024, 10(9), 222; https://doi.org/10.3390/jimaging10090222 - 9 Sep 2024
Viewed by 427
Abstract
In this study, a novel method that combines model-based and training-based approaches is proposed to estimate surface-spectral reflectance from camera responses. An imaging system is modeled using the spectral sensitivity functions of an RGB camera, spectral power distributions of multiple light sources, unknown surface-spectral reflectance, additive noise, and a gain parameter. The estimation procedure comprises two main stages: (1) selecting the local optimal reflectance dataset from a reflectance database and (2) determining the best estimate by applying a neural network to the local optimal dataset only. In stage (1), the camera responses are predicted for the respective reflectances in the database, and the optimal candidates are selected in the order of lowest prediction error. In stage (2), most reflectance training data are obtained by a convex linear combination of local optimal data using weighting coefficients based on random numbers. A feed-forward neural network with one hidden layer is used to map the observation space onto the spectral reflectance space. In addition, the reflectance estimation is repeated by generating multiple sets of random numbers, and the median of the set of estimated reflectances is taken as the final estimate. Experimental results show that the estimation accuracies exceed those of other methods. Full article
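The two-stage procedure lends itself to a compact sketch. The version below implements only the stage (1) candidate selection and the convex-combination data generation from stage (2); the feed-forward network and the median-of-repeats step are omitted, and the toy two-band sensitivities in the usage example are hypothetical:

```python
import random

def predict_response(reflectance, sensitivities):
    """Predict camera responses as inner products of a spectral reflectance
    with the (illuminant-weighted) sensitivity functions."""
    return [sum(r * s for r, s in zip(reflectance, sens)) for sens in sensitivities]

def local_optimal_dataset(camera_response, database, sensitivities, k):
    """Stage (1): rank database reflectances by squared prediction error
    against the observed camera response and keep the k best."""
    def err(refl):
        pred = predict_response(refl, sensitivities)
        return sum((p - c) ** 2 for p, c in zip(pred, camera_response))
    return sorted(database, key=err)[:k]

def convex_samples(local_set, n, rng):
    """Stage (2) training data: convex linear combinations of the local
    optimal reflectances with random weights that sum to one."""
    out = []
    for _ in range(n):
        w = [rng.random() for _ in local_set]
        total = sum(w)
        w = [x / total for x in w]
        out.append([sum(wi * r[j] for wi, r in zip(w, local_set))
                    for j in range(len(local_set[0]))])
    return out
```

Each generated sample stays inside the convex hull of the local optimal reflectances, which is what keeps the synthetic training data physically plausible.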

13 pages, 6949 KiB  
Article
Impact of Display Sub-Pixel Arrays on Perceived Gloss and Transparency
by Midori Tanaka, Kosei Aketagawa and Takahiko Horiuchi
J. Imaging 2024, 10(9), 221; https://doi.org/10.3390/jimaging10090221 - 8 Sep 2024
Viewed by 687
Abstract
In recent years, improvements in display image quality have made it easier to perceive rich object information from images, such as gloss and transparency; this perceived material appearance is known in Japanese as shitsukan. Do differences in display specifications affect this appearance? Clarifying the effects of differences in pixel structure on shitsukan perception is necessary to realize shitsukan management for displays with different hardware structures, an issue that has not been fully clarified before. In this study, we experimentally investigated the effects of display pixel arrays on the perception of glossiness and transparency. In a visual evaluation experiment, we investigated the effects of three types of sub-pixel arrays (RGB, RGBW, and PenTile) on the perception of glossiness and transparency using natural images. The results confirmed that sub-pixel arrays affect the appearance of glossiness and transparency. A general relationship of RGB > PenTile > RGBW for glossiness and RGB > RGBW > PenTile for transparency was found; however, detailed analysis, such as cluster analysis, confirmed that the relative superiority of these sub-pixel arrays may vary depending on the observer and image content. Full article
(This article belongs to the Special Issue Color in Image Processing and Computer Vision)

16 pages, 15333 KiB  
Article
Reducing Training Data Using Pre-Trained Foundation Models: A Case Study on Traffic Sign Segmentation Using the Segment Anything Model
by Sofia Henninger, Maximilian Kellner, Benedikt Rombach and Alexander Reiterer
J. Imaging 2024, 10(9), 220; https://doi.org/10.3390/jimaging10090220 - 7 Sep 2024
Viewed by 633
Abstract
The utilization of robust, pre-trained foundation models enables simple adaptation to specific ongoing tasks. In particular, the recently developed Segment Anything Model (SAM) has demonstrated impressive results in the context of semantic segmentation. Recognizing that data collection is generally time-consuming and costly, this research aims to determine whether the use of these foundation models can reduce the need for training data. To assess the models’ behavior under conditions of reduced training data, five test datasets for semantic segmentation were utilized. This study concentrates on traffic sign segmentation to analyze the results in comparison to Mask R-CNN, the field’s leading model. The findings indicate that SAM does not surpass the leading model for this specific task, regardless of the quantity of training data. Nevertheless, a knowledge-distilled student architecture derived from SAM exhibits no reduction in accuracy when trained on data that have been reduced by 95%. Full article
(This article belongs to the Section Image and Video Processing)

31 pages, 4193 KiB  
Review
Realistic Aspects of Cardiac Ultrasound in Rats: Practical Tips for Improved Examination
by Jessica Silva, Tiago Azevedo, Mário Ginja, Paula A. Oliveira, José Alberto Duarte and Ana I. Faustino-Rocha
J. Imaging 2024, 10(9), 219; https://doi.org/10.3390/jimaging10090219 - 6 Sep 2024
Viewed by 645
Abstract
Echocardiography is a reliable and non-invasive method for assessing cardiac structure and function in both clinical and experimental settings, offering valuable insights into disease progression and treatment efficacy. The successful application of echocardiography in murine models of disease has enabled the evaluation of disease severity, drug testing, and continuous monitoring of cardiac function in these animals. However, there is insufficient standardization of echocardiographic measurements for smaller animals. This article aims to address this gap by providing a guide and practical tips for the appropriate acquisition and analysis of echocardiographic parameters in adult rats, which may also be applicable to other small rodents used for scientific purposes, such as mice. With advancements in technology, such as ultrahigh-frequency ultrasonic transducers, echocardiography has become a highly sophisticated imaging modality, offering high temporal and spatial resolution imaging, thereby allowing for real-time monitoring of cardiac function throughout the lifespan of small animals. Moreover, it allows the assessment of cardiac complications associated with aging, cancer, diabetes, and obesity, as well as the monitoring of cardiotoxicity induced by therapeutic interventions in preclinical models, providing important information for translational research. Finally, this paper discusses the future directions of cardiac preclinical ultrasound, highlighting the need for continued standardization to advance research and improve clinical outcomes to facilitate early disease detection and the translation of findings into clinical practice. Full article
(This article belongs to the Section Medical Imaging)

13 pages, 1308 KiB  
Article
Decoding Breast Cancer: Using Radiomics to Non-Invasively Unveil Molecular Subtypes Directly from Mammographic Images
by Manon A. G. Bakker, Maria de Lurdes Ovalho, Nuno Matela and Ana M. Mota
J. Imaging 2024, 10(9), 218; https://doi.org/10.3390/jimaging10090218 - 4 Sep 2024
Viewed by 749
Abstract
Breast cancer is the most commonly diagnosed cancer worldwide. The therapy used and its success depend highly on the histology of the tumor. This study aimed to explore the potential of predicting the molecular subtype of breast cancer using radiomic features extracted from screening digital mammography (DM) images. A retrospective study was performed using the OPTIMAM Mammography Image Database (OMI-DB). Four binary classification tasks were performed: luminal A vs. non-luminal A, luminal B vs. non-luminal B, TNBC vs. non-TNBC, and HER2 vs. non-HER2. Feature selection was carried out by Pearson correlation and LASSO. The support vector machine (SVM) and naive Bayes (NB) ML classifiers were used, and their performance was evaluated with the accuracy and the area under the receiver operating characteristic curve (AUC). A total of 186 patients were included in the study: 58 luminal A, 35 luminal B, 52 TNBC, and 41 HER2. The SVM classifier resulted in AUCs during testing of 0.855 for luminal A, 0.812 for luminal B, 0.789 for TNBC, and 0.755 for HER2. The NB classifier showed AUCs during testing of 0.714 for luminal A, 0.746 for luminal B, 0.593 for TNBC, and 0.714 for HER2. The SVM classifier outperformed NB with statistical significance for luminal A (p = 0.0268) and TNBC (p = 0.0073). Our study showed the potential of radiomics for non-invasive breast cancer subtype classification. Full article
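The AUCs used to compare the SVM and NB classifiers can be computed without plotting an ROC curve at all, via the rank-based (Mann-Whitney) identity. A minimal pure-Python sketch, with hypothetical labels and scores:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation: the
    probability that a randomly chosen positive case receives a higher
    score than a randomly chosen negative case, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```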

33 pages, 9039 KiB  
Article
Integrated Ultrasound Characterization of the Diet-Induced Obesity (DIO) Model in Young Adult C57BL/6J Mice: Assessment of Cardiovascular, Renal and Hepatic Changes
by Sara Gargiulo, Virginia Barone, Denise Bonente, Tiziana Tamborrino, Giovanni Inzalaco, Lisa Gherardini, Eugenio Bertelli and Mario Chiariello
J. Imaging 2024, 10(9), 217; https://doi.org/10.3390/jimaging10090217 - 4 Sep 2024
Viewed by 780
Abstract
Consuming an unbalanced diet and being overweight represent a global health problem in young people and adults of both sexes, and may lead to metabolic syndrome. The diet-induced obesity (DIO) model in the C57BL/6J mouse substrain that mimics the gradual weight gain in humans consuming a “Western-type” (WD) diet is of great interest. This study aims to characterize this animal model, using high-frequency ultrasound imaging (HFUS) as a complementary tool to longitudinally monitor changes in the liver, heart and kidney. Long-term WD feeding increased mice body weight (BW), liver/BW ratio and body condition score (BCS), transaminases, glucose and insulin, and caused dyslipidemia and insulin resistance. Echocardiography revealed subtle cardiac remodeling in WD-fed mice, highlighting a significant age–diet interaction for some left ventricular morphofunctional parameters. Qualitative and parametric HFUS analyses of the liver in WD-fed mice showed a progressive increase in echogenicity and echotexture heterogeneity, and equal or higher brightness of the renal cortex. Furthermore, renal circulation was impaired in WD-fed female mice. The ultrasound and histopathological findings were concordant. Overall, HFUS can improve the translational value of preclinical DIO models through an integrated approach with conventional methods, enabling a comprehensive identification of early stages of diseases in vivo and non-invasively, according to the 3Rs. Full article

17 pages, 2270 KiB  
Article
FineTea: A Novel Fine-Grained Action Recognition Video Dataset for Tea Ceremony Actions
by Changwei Ouyang, Yun Yi, Hanli Wang, Jin Zhou and Tao Tian
J. Imaging 2024, 10(9), 216; https://doi.org/10.3390/jimaging10090216 - 31 Aug 2024
Viewed by 708
Abstract
Methods based on deep learning have achieved great success in the field of video action recognition. When these methods are applied to real-world scenarios that require fine-grained analysis of actions, such as tea ceremony performance, limitations may arise. To promote the development of fine-grained action recognition, a fine-grained video action dataset is constructed by collecting videos of tea ceremony actions. This dataset includes 2745 video clips. By using a hierarchical fine-grained action classification approach, these clips are divided into 9 basic action classes and 31 fine-grained action subclasses. To better establish a fine-grained temporal model for tea ceremony actions, a method named TSM-ConvNeXt is proposed that integrates a temporal shift module (TSM) into the high-performance convolutional neural network ConvNeXt. Compared to a baseline method using ResNet50, the experimental performance of TSM-ConvNeXt is improved by 7.31%. Furthermore, compared with the state-of-the-art methods for action recognition on the FineTea and Diving48 datasets, the proposed approach achieves the best experimental results. The FineTea dataset is publicly available. Full article
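A temporal shift module mixes information across frames by moving a fraction of the channels forward and backward along the time axis, at zero extra parameter cost, so that ordinary 2D convolutions can model temporal structure. A minimal sketch on a [time][channel] list layout (the published module operates on batched 4D tensors; the 1/4 fold fraction is a commonly used value, assumed here):

```python
def temporal_shift(clip, fold_div=4):
    """Shift the first C//fold_div channels forward in time, the next
    C//fold_div backward, and leave the rest in place; vacated slots
    are zero-filled. clip is a [T][C] list of per-frame channel values."""
    t, c = len(clip), len(clip[0])
    fold = c // fold_div
    out = [[0.0] * c for _ in range(t)]
    for i in range(t):
        for j in range(c):
            if j < fold:          # forward shift: frame i sees frame i-1
                out[i][j] = clip[i - 1][j] if i > 0 else 0.0
            elif j < 2 * fold:    # backward shift: frame i sees frame i+1
                out[i][j] = clip[i + 1][j] if i < t - 1 else 0.0
            else:                 # untouched channels
                out[i][j] = clip[i][j]
    return out
```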

19 pages, 26310 KiB  
Article
Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach
by Reshma Ahmed Swarna, Muhammad Minoar Hossain, Mst. Rokeya Khatun, Mohammad Motiur Rahman and Arslan Munir
J. Imaging 2024, 10(9), 215; https://doi.org/10.3390/jimaging10090215 - 31 Aug 2024
Viewed by 814
Abstract
Scientific knowledge of image-based crack detection methods is limited in understanding their performance across diverse crack sizes, types, and environmental conditions. Builders and engineers often face difficulties with image resolution, detecting fine cracks, and differentiating between structural and non-structural issues. Enhanced algorithms and analysis techniques are needed for more accurate assessments. Hence, this research aims to generate an intelligent scheme that can recognize the presence of cracks and visualize the percentage of cracks from an image along with an explanation. The proposed method fuses features from concrete surface images through a ResNet-50 convolutional neural network (CNN) and curvelet transform handcrafted (HC) method, optimized by linear discriminant analysis (LDA), and the eXtreme gradient boosting (XGB) classifier then uses these features to recognize cracks. This study evaluates several CNN models, including VGG-16, VGG-19, Inception-V3, and ResNet-50, and various HC techniques, such as wavelet transform, contourlet transform, and curvelet transform for feature extraction. Principal component analysis (PCA) and LDA are assessed for feature optimization. For classification, XGB, random forest (RF), adaptive boosting (AdaBoost), and category boosting (CatBoost) are tested. To isolate and quantify the crack region, this research combines image thresholding, morphological operations, and contour detection with the convex hulls method and forms a novel algorithm. Two explainable AI (XAI) tools, local interpretable model-agnostic explanations (LIME) and gradient-weighted class activation mapping++ (Grad-CAM++), are integrated with the proposed method to enhance result clarity. This research introduces a novel feature fusion approach that enhances crack detection accuracy and interpretability.
The method demonstrates superior performance by achieving 99.93% and 99.69% accuracy on two existing datasets, outperforming state-of-the-art methods. Additionally, the development of an algorithm for isolating and quantifying crack regions represents a significant advancement in image processing for structural analysis. The proposed approach provides a robust and reliable tool for real-time crack detection and assessment in concrete structures, facilitating timely maintenance and improving structural safety. By offering detailed explanations of the model’s decisions, the research addresses the critical need for transparency in AI applications, thus increasing trust and adoption in engineering practice. Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
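The isolation algorithm described above chains image thresholding, morphological operations, and contour detection with convex hulls. The toy sketch below covers only the first two steps plus the crack-percentage computation, on a grayscale image held as a 2D list; the threshold value and the single-pass 4-neighbour dilation are illustrative simplifications, not the paper's parameters:

```python
def threshold(img, t):
    """Binarize a grayscale image (2D list of intensities): crack pixels
    are assumed darker than the threshold t."""
    return [[1 if v < t else 0 for v in row] for row in img]

def dilate(mask):
    """One pass of 4-neighbour binary dilation, a minimal morphological
    operation to close small gaps along a crack."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] or any(
                    0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                out[i][j] = 1
    return out

def crack_percentage(mask):
    """Crack pixels as a percentage of the total image area."""
    total = sum(len(row) for row in mask)
    return 100.0 * sum(map(sum, mask)) / total
```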

4 pages, 147 KiB  
Editorial
Editorial for the Special Issue on “Feature Papers in Section AI in Imaging”
by Antonio Fernández-Caballero
J. Imaging 2024, 10(9), 214; https://doi.org/10.3390/jimaging10090214 - 31 Aug 2024
Viewed by 479
Abstract
Artificial intelligence (AI) techniques are being used by the imaging academia and industry to solve a wide range of previously intractable problems [...] Full article
(This article belongs to the Special Issue Feature Papers in Section AI in Imaging)
12 pages, 4159 KiB  
Article
Longitudinal Imaging of Injured Spinal Cord Myelin and White Matter with 3D Ultrashort Echo Time Magnetization Transfer (UTE-MT) and Diffusion MRI
by Qingbo Tang, Yajun Ma, Qun Cheng, Yuanshan Wu, Junyuan Chen, Jiang Du, Pengzhe Lu and Eric Y. Chang
J. Imaging 2024, 10(9), 213; https://doi.org/10.3390/jimaging10090213 - 30 Aug 2024
Viewed by 534
Abstract
Quantitative MRI techniques could be helpful to noninvasively and longitudinally monitor dynamic changes in spinal cord white matter following injury, but imaging and postprocessing techniques in small animals remain lacking. Unilateral C5 hemisection lesions were created in a rat model, and ultrashort echo time magnetization transfer (UTE-MT) and diffusion-weighted sequences were used for imaging following injury. Magnetization transfer ratio (MTR) measurements were obtained, and preferential diffusion along the longitudinal axis of the spinal cord was calculated as fractional anisotropy or as an apparent diffusion coefficient (ADC) ratio of the longitudinal over the transverse directions. The area of myelinated white matter was obtained by thresholding the spinal cord using mean MTR or diffusion ratio values from the contralesional side of the spinal cord. A decrease in white matter areas was observed on the ipsilesional side caudal to the lesions, which is consistent with known myelin and axonal changes following spinal cord injury. The myelinated white matter area obtained through the UTE-MT technique and the white matter area obtained through diffusion imaging techniques showed better performance in distinguishing evolution after injury (AUCs > 0.94, p < 0.001) than the mean MTR (AUC = 0.74, p = 0.01) or ADC ratio (AUC = 0.68, p = 0.05) values themselves. Immunostaining for myelin basic protein (MBP) and neurofilament protein 200 (NF200) showed atrophy and axonal degeneration, confirming the MRI results. These compositional and microstructural MRI techniques may be used to detect demyelination or remyelination after spinal cord injury. Full article
(This article belongs to the Section Medical Imaging)
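The MTR used above is conventionally defined as (S0 − Ssat)/S0, where S0 and Ssat are the signals without and with the off-resonance saturation pulse, and the white matter area follows from thresholding the MTR map. The sketch below mirrors the paper's masking step in spirit only; the threshold and pixel values are illustrative:

```python
def mtr(s0, s_sat):
    """Magnetization transfer ratio: fractional signal drop caused by the
    off-resonance saturation pulse."""
    return (s0 - s_sat) / s0

def myelinated_area(mtr_map, threshold, pixel_area=1.0):
    """Sum the area of pixels whose MTR meets a threshold (e.g. one derived
    from the contralesional cord), giving a simple white matter mask area."""
    return pixel_area * sum(1 for row in mtr_map for v in row if v >= threshold)
```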

16 pages, 2588 KiB  
Article
Development of a Machine Learning Model for the Classification of Enterobius vermicularis Egg
by Natthanai Chaibutr, Pongphan Pongpanitanont, Sakhone Laymanivong, Tongjit Thanchomnang and Penchom Janwan
J. Imaging 2024, 10(9), 212; https://doi.org/10.3390/jimaging10090212 - 28 Aug 2024
Viewed by 600
Abstract
Enterobius vermicularis (pinworm) infections are a significant global health issue, affecting children predominantly in environments like schools and daycares. Traditional diagnosis using the scotch tape technique involves examining E. vermicularis eggs under a microscope. This method is time-consuming and depends heavily on the examiner’s expertise. To improve this, convolutional neural networks (CNNs) have been used to automate the detection of pinworm eggs from microscopic images. In our study, we enhanced E. vermicularis egg detection using a CNN benchmarked against leading models. We digitized and augmented 40,000 images of E. vermicularis eggs (class 1) and artifacts (class 0) for comprehensive training, using an 80:20 training–validation split and five-fold cross-validation. The proposed CNN model showed limited initial performance but achieved 90.0% accuracy, precision, recall, and F1-score after data augmentation. It also demonstrated improved stability with an ROC-AUC metric increase from 0.77 to 0.97. Despite its smaller file size, our CNN model performed comparably to larger models. Notably, the Xception model achieved 99.0% accuracy, precision, recall, and F1-score. These findings highlight the effectiveness of data augmentation and advanced CNN architectures in improving diagnostic accuracy and efficiency for E. vermicularis infections. Full article
(This article belongs to the Section Image and Video Processing)
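The abstract does not enumerate its augmentation operations; flips and rotations, sketched below for an image stored as a list of rows, are a typical choice for orientation-free microscopy objects such as eggs, and expanding each source image into its eight dihedral variants is one common way such a dataset is grown:

```python
def rotate90(img):
    """Rotate a 2D image (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def flip_h(img):
    """Mirror an image left-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Eight dihedral variants (four rotations, each optionally mirrored),
    a lightweight augmentation for orientation-free microscopy images."""
    out, cur = [], img
    for _ in range(4):
        out.append(cur)
        out.append(flip_h(cur))
        cur = rotate90(cur)
    return out
```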

13 pages, 4234 KiB  
Article
AI Use in Mammography for Diagnosing Metachronous Contralateral Breast Cancer
by Mio Adachi, Tomoyuki Fujioka, Toshiyuki Ishiba, Miyako Nara, Sakiko Maruya, Kumiko Hayashi, Yuichi Kumaki, Emi Yamaga, Leona Katsuta, Du Hao, Mikael Hartman, Feng Mengling, Goshi Oda, Kazunori Kubota and Ukihide Tateishi
J. Imaging 2024, 10(9), 211; https://doi.org/10.3390/jimaging10090211 - 28 Aug 2024
Viewed by 615
Abstract
Although several studies have been conducted on artificial intelligence (AI) use in mammography (MG), there is still a paucity of research on the diagnosis of metachronous bilateral breast cancer (BC), which is typically more challenging to diagnose. This study aimed to determine whether AI could enhance BC detection, achieving earlier or more accurate diagnoses than radiologists in cases of metachronous contralateral BC. We included patients who underwent unilateral BC surgery and subsequently developed contralateral BC. This retrospective study evaluated the capability of the AI-supported MG diagnostic system FxMammo™ (FathomX Pte Ltd., Singapore) to diagnose BC more accurately or earlier than radiologists’ assessments, supplemented by a review of the MG readings made by radiologists. Out of 1101 patients who underwent surgery, 10 who had initially undergone a partial mastectomy and later developed contralateral BC were analyzed. The AI system identified malignancies in six cases (60%), while radiologists identified five cases (50%). Notably, two cases (20%) were diagnosed solely by the AI system. Additionally, for these cases, the AI system had identified malignancies a year before the conventional diagnosis. This study highlights the AI system’s effectiveness in diagnosing metachronous contralateral BC via MG. In some cases, the AI system consistently diagnosed cancer earlier than radiological assessments. Full article
(This article belongs to the Special Issue AI for Visual Perception and Artificial Consciousness)

29 pages, 4861 KiB  
Article
A New Approach for Effective Retrieval of Medical Images: A Step towards Computer-Assisted Diagnosis
by Suchita Sharma and Ashutosh Aggarwal
J. Imaging 2024, 10(9), 210; https://doi.org/10.3390/jimaging10090210 - 26 Aug 2024
Viewed by 617
Abstract
The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patients’ diagnoses. Content-based retrieval of medical images therefore has a very prominent role in fulfilling the ultimate goal of developing automated computer-assisted diagnosis systems. Accordingly, this paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is initially decomposed into three neutrosophic images on its transformation into the neutrosophic domain. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in our experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are observed to be substantially higher than those of the compared ones. Full article
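The scale-wise concatenation of joint histograms, the step by which MsNrRiTxP is assembled from the MsTrP×RiTxP and NrTxP×RiTxP code maps, can be sketched generically. Here the code maps are flat lists of integer pattern codes and the bin counts are hypothetical, not the descriptor's actual dimensions:

```python
def joint_histogram(map_a, map_b, bins_a, bins_b):
    """Joint histogram of two pattern-code maps of equal length: bin (a, b)
    counts pixels coded a by the first descriptor and b by the second,
    flattened row-major into a single vector."""
    hist = [0] * (bins_a * bins_b)
    for a, b in zip(map_a, map_b):
        hist[a * bins_b + b] += 1
    return hist

def concat_scales(histograms):
    """Scale-wise concatenation of per-scale joint histograms into one
    feature vector."""
    return [v for h in histograms for v in h]
```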

11 pages, 1590 KiB  
Technical Note
Ex Vivo Simultaneous H₂¹⁵O Positron Emission Tomography and Magnetic Resonance Imaging of Porcine Kidneys—A Feasibility Study
by Maibritt Meldgaard Arildsen, Christian Østergaard Mariager, Christoffer Vase Overgaard, Thomas Vorre, Martin Bøjesen, Niels Moeslund, Aage Kristian Olsen Alstrup, Lars Poulsen Tolbod, Mikkel Holm Vendelbo, Steffen Ringgaard, Michael Pedersen and Niels Henrik Buus
J. Imaging 2024, 10(9), 209; https://doi.org/10.3390/jimaging10090209 - 25 Aug 2024
Viewed by 716
Abstract
The aim was to establish combined H₂¹⁵O PET/MRI during ex vivo normothermic machine perfusion (NMP) of isolated porcine kidneys. We examined whether changes in renal arterial blood flow (RABF) are accompanied by changes of a similar magnitude in renal blood perfusion (RBP), as well as the relation between RBP and renal parenchymal oxygenation (RPO). Methods: Pig kidneys (n = 7) were connected to an NMP circuit. PET/MRI was performed at two different pump flow levels: a blood-oxygenation-level-dependent (BOLD) MRI sequence was acquired simultaneously with an H₂¹⁵O PET sequence for determination of RBP. Results: RBP was measured using H₂¹⁵O PET in all kidneys (flow 1: 0.42–0.76 mL/min/g; flow 2: 0.7–1.6 mL/min/g). We found a linear correlation between changes in delivered blood flow from the perfusion pump and changes in the measured RBP using PET imaging (r² = 0.87). Conclusion: Our study demonstrated the feasibility of combined H₂¹⁵O PET/MRI during NMP of isolated porcine kidneys, with tissue oxygenation being stable over time. The introduction of H₂¹⁵O PET/MRI in nephrological research could be highly relevant for future pre-transplant kidney evaluation and as a tool for studying renal physiology in healthy and diseased kidneys. Full article
(This article belongs to the Section Medical Imaging)
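The reported linear correlation between pump flow changes and PET-measured RBP changes (r² = 0.87) is an ordinary least-squares fit. A pure-Python sketch (the test values are illustrative, not the study's data):

```python
from statistics import mean

def linfit(x, y):
    """Ordinary least-squares line y = intercept + slope * x, returned with
    the coefficient of determination r^2, as used to relate delivered pump
    flow to PET-measured perfusion."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, 1.0 - ss_res / ss_tot
```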
