J. Imaging, Volume 10, Issue 10 (October 2024) – 23 articles

Cover Story: Vertebral compression fractures (VCFs) affect 1.4 million patients every year, especially among older populations, leading to increased morbidity and mortality. VCFs are therefore a significant public health concern. Imaging modalities, including radiographs, CT, MRI, PET studies, and bone scans, play crucial roles in the diagnosis and management of VCFs. They highlight fracture severity, classification, associated soft tissue injuries, and underlying pathologies, ultimately guiding treatment decisions and predicting long-term outcomes. This article explores the important role of radiology in illuminating the anatomy, pathophysiology, classification, diagnosis, and treatment of patients with VCFs. Advancements in imaging pave the way for more effective management of VCFs.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 4066 KiB  
Article
A Specialized Pipeline for Efficient and Reliable 3D Semantic Model Reconstruction of Buildings from Indoor Point Clouds
by Cedrique Fotsing, Willy Carlos Tchuitcheu, Lemopi Isidore Besong, Douglas William Cunningham and Christophe Bobda
J. Imaging 2024, 10(10), 261; https://doi.org/10.3390/jimaging10100261 - 19 Oct 2024
Viewed by 588
Abstract
Recent advances in laser scanning systems have enabled the acquisition of 3D point cloud representations of scenes, revolutionizing the fields of Architecture, Engineering, and Construction (AEC). This paper presents a novel pipeline for the automatic generation of 3D semantic models of multi-level buildings from indoor point clouds. The architectural components are extracted hierarchically. After segmenting the point clouds into potential building floors, a wall detection process is performed on each floor segment. Then, room, ground, and ceiling extraction is conducted using the 2D constellation of walls obtained by projecting the walls onto the ground plan. The identification of openings in the walls is performed using a deep learning-based classifier that separates doors and windows from non-consistent holes. Based on the geometric and semantic information from previously detected elements, the final model is generated in IFC format. The effectiveness and reliability of the proposed pipeline are demonstrated through extensive experiments and visual inspections. The results reveal high precision and recall values in the extraction of architectural elements, ensuring the fidelity of the generated models. In addition, the pipeline’s efficiency and accuracy offer valuable contributions to future advancements in point cloud processing. Full article
(This article belongs to the Special Issue Recent Advancements in 3D Imaging)
10 pages, 2530 KiB  
Communication
Quantitative Comparison of Color-Coded Parametric Imaging Technologies Based on Digital Subtraction and Digital Variance Angiography: A Retrospective Observational Study
by István Góg, Péter Sótonyi, Balázs Nemes, János P. Kiss, Krisztián Szigeti, Szabolcs Osváth and Marcell Gyánó
J. Imaging 2024, 10(10), 260; https://doi.org/10.3390/jimaging10100260 - 18 Oct 2024
Viewed by 618
Abstract
The evaluation of hemodynamic conditions in critical limb-threatening ischemia (CLTI) patients is essential in endovascular interventions. In this study, the performance of color-coded digital subtraction angiography (ccDSA) and the recently developed color-coded digital variance angiography (ccDVA) was compared in the assessment of key time parameters in lower extremity interventions. The observational study included 19 CLTI patients who underwent peripheral vascular intervention at our institution in 2020. Pre- and post-dilatational images were retrospectively processed and analyzed using commercially available ccDSA software (Kinepict Medical Imaging Tool 6.0.3; Kinepict Health Ltd., Budapest, Hungary) and the recently developed ccDVA technology. Two protocols were applied, with acquisition rates of 4 and 7.5 frames per second. Time-to-peak (TTP) parameters were determined in four pre- and poststenotic regions of interest (ROIs), and ccDVA values were compared to ccDSA read-outs. The ccDVA technology provided practically the same TTP values as ccDSA (r = 0.99, R2 = 0.98, p < 0.0001). The correlation was extremely high independently of the applied protocol or the position of the ROI; the r value was 0.99 (R2 = 0.98, p < 0.0001) in all groups. A similar correlation was observed in the change in passage time (r = 0.98, R2 = 0.96, p < 0.0001). The color-coded DVA technology can reproduce the same hemodynamic data as commercially available DSA-based software; therefore, it has the potential to be an alternative decision-supporting tool in catheter labs. Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
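The headline agreement figures in this abstract (r = 0.99, R² = 0.98) are plain Pearson statistics. For readers who want to reproduce this kind of tool-to-tool comparison, here is a minimal sketch; the paired TTP read-outs below are made up for illustration, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired TTP read-outs (seconds) for the same ROIs from two tools.
ttp_ccdsa = [1.8, 2.4, 3.1, 2.9, 4.2, 3.7]
ttp_ccdva = [1.7, 2.5, 3.0, 2.9, 4.3, 3.6]
r = pearson_r(ttp_ccdsa, ttp_ccdva)
r_squared = r * r
```

A high r with a slope near 1 on the paired read-outs is what supports the "practically the same TTP values" claim.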
17 pages, 2685 KiB  
Article
Investigating the Sim-to-Real Generalizability of Deep Learning Object Detection Models
by Joachim Rüter, Umut Durak and Johann C. Dauer
J. Imaging 2024, 10(10), 259; https://doi.org/10.3390/jimaging10100259 - 18 Oct 2024
Viewed by 774
Abstract
State-of-the-art object detection models need large and diverse datasets for training. As these are hard to acquire for many practical applications, training images from simulation environments are gaining more and more attention. A problem arises because deep learning models trained on simulation images usually struggle to generalize to real-world images, as shown by a sharp performance drop, and the definitive causes of this drop have not yet been identified. While previous work mostly investigated the influence of the data as well as the use of domain adaptation, this work provides a novel perspective by investigating the influence of the object detection model itself. Against this background, first, a corresponding measure called sim-to-real generalizability is defined, comprising the capability of an object detection model to generalize from simulation training images to real-world evaluation images. Second, 12 different deep learning-based object detection models are trained and their sim-to-real generalizability is evaluated. The models are trained with a variation of hyperparameters, resulting in a total of 144 trained and evaluated versions. The results show a clear influence of the feature extractor and offer further insights and correlations. They open up future research on investigating influences on the sim-to-real generalizability of deep learning-based object detection models as well as on developing feature extractors with better sim-to-real generalizability. Full article
(This article belongs to the Special Issue Recent Trends in Computer Vision with Neural Networks)
13 pages, 3427 KiB  
Article
Design and Use of a Custom Phantom for Regular Tests of Radiography Apparatus: A Feasibility Study
by Nikolay Dukov, Vanessa-Mery Valkova, Mariana Yordanova, Virginia Tsapaki and Kristina Bliznakova
J. Imaging 2024, 10(10), 258; https://doi.org/10.3390/jimaging10100258 - 18 Oct 2024
Viewed by 470
Abstract
This study investigates the feasibility of employing an in-house-developed physical phantom dedicated to the weekly quality control testing of radiographic systems, performed by radiographers. For this purpose, a 3D phantom was fabricated, featuring test objects, including a model representing a lesion. Alongside this phantom, a commercial phantom, specifically IBA’s Primus L, was utilized. Weekly imaging of both phantoms was conducted over a span of four weeks, involving different imaging protocols and anode voltages. Subsequently, the obtained data underwent visual evaluation, as well as measurement of the intensity of selected regions of interest. The average values for three incident kilovoltages remained consistently stable over the four weeks, with the exception of the “low energy” case, which exhibited variability during the first week of measurements. Following the experiments in “Week 1”, the X-ray unit was identified as malfunctioning and underwent necessary repairs. The in-house-developed phantom demonstrated its utility in assessing the performance of the X-ray system. Full article
(This article belongs to the Section Medical Imaging)
13 pages, 2395 KiB  
Article
Differentiation of Benign and Malignant Neck Neoplastic Lesions Using Diffusion-Weighted Magnetic Resonance Imaging
by Omneya Gamaleldin, Giannicola Iannella, Luca Cavalcanti, Salaheldin Desouky, Sherif Shama, Amel Gamaleldin, Yasmine Elwany, Giuseppe Magliulo, Antonio Greco, Annalisa Pace, Armando De Virgilio, Antonino Maniaci, Salvatore Lavalle, Daniela Messineo and Ahmed Bahgat
J. Imaging 2024, 10(10), 257; https://doi.org/10.3390/jimaging10100257 - 18 Oct 2024
Viewed by 489
Abstract
The most difficult diagnostic challenge in neck imaging is the differentiation between benign and malignant neoplasms. The purpose of this work was to study the role of the ADC (apparent diffusion coefficient) value in discriminating benign from malignant neck neoplastic lesions. The study was conducted on 53 patients with different neck pathologies (35 malignant and 18 benign/inflammatory). In all of the subjects, conventional MRI (magnetic resonance imaging) sequences were performed in addition to DWI (diffusion-weighted imaging). The mean ADC values in the benign and malignant groups were compared using the Mann–Whitney test. The ADCs of malignant lesions (mean 0.86 ± 0.28) were significantly lower than those of benign lesions (mean 1.43 ± 0.57), and the mean ADC values of inflammatory lesions (1.19 ± 0.75) were significantly lower than those of benign lesions. The cutoff value of 1.1 mm²/s effectively differentiated benign and malignant lesions with 97.14% sensitivity, 77.78% specificity, and 86.2% accuracy. There were also statistically significant differences between the ADC values of different malignant tumors of the neck (p < 0.001). NHL (non-Hodgkin lymphoma; 0.59 ± 0.09) showed significantly lower ADC values than SCC (squamous cell carcinoma; 0.93 ± 0.15). An ADC cutoff point of 0.7 mm²/s was the best for differentiating NHL from SCC; it provided 100.0% sensitivity and 89.47% specificity. ADC mapping may be an effective MRI tool for the differentiation of benign and inflammatory lesions from malignant tumors in the neck. Full article
(This article belongs to the Special Issue Advances in Head and Neck Imaging)
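The reported sensitivity and specificity follow directly from thresholding ADC values at the chosen cutoff (malignant when ADC falls below it). A minimal sketch with synthetic ADC values and labels; the clinical data are not reproduced here:

```python
def sens_spec_at_cutoff(adc_values, labels, cutoff):
    """Classify a lesion as malignant when its ADC is below the cutoff
    (labels: 1 = malignant, 0 = benign); return (sensitivity, specificity)."""
    tp = sum(1 for a, y in zip(adc_values, labels) if a < cutoff and y == 1)
    fn = sum(1 for a, y in zip(adc_values, labels) if a >= cutoff and y == 1)
    tn = sum(1 for a, y in zip(adc_values, labels) if a >= cutoff and y == 0)
    fp = sum(1 for a, y in zip(adc_values, labels) if a < cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative (invented) ADC values and benign/malignant labels.
adc = [0.55, 0.80, 0.95, 1.05, 1.25, 1.60, 1.35]
lab = [1, 1, 1, 1, 0, 0, 0]
sens, spec = sens_spec_at_cutoff(adc, lab, cutoff=1.1)
```

Sweeping the cutoff over the observed ADC range and picking the best trade-off is how such an optimal threshold is typically selected (ROC analysis).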
20 pages, 2032 KiB  
Article
CSA-Net: Channel and Spatial Attention-Based Network for Mammogram and Ultrasound Image Classification
by Osama Bin Naeem and Yasir Saleem
J. Imaging 2024, 10(10), 256; https://doi.org/10.3390/jimaging10100256 - 16 Oct 2024
Viewed by 725
Abstract
Breast cancer persists as a critical global health concern, underscoring the need for reliable diagnostic strategies to improve patient survival rates. To address this challenge, a computer-aided diagnostic methodology for breast cancer classification is proposed. An architecture that incorporates a pre-trained EfficientNet-B0 model along with channel and spatial attention mechanisms is employed. The efficiency of leveraging attention mechanisms for breast cancer classification is investigated here. The proposed model demonstrates commendable performance in classification tasks, showing significant improvements upon integrating attention mechanisms. The model is also versatile across imaging modalities, as demonstrated by its robust performance in classifying breast lesions not only in mammograms but also in ultrasound images during cross-modality evaluation. It achieved an accuracy of 99.9% for binary classification on the mammogram dataset and 92.3% accuracy on the cross-modality multi-class dataset. The experimental results emphasize the superiority of the proposed method over current state-of-the-art approaches for breast cancer classification. Full article
(This article belongs to the Section Medical Imaging)
31 pages, 4535 KiB  
Article
Prediction of Attention Groups and Big Five Personality Traits from Gaze Features Collected from an Outlier Search Game
by Rachid Rhyad Saboundji, Kinga Bettina Faragó and Violetta Firyaridi
J. Imaging 2024, 10(10), 255; https://doi.org/10.3390/jimaging10100255 - 16 Oct 2024
Viewed by 530
Abstract
This study explores the intersection of personality, attention and task performance in traditional 2D and immersive virtual reality (VR) environments. A visual search task was developed that required participants to find anomalous images embedded in normal background images in 3D space. Experiments were conducted with 30 subjects who performed the task in 2D and VR environments while their eye movements were tracked. Following an exploratory correlation analysis, we applied machine learning techniques to investigate the predictive power of gaze features on human data derived from different data collection methods. Our proposed methodology consists of a pipeline of steps for extracting fixation and saccade features from raw gaze data and training machine learning models to classify the Big Five personality traits and attention-related processing speed/accuracy levels computed from the Group Bourdon test. The models achieved above-chance predictive performance in both 2D and VR settings despite visually complex 3D stimuli. We also explored further relationships between task performance, personality traits and attention characteristics. Full article
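The fixation extraction step in gaze pipelines like this one is commonly implemented with the dispersion-threshold (I-DT) algorithm. The paper's exact feature pipeline is not specified in the abstract, so this is a generic sketch with a synthetic gaze trace:

```python
def dispersion(window):
    """Spread of a set of gaze points: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_disp, min_samples):
    """Dispersion-threshold identification (I-DT): grow a window while its
    dispersion stays under max_disp; windows of at least min_samples points
    are reported as fixations, returned as (start, end) index pairs."""
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_disp:
            while j < n and dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # window too dispersed: likely a saccade, slide forward
    return fixations

# Synthetic trace: a fixation near (0, 0), a saccade, a fixation near (10, 10).
trace = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (0.1, 0.1), (0.0, 0.1),
         (10.0, 10.1), (10.1, 10.0), (10.0, 10.0), (10.1, 10.1), (10.0, 10.1)]
fixes = idt_fixations(trace, max_disp=1.0, min_samples=3)
```

Fixation counts and durations derived this way (and the saccades between them) are typical inputs for the kind of trait classifiers the study trains.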
25 pages, 830 KiB  
Review
Current Status and Challenges and Future Trends of Deep Learning-Based Intrusion Detection Models
by Yuqiang Wu, Bailin Zou and Yifei Cao
J. Imaging 2024, 10(10), 254; https://doi.org/10.3390/jimaging10100254 - 14 Oct 2024
Viewed by 1186
Abstract
With the advancement of deep learning (DL) technology, DL-based intrusion detection models have emerged as a focal point of research within the domain of cybersecurity. This paper first presents an overview of the datasets widely utilized in this research area, establishing a basis for future investigation and analysis. It then summarizes the prevalent data preprocessing methods and feature engineering techniques used in intrusion detection. Following this, it reviews seven deep learning-based intrusion detection models, namely, deep autoencoders, deep belief networks, deep neural networks, convolutional neural networks, recurrent neural networks, generative adversarial networks, and transformers. Each model is examined from various dimensions, highlighting its unique architecture and applications within the context of cybersecurity. Furthermore, this paper broadens its scope to include intrusion detection techniques facilitated by two families of large-scale pretrained models: the BERT series and the GPT series. These models, leveraging the power of transformers and attention mechanisms, have demonstrated remarkable capabilities in understanding and processing sequential data. In light of these findings, this paper concludes with a prospective outlook on future research directions, identifying four key areas for further research. By addressing these issues and advancing research in these areas, this paper envisions a future in which DL-based intrusion detection systems are not only more accurate and efficient but also better aligned with the dynamic and evolving landscape of cybersecurity threats. Full article
19 pages, 3486 KiB  
Article
Clinician and Visitor Activity Patterns in an Intensive Care Unit Room: A Study to Examine How Ambient Monitoring Can Inform the Measurement of Delirium Severity and Escalation of Care
by Keivan Nalaie, Vitaly Herasevich, Laura M. Heier, Brian W. Pickering, Daniel Diedrich and Heidi Lindroth
J. Imaging 2024, 10(10), 253; https://doi.org/10.3390/jimaging10100253 - 14 Oct 2024
Viewed by 612
Abstract
The early detection of acute deterioration and escalating illness severity is crucial for effective patient management and can significantly impact patient outcomes. Ambient sensing technology, such as computer vision, may provide real-time information that could support early recognition and response. This study aimed to develop a computer vision model to quantify the number and type (clinician vs. visitor) of people in an intensive care unit (ICU) room, study the trajectory of their movement, and preliminarily explore its relationship with delirium as a marker of illness severity. To quantify the number of people present, we implemented a counting-by-detection supervised strategy using images from ICU rooms. This was accomplished by developing three methods: single-frame, multi-frame, and tracking-to-count. We then explored how the type of person and distribution in the room corresponded to the presence of delirium. Our designed pipeline was tested with several different detection models. We report model performance statistics and preliminary insights into the relationship between the number and type of persons in the ICU room and delirium. We evaluated our method against other approaches, including density estimation, counting-by-detection, and regression methods, and assessed their adaptability to ICU environments. Full article
23 pages, 4511 KiB  
Review
Image Analysis in Histopathology and Cytopathology: From Early Days to Current Perspectives
by Tibor Mezei, Melinda Kolcsár, András Joó and Simona Gurzu
J. Imaging 2024, 10(10), 252; https://doi.org/10.3390/jimaging10100252 - 14 Oct 2024
Viewed by 1017
Abstract
Both pathology and cytopathology still rely on recognizing microscopical morphologic features, and image analysis plays a crucial role, enabling the identification, categorization, and characterization of different tissue types, cell populations, and disease states within microscopic images. Historically, manual methods have been the primary approach, relying on the expert knowledge and experience of pathologists to interpret microscopic tissue samples. Early image analysis methods were often constrained by computational power and the complexity of biological samples. The advent of computers and digital imaging technologies challenged the exclusivity of the human eye and brain in diagnostic interpretation, transforming the diagnostic process in these fields. The increasing digitization of pathological images has led to the application of more objective and efficient computer-aided analysis techniques. Significant advancements were brought about by the integration of digital pathology, machine learning, and advanced imaging technologies. The continuous progress in machine learning and the increasing availability of digital pathology data offer exciting opportunities for the future. Furthermore, artificial intelligence has revolutionized this field, enabling predictive models that assist in diagnostic decision making. The future of pathology and cytopathology is predicted to be marked by advancements in computer-aided image analysis; the growing body of digital pathology data will invariably lead to enhanced diagnostic accuracy and improved prognostic predictions that shape personalized treatment strategies, ultimately leading to better patient outcomes. Full article
(This article belongs to the Special Issue New Perspectives in Medical Image Analysis)
18 pages, 7006 KiB  
Article
Searching Method for Three-Dimensional Puncture Route to Support Computed Tomography-Guided Percutaneous Puncture
by Yusuke Gotoh, Aoi Takeda, Koji Masui, Koji Sakai and Manato Fujimoto
J. Imaging 2024, 10(10), 251; https://doi.org/10.3390/jimaging10100251 - 14 Oct 2024
Viewed by 630
Abstract
In CT-guided percutaneous punctures—an image-guided puncture method using CT images—physicians treat targets such as lung tumors, liver tumors, renal tumors, and intervertebral abscesses by inserting a puncture needle into the body from the exterior while viewing images. By examining two-dimensional CT images prior to a procedure, a physician determines the least invasive puncture route for the patient; the candidate puncture route is therefore limited to a two-dimensional region along the cross section of the human body. In this paper, we aim to construct a three-dimensional puncture space based on multiple two-dimensional CT images to search for a safer and shorter puncture route for a given patient. If all puncture routes starting from a target in the three-dimensional space were examined from all directions (the brute-force method), the processing time to derive the puncture route would be very long. We propose a more efficient method for three-dimensional puncture route selection in CT-guided percutaneous punctures. The proposed method extends the ray-tracing method, which quickly derives a line segment from a given start point to an end point on a two-dimensional plane, to three-dimensional space. During actual puncture route selection, a physician can use CT images to derive a three-dimensional puncture route that is safe for the patient and minimizes the puncture time. The main novelty is a method that derives a three-dimensional puncture route within the time allowed in an actual puncture; the main goal is for physicians to select the route they will use in the actual surgery from among the multiple candidate routes derived by the proposed method.
Physicians can use the proposed method to derive a new puncture route, reducing the burden on patients and improving physician skills. In the evaluation results of a computer simulation, for a 3D CT image created by combining 170 two-dimensional CT images, the processing time for deriving the puncture route using the proposed method was approximately 59.4 s. The shortest length of the puncture route from the starting point to the target was between 20 mm and 22 mm. The search time for a three-dimensional human body consisting of 15 CT images was 4.77 s for the proposed method and 2599.0 s for a brute-force method. In a questionnaire, physicians who actually perform puncture treatments evaluated the candidate puncture routes derived by the proposed method. We confirmed that physicians could actually use these candidates as a puncture route. Full article
(This article belongs to the Section Medical Imaging)
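To make the brute-force baseline discussed in this abstract concrete, here is a heavily simplified sketch: a hypothetical voxel grid marks critical structures, straight-line routes from boundary (skin) voxels to the target are checked by sampling, and the shortest safe route wins. This is the slow search the paper's ray-tracing extension accelerates, not the proposed method itself:

```python
from itertools import product
from math import dist

def route_is_safe(grid, start, target, steps=200):
    """Sample points along the straight segment start -> target and reject the
    route if any sampled voxel is marked critical (1)."""
    for k in range(steps + 1):
        t = k / steps
        x, y, z = (round(s + t * (e - s)) for s, e in zip(start, target))
        if grid[x][y][z] == 1:
            return False
    return True

def shortest_safe_route(grid, target):
    """Brute-force baseline: try every voxel on the volume boundary as a skin
    entry point and keep the shortest straight route avoiding critical voxels."""
    n = len(grid)
    best = None
    for entry in product(range(n), repeat=3):
        if not any(c in (0, n - 1) for c in entry):
            continue  # only boundary voxels can serve as entry points
        if route_is_safe(grid, entry, target):
            d = dist(entry, target)
            if best is None or d < best[0]:
                best = (d, entry)
    return best

grid = [[[0] * 5 for _ in range(5)] for _ in range(5)]  # tiny 5x5x5 volume
grid[1][2][2] = 1  # one critical voxel blocking the path from (0, 2, 2)
best = shortest_safe_route(grid, target=(2, 2, 2))
```

Even on this toy volume the entry-point loop is cubic in the grid size, which illustrates why the brute-force search in the paper took minutes while the proposed method took seconds.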
19 pages, 714 KiB  
Article
Enhanced COVID-19 Detection from X-ray Images with Convolutional Neural Network and Transfer Learning
by Qanita Bani Baker, Mahmoud Hammad, Mohammed Al-Smadi, Heba Al-Jarrah, Rahaf Al-Hamouri and Sa’ad A. Al-Zboon
J. Imaging 2024, 10(10), 250; https://doi.org/10.3390/jimaging10100250 - 13 Oct 2024
Viewed by 978
Abstract
The global spread of Coronavirus (COVID-19) has prompted urgent research into scalable and effective detection methods to curb its outbreak. The early diagnosis of COVID-19 patients has emerged as a pivotal strategy in mitigating the spread of the disease. Automated COVID-19 detection using Chest X-ray (CXR) imaging has significant potential for facilitating large-scale screening and epidemic control efforts. This paper introduces a novel approach that employs state-of-the-art Convolutional Neural Network models (CNNs) for accurate COVID-19 detection. The employed datasets each comprised 15,000 X-ray images. We addressed both binary (Normal vs. Abnormal) and multi-class (Normal, COVID-19, Pneumonia) classification tasks. Comprehensive evaluations were performed by utilizing six distinct CNN-based models (Xception, Inception-V3, ResNet50, VGG19, DenseNet201, and InceptionResNet-V2) for both tasks. As a result, the Xception model demonstrated exceptional performance, achieving 98.13% accuracy, 98.14% precision, 97.65% recall, and a 97.89% F1-score in binary classification, while in multi-class classification it yielded 87.73% accuracy, 90.20% precision, 87.73% recall, and an 87.49% F1-score. Moreover, the other utilized models, such as ResNet50, demonstrated competitive performance compared with many recent works. Full article
(This article belongs to the Section Medical Imaging)
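The accuracy, precision, recall, and F1-score figures quoted above are standard confusion-matrix metrics. A minimal sketch for the binary (Normal vs. Abnormal) case, using toy labels rather than the paper's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary task
    (1 = abnormal, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Toy ground truth and predictions (invented).
acc, prec, rec, f1 = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

For the three-class task the same per-class counts are typically averaged (macro or weighted) to give the single reported figures.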
24 pages, 2190 KiB  
Article
Variable Splitting and Fusing for Image Phase Retrieval
by Petros Nyfantis, Pablo Ruiz Mataran, Hector Nistazakis, George Tombras and Aggelos K. Katsaggelos
J. Imaging 2024, 10(10), 249; https://doi.org/10.3390/jimaging10100249 - 12 Oct 2024
Viewed by 584
Abstract
Phase Retrieval is defined as the recovery of a signal when only the intensity of its Fourier Transform is known. It is a non-linear and non-convex optimization problem with a multitude of applications, including X-ray crystallography, microscopy, and blind deconvolution. In this study, we address the problem of Phase Retrieval from the perspective of variable splitting and alternating minimization for real signals and seek to develop algorithms with improved convergence properties. An exploration of the underlying geometric relations led to the conceptualization of an algorithmic step that refines the estimate at each iteration via recombination of the separated variables. Following this, a theoretical analysis was developed to study the convergence properties of the proposed method and justify the inclusion of the recombination step. Our experiments showed that the proposed method converges substantially faster than other state-of-the-art analytical methods while demonstrating equivalent or superior performance in terms of quality of reconstruction and ability to converge under various setups. Full article
(This article belongs to the Section Image and Video Processing)
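The authors' recombination-based algorithm is not reproduced here, but the classic alternating-minimization baseline this line of work builds on (Error Reduction, in the Gerchberg–Saxton style) can be sketched in a few lines. The signal and magnitudes below are invented for illustration:

```python
import numpy as np

def error_reduction(mag, n_iter=200, seed=0):
    """Classic Error-Reduction phase retrieval for a real, non-negative 1-D
    signal: alternately impose the measured Fourier magnitudes and the
    object-domain constraint (real and non-negative)."""
    rng = np.random.default_rng(seed)
    x = rng.random(mag.shape)                 # random real initial estimate
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))    # keep phase, impose magnitudes
        x = np.real(np.fft.ifft(X))           # return to the object domain
        x = np.clip(x, 0.0, None)             # enforce real, non-negative signal
    return x

# Hypothetical ground truth; only its Fourier magnitudes are assumed known.
truth = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 1.0, 0.0])
mag = np.abs(np.fft.fft(truth))
estimate = error_reduction(mag)
residual = np.linalg.norm(np.abs(np.fft.fft(estimate)) - mag)
```

The magnitude-domain error of Error Reduction is non-increasing across iterations, but the method is known to stagnate, which is the kind of behavior the splitting-and-recombination step in the paper targets.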
14 pages, 2445 KiB  
Article
Enhanced Self-Checkout System for Retail Based on Improved YOLOv10
by Lianghao Tan, Shubing Liu, Jing Gao, Xiaoyi Liu, Linyue Chu and Huangqi Jiang
J. Imaging 2024, 10(10), 248; https://doi.org/10.3390/jimaging10100248 - 10 Oct 2024
Viewed by 1060
Abstract
With the rapid advancement of deep learning technologies, computer vision has shown immense potential in retail automation. This paper presents a novel self-checkout system for retail based on an improved YOLOv10 network, aimed at enhancing checkout efficiency and reducing labor costs. We propose targeted optimizations for the YOLOv10 model, incorporating the detection head structure from YOLOv8, which significantly improves product recognition accuracy. Additionally, we develop a post-processing algorithm tailored to self-checkout scenarios to further improve the system's practical applicability. Experimental results demonstrate that our system outperforms existing methods in both product recognition accuracy and checkout speed. This research not only provides a new technical solution for retail automation but also offers valuable insights into optimizing deep learning models for real-world applications. Full article
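The paper's tailored post-processing algorithm is not detailed in the abstract. As a generic illustration of the role such a step plays, here is a sketch that turns hypothetical detector output into an itemized bill; the product names, prices, and confidence threshold are all invented:

```python
from collections import Counter

# Hypothetical product catalog; a real system would query a price database.
PRICES = {"apple": 0.50, "milk": 1.20, "bread": 2.10}

def itemize(detections, conf_threshold=0.5):
    """Turn raw detector output [(class_name, confidence), ...] into an
    itemized bill: drop low-confidence boxes, count the remaining items per
    product class, and total the price."""
    kept = [cls for cls, conf in detections if conf >= conf_threshold]
    counts = Counter(kept)
    total = sum(PRICES[cls] * n for cls, n in counts.items())
    return counts, round(total, 2)

# One checkout frame's detections (synthetic): the low-confidence box is dropped.
counts, total = itemize([("apple", 0.92), ("apple", 0.81),
                         ("milk", 0.95), ("bread", 0.30)])
```

In practice such post-processing also has to deduplicate overlapping boxes of the same item across frames, which is where checkout-specific logic matters most.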

9 pages, 1762 KiB  
Communication
Addressing Once More the (Im)possibility of Color Reconstruction in Underwater Images
by Yuri Rzhanov and Kim Lowell
J. Imaging 2024, 10(10), 247; https://doi.org/10.3390/jimaging10100247 - 8 Oct 2024
Viewed by 570
Abstract
Color is an important cue in object recognition and classification problems. In underwater imagery, colors undergo strong distortion due to light propagation through an absorbing and scattering medium. Distortions depend on a number of complex phenomena, the most important being wavelength-dependent absorption and the sensitivity of sensors in trichromatic cameras. It has been shown previously that unique reconstruction in this case is not possible, at least for a simplified image formation model. In this paper, the authors use numerical simulations to demonstrate that this statement also holds for the most sophisticated underwater image-formation model currently available. Full article
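The kind of non-uniqueness at issue can be seen even in a simplified per-channel attenuation-plus-backscatter model: when attenuation enters only through the product of coefficient and distance, distinct scene parameters yield identical observations. The toy sketch below illustrates this ambiguity under that simplifying assumption; it is not the sophisticated model the paper simulates:

```python
import numpy as np

# Simplified per-channel underwater image formation:
#   I = J * exp(-c * z) + B_inf * (1 - exp(-c * z))
# where J is the true color, c the attenuation coefficient, z the range,
# and B_inf the veiling (backscatter) color. The observation depends on c
# and z only through the product c*z, so (c, z) and (2c, z/2) are
# indistinguishable, and J cannot be uniquely recovered from I alone.

def observe(J, c, z, B_inf):
    t = np.exp(-c * z)
    return J * t + B_inf * (1.0 - t)

J = np.array([0.8, 0.5, 0.2])     # true surface color (R, G, B)
B = np.array([0.1, 0.3, 0.5])     # backscatter color
c = np.array([0.40, 0.15, 0.08])  # per-channel attenuation coefficients

I1 = observe(J, c, z=5.0, B_inf=B)
I2 = observe(J, 2.0 * c, z=2.5, B_inf=B)  # same c*z product
# I1 and I2 are identical observations from different scene parameters
```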
(This article belongs to the Special Issue Underwater Imaging (2nd Edition))

15 pages, 15447 KiB  
Article
Deep Learning for Generating Time-of-Flight Camera Artifacts
by Tobias Müller, Tobias Schmähling, Stefan Elser and Jörg Eberhardt
J. Imaging 2024, 10(10), 246; https://doi.org/10.3390/jimaging10100246 - 8 Oct 2024
Viewed by 585
Abstract
Time-of-Flight (ToF) cameras are subject to high levels of noise and errors due to Multi-Path Interference (MPI). To correct these errors, algorithms and neural networks require training data. However, the limited availability of real data has led to the use of physically simulated data, which often involves simplifications and computational constraints. The simulation of such sensors is an essential building block for hardware design and application development. Therefore, the simulation data must capture the major sensor characteristics. This work presents a learning-based approach that leverages high-quality laser scan data to generate realistic ToF camera data. The proposed method employs MCW-Net (Multi-Level Connection and Wide Regional Non-Local Block Network) for domain transfer, transforming laser scan data into the ToF camera domain. Different training variations are explored using a real-world dataset. Additionally, a noise model is introduced to compensate for the lack of noise in the initial step. The effectiveness of the method is evaluated on reference scenes to enable quantitative comparison with physically simulated data. Full article

23 pages, 2595 KiB  
Article
Joint Image Processing with Learning-Driven Data Representation and Model Behavior for Non-Intrusive Anemia Diagnosis in Pediatric Patients
by Tarek Berghout
J. Imaging 2024, 10(10), 245; https://doi.org/10.3390/jimaging10100245 - 2 Oct 2024
Cited by 1 | Viewed by 895
Abstract
Anemia diagnosis is crucial for pediatric patients due to its impact on growth and development. Traditional methods, like blood tests, are effective but pose challenges, such as discomfort, infection risk, and frequent monitoring difficulties, underscoring the need for non-intrusive diagnostic methods. In light of this, this study proposes a novel method that combines image processing with learning-driven data representation and model behavior for non-intrusive anemia diagnosis in pediatric patients. The contributions of this study are threefold. First, it uses an image-processing pipeline to extract 181 features from 13 categories, with a feature-selection process identifying the most crucial data for learning. Second, a deep multilayered network based on long short-term memory (LSTM) is utilized to train a model for classifying images into anemic and non-anemic cases, where hyperparameters are optimized using Bayesian approaches. Third, the trained LSTM model is integrated as a layer into a learning model developed based on recurrent expansion rules, forming a part of a new deep network called a recurrent expansion network (RexNet). RexNet is designed to learn data representations akin to traditional deep-learning methods while also understanding the interaction between dependent and independent variables. The proposed approach is applied to three public datasets, namely conjunctival eye images, palmar images, and fingernail images of children aged up to 6 years. RexNet achieves an overall evaluation of 99.83 ± 0.02% across all classification metrics, demonstrating significant improvements in diagnostic results and generalization compared to LSTM networks and existing methods. This highlights RexNet’s potential as a promising alternative to traditional blood-based methods for non-intrusive anemia diagnosis. Full article

20 pages, 5589 KiB  
Review
Radiological Diagnosis and Advances in Imaging of Vertebral Compression Fractures
by Kathleen H. Miao, Julia H. Miao, Puneet Belani, Etan Dayan, Timothy A. Carlon, Turgut Bora Cengiz and Mark Finkelstein
J. Imaging 2024, 10(10), 244; https://doi.org/10.3390/jimaging10100244 - 28 Sep 2024
Viewed by 1228
Abstract
Vertebral compression fractures (VCFs) affect 1.4 million patients every year, especially among the globally aging population, leading to increased morbidity and mortality. Often characterized by symptoms of sudden-onset back pain, decreased vertebral height, progressive kyphosis, and limited mobility, VCFs can significantly impact a patient’s quality of life and are a significant public health concern. Imaging modalities in radiology, including radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) studies and bone scans, play crucial and evolving roles in the diagnosis, assessment, and management of VCFs. An understanding of anatomy, and of the extent to which each imaging modality serves to elucidate that anatomy, is crucial for providing guidance on fracture severity, classification, associated soft tissue injuries, underlying pathologies, and bone mineral density, ultimately guiding treatment decisions, monitoring treatment response, and predicting prognosis and long-term outcomes. This article thus explores the important role of radiology in illuminating the underlying anatomy and pathophysiology, classification, diagnosis, treatment, and management of patients with VCFs. Continued research and advancements in imaging technologies will further enhance our understanding of VCFs and pave the way for personalized and effective management strategies. Full article
(This article belongs to the Special Issue New Perspectives in Medical Image Analysis)

16 pages, 970 KiB  
Review
Overview of High-Dynamic-Range Image Quality Assessment
by Yue Liu, Yu Tian, Shiqi Wang, Xinfeng Zhang and Sam Kwong
J. Imaging 2024, 10(10), 243; https://doi.org/10.3390/jimaging10100243 - 27 Sep 2024
Viewed by 744
Abstract
In recent years, the High-Dynamic-Range (HDR) image has gained widespread popularity across various domains, such as the security, multimedia, and biomedical fields, owing to its ability to deliver an authentic visual experience. However, the extensive dynamic range and rich detail in HDR images present challenges in assessing their quality. Therefore, current efforts involve constructing subjective databases and proposing objective quality assessment metrics to achieve an efficient HDR Image Quality Assessment (IQA). Recognizing the absence of a systematic overview of these approaches, this paper provides a comprehensive survey of both subjective and objective HDR IQA methods. Specifically, we review 7 subjective HDR IQA databases and 12 objective HDR IQA metrics. In addition, we conduct a statistical analysis of 9 IQA algorithms, incorporating 3 perceptual mapping functions. Our findings highlight two main areas for improvement. Firstly, the size and diversity of HDR IQA subjective databases should be significantly increased, encompassing a broader range of distortion types. Secondly, objective quality assessment algorithms need to identify more generalizable perceptual mapping approaches and feature extraction methods to enhance their robustness and applicability. Furthermore, this paper aims to serve as a valuable resource for researchers by discussing the limitations of current methodologies and potential research directions in the future. Full article
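The statistical analysis of IQA algorithms against subjective scores is conventionally reported with rank correlations such as the Spearman rank-order correlation coefficient (SROCC) between metric outputs and mean opinion scores (MOS). A minimal sketch, assuming no tied scores (real evaluations should use a tie-aware implementation such as `scipy.stats.spearmanr`):

```python
import numpy as np

def srocc(metric_scores, mos):
    """Spearman rank-order correlation between objective scores and MOS.
    Assumes no ties: double-argsort yields each value's rank."""
    rx = np.argsort(np.argsort(metric_scores)).astype(float)
    ry = np.argsort(np.argsort(mos)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the centered ranks
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# perfectly monotone agreement between metric and MOS gives 1.0
rho = srocc([1.0, 2.0, 3.0, 4.0], [0.5, 1.5, 2.5, 4.0])
print(rho)  # → 1.0
```

Because SROCC operates on ranks, it is insensitive to the choice of perceptual mapping function, which is why such mappings matter mainly for linearity-based criteria like PLCC.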
(This article belongs to the Special Issue Novel Approaches to Image Quality Assessment)

15 pages, 1317 KiB  
Article
The Role of Plain Radiography in Assessing Aborted Foetal Musculoskeletal Anomalies in Everyday Practice
by Benedetta Rossini, Aldo Carnevale, Gian Carlo Parenti, Silvia Zago, Guendalina Sigolo and Francesco Feletti
J. Imaging 2024, 10(10), 242; https://doi.org/10.3390/jimaging10100242 - 27 Sep 2024
Viewed by 493
Abstract
Conventional radiography is widely used for postmortem foetal imaging, but its role in diagnosing congenital anomalies is debated. This study aimed to assess the effectiveness of X-rays in detecting skeletal abnormalities and guiding genetic analysis and counselling. This is a retrospective analysis of all post-abortion diagnostic imaging studies conducted at a centre serving a population of over 300,000 inhabitants from 2008 to 2023. The data were analysed using descriptive statistics. X-rays of 81 aborted foetuses (total of 308 projections; mean: 3.8 projections/examination; SD: 1.79) were included. We detected 137 skeletal anomalies. In seven cases (12.7%), skeletal anomalies identified through radiology were missed by prenatal sonography. The autopsy confirmed radiological data in all cases except for two radiological false positives. Additionally, radiology failed to identify a case of syndactyly, which was revealed by anatomopathology. X-ray is crucial for accurately classifying skeletal abnormalities, determining the causes of spontaneous abortion, and guiding the request for genetic counselling. Formal training for both technicians and radiologists, as well as multidisciplinary teamwork, is necessary to perform X-ray examinations on aborted foetuses and interpret the results effectively. Full article

10 pages, 748 KiB  
Article
Examination of Joint Effusion Magnetic Resonance Imaging of Patients with Temporomandibular Disorders with Disc Displacement
by Fumi Mizuhashi, Ichiro Ogura, Ryo Mizuhashi, Yuko Watarai, Makoto Oohashi, Tatsuhiro Suzuki, Momoka Kawana and Kotono Nagata
J. Imaging 2024, 10(10), 241; https://doi.org/10.3390/jimaging10100241 - 27 Sep 2024
Viewed by 467
Abstract
In this study, we investigated joint effusion in patients with temporomandibular disorders (TMDs) with disc displacement. The magnetic resonance (MR) images of 97 temporomandibular joints (TMJs) were evaluated, and the appearance of joint effusion was investigated. Myofascial pain and TMJ pain were considered in addition to the duration from manifestation. Disc displacement with and without reduction, as well as the region and the area of joint effusion, were investigated using the MR images. Fisher’s test was used for the analyses. Joint effusion was recognized in 70 TMJs, including 55 in the superior articular cavity, 1 in the inferior articular cavity, and 14 in both the superior and inferior articular cavities. The appearance of joint effusion did not differ with the existence of myofascial pain or TMJ pain. The region of joint effusion did not differ between disc displacement with and without reduction. A larger area of joint effusion was recognized in disc displacement without reduction (p < 0.05). The results showed that the amount of synovial fluid in the joint effusion did not change with the existence of myofascial pain or TMJ pain. Joint effusion commonly appeared in disc displacement without reduction. Full article
(This article belongs to the Section Medical Imaging)

22 pages, 5482 KiB  
Article
Evaluation of Focus Measures for Hyperspectral Imaging Microscopy Using Principal Component Analysis
by Humbat Nasibov
J. Imaging 2024, 10(10), 240; https://doi.org/10.3390/jimaging10100240 - 26 Sep 2024
Viewed by 663
Abstract
An automatic focusing system is a crucial component of automated microscopes, adjusting the lens-to-object distance to find the optimal focus by maximizing the focus measure (FM) value. This study develops reliable autofocus methods for hyperspectral imaging microscope systems, essential for extracting accurate chemical and spatial information from hyperspectral datacubes. Since FMs are domain- and application-specific, their performance is commonly evaluated against verified focus positions. In optical microscopy, for example, the sharpness/contrast of visual features of the sample under test typically serves as an anchor for determining the best focus position, but this approach is challenging in hyperspectral imaging systems (HSISs), where instant two-dimensional hyperspectral images do not always possess human-comprehensible visual information. To address this, a principal component analysis (PCA) was used to define the optimal (“ideal”) optical focus position in HSIS, providing a benchmark for assessing 22 FMs commonly used in other imaging fields. Evaluations utilized hyperspectral images from visible (400–1100 nm) and near-infrared (900–1700 nm) bands across four different HSIS setups with varying magnifications. Results indicate that gradient-based FMs are the fastest and most reliable operators in this context. Full article
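A gradient-based focus measure of the kind the study finds most reliable can be sketched with a Tenengrad-style operator: the mean squared Sobel gradient magnitude, which peaks when the image is sharpest. The example below uses a synthetic image and a box blur as a crude defocus proxy; it is an illustrative sketch, not the paper's exact evaluation or any of its 22 FMs in particular:

```python
import numpy as np

def tenengrad(img):
    """Gradient-based focus measure: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T

    def conv2(im, k):
        # 3x3 correlation with edge padding (sign flip is irrelevant here)
        out = np.zeros_like(im)
        p = np.pad(im, 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + im.shape[0], j:j + im.shape[1]]
        return out

    gx, gy = conv2(img, kx), conv2(img, ky)
    return float(np.mean(gx**2 + gy**2))

def box_blur(img, r=2):
    """Crude defocus proxy: box filter of radius r."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    n = (2 * r + 1) ** 2
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out / n

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))          # stand-in for one in-focus band image
fm_sharp = tenengrad(sharp)
fm_blur = tenengrad(box_blur(sharp))  # defocus lowers the focus measure
```

In an autofocus loop, this measure would be evaluated per axial position (and, in an HSIS, per spectral band or on a PCA-projected image) and the position maximizing it selected.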
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

26 pages, 17483 KiB  
Article
A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging
by Deepshikha Bhati, Fnu Neha and Md Amiruzzaman
J. Imaging 2024, 10(10), 239; https://doi.org/10.3390/jimaging10100239 - 25 Sep 2024
Viewed by 2183
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis. Full article
(This article belongs to the Section Medical Imaging)
