Recent Advances in and Applications of Medical Image Processing and Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Biomedical Engineering".

Deadline for manuscript submissions: closed (20 April 2025) | Viewed by 11539

Special Issue Editors


Guest Editor
Department of Electrical Engineering, Universidade Federal do Piauí, Picos, Brazil
Interests: digital image processing; computer vision; artificial intelligence; bioinformatics

Guest Editor
Department of Electrical Engineering, Universidade Federal do Piauí, Picos, Brazil
Interests: computer intelligence; computer vision; medical image processing; data analysis; computer graphics

Guest Editor
Department of Electrical Engineering, Universidade Federal do Piauí, Picos, Brazil
Interests: computer vision; deep learning; machine learning

Special Issue Information

Dear Colleagues,

In the ever-evolving field of healthcare, medical image processing and analysis have emerged as crucial pillars, revolutionizing diagnostics, treatment planning, and research. This Special Issue, titled "Recent Advances in and Applications of Medical Image Processing and Analysis," showcases the cutting-edge developments and practical applications in this dynamic domain. This collection of articles brings together experts, researchers, and innovators to present a comprehensive overview of the latest breakthroughs and their transformative impact on healthcare.

Prof. Dr. Romuere Silva
Dr. Antonio Oseas de Carvalho Filho
Prof. Dr. Flávio Henrique Duarte de Araújo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • image processing
  • machine learning
  • clinical applications
  • innovations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (9 papers)


Research

24 pages, 5039 KiB  
Article
EPIIC: Edge-Preserving Method Increasing Nuclei Clarity for Compression Artifacts Removal in Whole-Slide Histopathological Images
by Julia Merta and Michal Marczyk
Appl. Sci. 2025, 15(8), 4450; https://doi.org/10.3390/app15084450 - 17 Apr 2025
Viewed by 251
Abstract
Hematoxylin and eosin (HE) staining is widely used in medical diagnosis. Stained slides provide crucial information to diagnose or monitor the progress of many diseases. Due to the large size of scanned images of whole tissues, a JPEG algorithm is commonly used for compression. This lossy compression method introduces artifacts visible as 8 × 8 pixel blocks and reduces overall quality, which may negatively impact further analysis. We propose a fully unsupervised Edge-Preserving method Increasing nucleI Clarity (EPIIC) for removing compression artifacts from whole-slide HE-stained images. The method is introduced in two versions, EPIIC and EPIIC Sobel, composed of stain deconvolution, gradient-based edge map estimation, and weighted smoothing. The performance of the method was evaluated using two image quality measures, PSNR and SSIM, and various datasets, including BreCaHAD with HE-stained histopathological images and five other natural image datasets, and compared with other edge-preserving filtering methods and a deep learning-based solution. The impact of compression artifacts removal on the nuclei segmentation task was tested using Hover-Net and STARDIST models. The proposed methods led to improved image quality in histopathological and natural images and better segmentation of cell nuclei compared to other edge-preserving filtering methods. The biggest improvement was observed for images compressed with a low compression quality factor. Compared to the method using neural networks, the developed algorithms have slightly worse performance in image enhancement, but they are superior in nuclei segmentation. EPIIC and EPIIC Sobel can efficiently remove compression artifacts, positively impacting the segmentation results of cell nuclei and overall image quality.
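The pipeline above pairs a gradient-based edge map with weighted smoothing. The following sketch illustrates that general idea in pure Python; it is not the authors' EPIIC implementation, and the 3×3 window and weighting function are assumptions. Pixels near strong edges are smoothed less, preserving nuclei boundaries while flattening flat-region artifacts:

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude; border pixels get zero gradient."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def edge_weighted_smooth(img, strength=1.0):
    """3x3 mean filter whose influence decays with local edge strength."""
    h, w = len(img), len(img[0])
    edges = gradient_magnitude(img)
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            mean = sum(neigh) / 9.0
            alpha = 1.0 / (1.0 + strength * edges[y][x])  # 1 = full smoothing
            out[y][x] = alpha * mean + (1.0 - alpha) * img[y][x]
    return out

# A flat region with a step edge: flat interior pixels are averaged,
# while pixels sitting on the edge are largely kept.
img = [[10.0] * 4 + [200.0] * 4 for _ in range(8)]
smoothed = edge_weighted_smooth(img, strength=0.5)
```

The same mechanism, applied per stain channel after deconvolution, is what lets an edge-preserving filter suppress 8 × 8 blocking without blurring nuclei contours.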

14 pages, 1518 KiB  
Article
Decoding Lung Cancer Radiogenomics: A Custom Clustering/Classification Methodology to Simultaneously Identify Important Imaging Features and Relevant Genes
by Destie Provenzano, John P. Lichtenberger, Sharad Goyal and Yuan James Rao
Appl. Sci. 2025, 15(7), 4053; https://doi.org/10.3390/app15074053 - 7 Apr 2025
Viewed by 366
Abstract
Background: This study evaluated a custom algorithm that sought to perform a radiogenomic analysis on lung cancer genetic and imaging data, specifically by using machine learning to see whether a custom clustering/classification method could simultaneously identify features from imaging data that correspond to genetic markers. Methods: CT imaging data and genetic mutation data for 281 subjects with NSCLC were collected from the CPTAC-LUAD and TCGA-LUSC databases on TCIA. The algorithm was run as follows: (1) genetic clusters were initialized using random clusters, binary matrix factorization, or k-means; (2) image classification was run on CT data for these genetic clusters; (3) misclassified subjects were re-classified based on the image classification algorithm; and (4) steps (2) and (3) were repeated until the accuracy reached 90% or showed no improvement after 10 runs. Input genetic mutations were evaluated for potential medical treatments and severity to provide clinical relevance. Results: The image classification algorithm achieved >90% accuracy after nine runs and grouped subjects from a starting five clusters into four final clusters, where the final image classification accuracy was better than every initial clustered accuracy. These clusters were stable across all three test runs. A total of thirty-eight genes from the top hundred across each subject were identified with specific severity or treatment data; twelve of these genes are listed. Conclusion: This small pilot study presented a potential way to identify genetic patterns from image data and a methodology that could group images with no labels or only partial labels for future problems.
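The alternating loop in steps (1)–(4) can be sketched as follows. This is illustrative only: a nearest-centroid rule on 1-D synthetic features stands in for the paper's CT image classifier, and the stopping thresholds mirror those described above:

```python
import random

def centroids(features, labels, k):
    """Per-cluster mean of a 1-D feature."""
    sums, counts = [0.0] * k, [0] * k
    for f, c in zip(features, labels):
        sums[c] += f
        counts[c] += 1
    return [s / n if n else 0.0 for s, n in zip(sums, counts)]

def nearest(cents, f):
    return min(range(len(cents)), key=lambda i: abs(f - cents[i]))

def cluster_classify(init_labels, image_features, k, max_rounds=10):
    """Alternate: fit classifier to current labels, re-label misclassified
    subjects, repeat until 90% accuracy or no improvement."""
    labels = init_labels[:]
    best_acc = 0.0
    for _ in range(max_rounds):
        cents = centroids(image_features, labels, k)       # "train" classifier
        preds = [nearest(cents, f) for f in image_features]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc >= 0.90 or acc <= best_acc:
            return preds, acc
        best_acc = acc
        labels = preds                                     # re-label subjects
    return labels, best_acc

random.seed(0)
# Two well-separated synthetic "image feature" groups of 20 subjects each.
feats = [random.gauss(0.0, 0.3) for _ in range(20)] + \
        [random.gauss(5.0, 0.3) for _ in range(20)]
genetic = [0] * 20 + [1] * 20
genetic[0], genetic[39] = 1, 0          # two noisy initial cluster labels
final, acc = cluster_classify(genetic, feats, k=2)
```

With separable image features, the loop converges to labels consistent with the imaging data even when the initial genetic clusters are partly noisy, which is the core intuition behind the paper's approach.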

21 pages, 5231 KiB  
Article
Stacked Ensembles Powering Smart Farming for Imbalanced Sugarcane Disease Detection
by Sahar Qaadan, Aiman Alshare, Abdullah Ahmed and Haneen Altartouri
Appl. Sci. 2025, 15(5), 2788; https://doi.org/10.3390/app15052788 - 5 Mar 2025
Viewed by 556
Abstract
Sugarcane is a vital crop, accounting for approximately 75% of the global sugar production. Ensuring its health through the early detection and classification of diseases is essential in maximizing crop yields and productivity. While recent deep learning advancements, such as Vision Transformers, have shown promise in sugarcane disease classification, these methods often rely on resource-intensive models, limiting their practical applicability. This study introduces a novel stacking-based ensemble framework that combines embeddings from multiple state-of-the-art deep learning methods. It offers a lightweight and accurate approach for sugarcane disease classification. Leveraging the publicly available sugarcane leaf dataset, which includes 7134 high-resolution images across 11 classes (nine diseases, healthy leaves, and dried leaves), the proposed framework integrates embeddings from InceptionV3, SqueezeNet, and DeepLoc models with stacked ensemble classifiers. This approach addresses the challenges posed by imbalanced datasets and significantly enhances the classification performance. In binary classification, the model accuracy is 98.89% and the weighted F1-score is 98.92%, while the multi-classification approach attains an accuracy of 95.64% and a weighted F1-score of 95.62%. The stacking-based framework is superior to Transformer models, reducing the training time by 75% and demonstrating stronger generalization across diverse and imbalanced classes. These findings directly contribute to the sustainability goals of zero hunger and responsible consumption and production by improving agricultural productivity and promoting resource-efficient farming practices.
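Stacking, the core technique above, feeds the outputs of several base models into a meta-classifier. The sketch below is a heavily simplified illustration: two threshold rules on a 1-D input stand in for the paper's deep embedding models, and a perceptron stands in for its stacked ensemble classifiers (all names and data here are invented):

```python
def perceptron_train(X, y, epochs=50, lr=0.1):
    """Train a simple perceptron meta-learner on base-model outputs."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def perceptron_predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Two weak base scorers of a 1-D sample (stand-ins for deep embeddings):
base_models = [lambda x: 1.0 if x > 0.4 else 0.0,
               lambda x: 1.0 if x > 0.6 else 0.0]

samples = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
labels  = [0,   0,   0,   1,   1,   1]

# Stacking: base-model outputs become the meta-learner's feature vector.
meta_X = [[m(x) for m in base_models] for x in samples]
w, b = perceptron_train(meta_X, labels)
preds = [perceptron_predict(w, b, xi) for xi in meta_X]
```

The appeal of stacking here is that the meta-learner is tiny compared to the base models, which is why the framework can undercut Transformer training time while keeping accuracy.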

21 pages, 1399 KiB  
Article
Optimizing Cervical Cancer Diagnosis with Feature Selection and Deep Learning
by Łukasz Jeleń, Izabela Stankiewicz-Antosz, Maria Chosia and Michał Jeleń
Appl. Sci. 2025, 15(3), 1458; https://doi.org/10.3390/app15031458 - 31 Jan 2025
Viewed by 816
Abstract
The main purpose of cervical cancer diagnosis is the correct and rapid detection of the disease and the determination of its histological type. This study investigates the effectiveness of combining handcrafted feature-based methods with convolutional neural networks for the determination of cancer histological type, emphasizing the role of feature selection in enhancing classification accuracy. Here, a data set of liquid-based cytology images was analyzed and a set of handcrafted morphological features was introduced. Furthermore, features were optimized through advanced selection techniques, including stepwise and significant feature selection, to reduce feature dimensionality while retaining critical diagnostic information. These reduced feature sets were evaluated using several classifiers, including support vector machines, and compared with a CNN-based approach, highlighting differences in accuracy and precision. The results demonstrate that optimized feature sets, paired with SVM classifiers, achieve classification performance comparable to that of CNNs while significantly reducing computational complexity. This finding underscores the potential of feature reduction techniques in creating efficient diagnostic frameworks. The study concludes that while convolutional neural networks offer robust classification capabilities, optimized handcrafted features remain a viable and cost-effective alternative, particularly when data are limited. This work contributes to advancing automated diagnostic systems by balancing accuracy, efficiency, and interpretability.
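Stepwise (forward) selection, one of the techniques named above, greedily adds the feature whose inclusion most improves a subset score. A generic sketch follows; the toy scoring function is an assumption standing in for cross-validated classifier accuracy:

```python
def forward_select(n_features, score, min_gain=1e-9):
    """Greedy forward selection: repeatedly add the single feature that
    most improves the subset score; stop when no candidate improves it."""
    selected, best = [], float("-inf")
    while len(selected) < n_features:
        candidates = [(score(selected + [f]), f)
                      for f in range(n_features) if f not in selected]
        s, f = max(candidates)
        if s - best <= min_gain:
            break
        selected.append(f)
        best = s
    return selected, best

# Toy subset scorer standing in for classifier accuracy: two informative
# features (0 and 2) plus a per-feature complexity penalty.
weights = [3.0, 0.1, 2.0, 0.05]
def toy_score(subset):
    return sum(weights[f] for f in subset) - 0.5 * len(subset)

selected, best = forward_select(4, toy_score)
```

The complexity penalty is what makes the procedure stop early, mirroring how stepwise selection trades a little accuracy for a much smaller, cheaper feature set.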

17 pages, 13882 KiB  
Article
Accurate Needle Localization in the Image Frames of Ultrasound Videos
by Mohammad I. Daoud, Samira Khraiwesh, Rami Alazrai, Mostafa Z. Ali, Adnan Zayadeen, Sahar Qaadan and Rafiq Ibrahim Alhaddad
Appl. Sci. 2025, 15(1), 207; https://doi.org/10.3390/app15010207 - 29 Dec 2024
Viewed by 1022
Abstract
Ultrasound imaging provides real-time guidance during needle interventions, but localizing the needle in ultrasound videos remains a challenging task. This paper introduces a novel machine learning-based method to localize the needle in ultrasound videos. The method comprises three phases for analyzing the image frames of the ultrasound video and localizing the needle in each image frame. The first phase aims to extract features that quantify the speckle variations associated with needle insertion, the edges that match the needle orientation, and the pixel intensity statistics of the ultrasound image. The features are analyzed using a machine learning classifier to generate a quantitative image that characterizes the pixels associated with the needle. In the second phase, the quantitative image is processed to identify the region of interest (ROI) that contains the needle. In the third phase, the ROI is processed using a custom-made Ranklet transform to accurately estimate the needle trajectory. Moreover, the needle tip is identified using a sliding window approach that analyzes the speckle variations along the needle trajectory. The performance of the proposed method was evaluated by localizing the needle in ex vivo and in vivo ultrasound videos. The results show that the proposed method was able to localize the needle with a failure rate of 0%. The angular, axis, and tip errors computed for the ex vivo ultrasound videos are within the ranges of 0.3–0.7°, 0.2–0.7 mm, and 0.4–0.8 mm, respectively. Additionally, the angular, axis, and tip errors computed for the in vivo ultrasound videos are within the ranges of 0.2–1.0°, 0.3–1.0 mm, and 0.3–1.1 mm, respectively. A key advantage of the proposed method is the ability to achieve accurate localization of the needle without altering the clinical workflow of the intervention.

15 pages, 5358 KiB  
Article
Volumetric Analysis of Aortic Changes after TEVAR Using Three-Dimensional Virtual Modeling
by Edoardo Rasciti, Laura Cercenelli, Barbara Bortolani, Paolo Luzi, Maria Dea Ippoliti, Luigi Lovato and Emanuela Marcelli
Appl. Sci. 2024, 14(16), 6948; https://doi.org/10.3390/app14166948 - 8 Aug 2024
Viewed by 1031
Abstract
TEVAR (thoracic endovascular aortic repair) is the preferred approach for treating descending thoracic aortic aneurysm (DTAA). After the procedure, patients require lifelong CTA (computed tomography angiography) follow-up to monitor the aorta’s remodeling process and the possible development of associated complications. With CTA, the aorta is usually measured with maximum diameters taken at specific locations, and even in experienced centers, this type of evaluation is prone to inter-observer variability. We introduce a new volumetric analysis of aortic changes after TEVAR using three-dimensional (3D) anatomical models. We applied the volumetric analysis to 24 patients who underwent TEVAR for DTAA. For each patient, the descending thoracic aorta was evaluated using both the maximum diameter from CTA and the volume from 3D reconstructions, at discharge and 12 months after TEVAR. Both volume and diameter evaluations were then related to the development of TEVAR complications. The group with TEVAR-related complications showed a 10% volume increase in the descending aorta, while the group with no TEVAR-related complications only had a 1% increase. An increase of 40 mL in the descending aorta volume at 12 months seemed to be predictive of complications, with 94% specificity and 75% sensitivity. Volumetric analysis is a promising method for monitoring DTAA remodeling after TEVAR, and it may help in the early identification of high-risk patients who may benefit from a stricter follow-up, even if further evaluations on a larger sample size are required to confirm these preliminary results.
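Sensitivity and specificity figures like those quoted above follow from a simple threshold rule on the volume change. The sketch below shows the computation on synthetic volume deltas (invented numbers, not the study's 24-patient data):

```python
def sens_spec(volume_deltas, complicated, threshold=40.0):
    """Sensitivity/specificity of flagging patients whose 12-month
    descending-aorta volume increase (mL) meets the threshold."""
    flagged = [d >= threshold for d in volume_deltas]
    tp = sum(f and c for f, c in zip(flagged, complicated))
    fn = sum((not f) and c for f, c in zip(flagged, complicated))
    tn = sum((not f) and (not c) for f, c in zip(flagged, complicated))
    fp = sum(f and (not c) for f, c in zip(flagged, complicated))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic volume changes (mL) for 4 complicated and 4 uncomplicated cases.
deltas      = [55.0, 60.0, 42.0, 10.0, 5.0, 12.0, 8.0, 50.0]
complicated = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec(deltas, complicated)
```

Sensitivity is the fraction of complicated cases caught by the threshold; specificity is the fraction of uncomplicated cases correctly left unflagged.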

26 pages, 17816 KiB  
Article
The Automated Generation of Medical Reports from Polydactyly X-ray Images Using CNNs and Transformers
by Pablo de Abreu Vieira, Mano Joseph Mathew, Pedro de Alcantara dos Santos Neto and Romuere Rodrigues Veloso e Silva
Appl. Sci. 2024, 14(15), 6566; https://doi.org/10.3390/app14156566 - 27 Jul 2024
Viewed by 1545
Abstract
Pododactyl radiography is a non-invasive procedure that enables the detection of foot pathologies, as it provides detailed images of structures such as the metatarsus and phalanges, among others. This examination holds potential for employment in computer-aided diagnosis (CAD) systems. Our proposed methodology employs generative artificial intelligence to analyze pododactyl radiographs and generate automatic medical reports. We used a dataset comprising 16,710 exams, including images and medical reports on pododactyls. We implemented preprocessing of the images and text, as well as data augmentation techniques to improve the representativeness of the dataset. The proposed CAD system integrates pre-trained CNNs for feature extraction from the images and Transformers for report interpretation and generation. Our objective is to provide reports describing pododactyl pathologies, such as plantar fasciitis, bunions, heel spurs, flat feet, and lesions, among others, offering a second opinion to the specialist. The results are promising, with BLEU scores (1 to 4) of 0.612, 0.552, 0.507, and 0.470, respectively, a METEOR score of 0.471, and a ROUGE-L score of 0.633, demonstrating the model’s ability to generate reports with qualities close to those produced by specialists. We demonstrate that generative AI trained with pododactyl radiographs has the potential to assist in diagnoses from these examinations.
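The BLEU-1 to BLEU-4 scores reported above rest on modified (clipped) n-gram precision. A minimal single-reference sketch, without the brevity penalty of full BLEU, follows; the example report phrases are invented:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram is counted at most
    as many times as it appears in the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

# Invented example phrases, not real report text.
cand = "mild hallux valgus deformity of the right foot".split()
ref  = "mild hallux valgus deformity of the first right toe".split()
p1 = modified_precision(cand, ref, 1)   # unigram precision
p2 = modified_precision(cand, ref, 2)   # bigram precision
```

Higher-order precisions drop as n grows, which is why the reported BLEU-1 through BLEU-4 values decrease from 0.612 to 0.470.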

19 pages, 4027 KiB  
Article
A Deep Learning Model for Detecting Diabetic Retinopathy Stages with Discrete Wavelet Transform
by A. M. Mutawa, Khalid Al-Sabti, Seemant Raizada and Sai Sruthi
Appl. Sci. 2024, 14(11), 4428; https://doi.org/10.3390/app14114428 - 23 May 2024
Cited by 9 | Viewed by 3237
Abstract
Diabetic retinopathy (DR) is the primary factor leading to vision impairment and blindness in diabetics. Uncontrolled diabetes can damage the retinal blood vessels. Initial detection and prompt medical intervention are vital in preventing progressive vision impairment. The growing medical field places an ever greater workload and diagnostic demand on medical professionals. In the proposed study, a convolutional neural network (CNN) is employed to detect the stages of DR. This research is crucial for studying DR because of its innovative methodology incorporating two different public datasets. This strategy enhances the model’s capacity to generalize to unseen DR images, as each dataset encompasses unique demographics and clinical circumstances. The network can learn and capture complicated hierarchical image features with asymmetric weights. Each image is preprocessed using contrast-limited adaptive histogram equalization and the discrete wavelet transform. The model is trained and validated using the combined datasets of the Dataset for Diabetic Retinopathy and the Asia-Pacific Tele-Ophthalmology Society. The CNN model is tuned with different learning rates and optimizers. An accuracy of 72% and an area under the curve (AUC) score of 0.90 were achieved by the CNN model with the Adam optimizer. The recommended study results may reduce diabetes-related vision impairment by early identification of DR severity.
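The discrete wavelet transform used in the preprocessing step splits an image into a low-frequency approximation and high-frequency detail bands. A single-level 2-D Haar DWT is its simplest instance; the paper's wavelet family and decomposition level are not stated in the abstract, so Haar here is an assumption:

```python
def haar_1d(v):
    """One Haar level: pairwise averages (approximation) then differences (detail)."""
    avg = [(v[i] + v[i + 1]) / 2.0 for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / 2.0 for i in range(0, len(v), 2)]
    return avg + det

def haar_2d(img):
    """Apply the 1-D transform to every row, then to every column."""
    rows = [haar_1d(r) for r in img]
    cols = list(zip(*rows))
    out_cols = [haar_1d(list(c)) for c in cols]
    return [list(r) for r in zip(*out_cols)]

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
coeffs = haar_2d(img)
# Top-left 2x2 block holds the low-frequency approximation;
# the remaining blocks hold horizontal/vertical/diagonal detail.
```

Feeding wavelet coefficients instead of raw pixels gives the CNN a multi-scale view of vessel structure, which is the motivation for this preprocessing choice.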

13 pages, 2233 KiB  
Article
Aspects of Lighting and Color in Classifying Malignant Skin Cancer with Deep Learning
by Alan R. F. Santos, Kelson R. T. Aires and Rodrigo M. S. Veras
Appl. Sci. 2024, 14(8), 3297; https://doi.org/10.3390/app14083297 - 14 Apr 2024
Cited by 2 | Viewed by 1609
Abstract
Malignant skin cancers are common in emerging countries, with excessive sun exposure and genetic predispositions being the main causes. Variations in lighting and color, resulting from the diversity of devices and lighting conditions during image capture, pose a challenge for automated diagnosis through digital images. Deep learning techniques emerge as promising solutions to improve the accuracy of identifying malignant skin lesions. This work aims to investigate the impact of lighting and color correction methods on automated skin cancer diagnosis using deep learning architectures, focusing on the relevance of these characteristics for accuracy in identifying malignant skin cancer. The developed methodology includes steps for hair removal, lighting and color correction, defining the region of interest, and classification using deep neural network architectures. We employed deep learning techniques such as LCDPNet, LLNeRF, and DSN for lighting and color correction, methods that had not previously been tested in this context. The results emphasize the importance of image preprocessing, especially in lighting and color adjustments, where the best results show an accuracy increase of between 3% and 4%. We observed that different deep neural network architectures react variably to lighting and color corrections. Some architectures are more sensitive to variations in these characteristics, while others are more robust. Advanced lighting and color correction can thus significantly improve the accuracy of malignant skin cancer diagnosis.
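As a classical point of comparison for the learned corrections discussed above, gray-world white balance neutralizes a global color cast by equalizing per-channel means. This is only an illustrative baseline, not one of the paper's methods (which are learned models such as LCDPNet):

```python
def gray_world(pixels):
    """Scale each RGB channel so its mean equals the overall mean,
    removing a global color cast (the gray-world assumption)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# Two gray pixels under a reddish cast (red channel doubled by the "illuminant").
scene = [(50.0, 50.0, 50.0), (100.0, 100.0, 100.0)]
cast = [(2 * r, g, b) for r, g, b in scene]
balanced = gray_world(cast)
```

After correction, the channel means coincide, so a lesion's apparent color no longer depends on a device- or lighting-induced global tint; learned methods aim for the same invariance under far less restrictive assumptions.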
