Search Results (15,875)

Search Parameters:
Keywords = images classification

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions.
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
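The spectrogram inputs described in this entry can be sketched with a minimal short-time Fourier transform. The window length, hop size, and synthetic alpha-band test signal below are illustrative assumptions, not the paper's actual preprocessing parameters:

```python
import numpy as np

def stft_spectrogram(signal, fs, win_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.

    Returns (freqs, times, S) where S has shape (win_len//2 + 1, n_frames).
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    S = np.abs(np.fft.rfft(frames, axis=1)).T           # (freq, time)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return freqs, times, S

# Synthetic 10 s "EEG" trace at 250 Hz: a 10 Hz alpha-band tone plus noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, times, S = stft_spectrogram(eeg, fs)
peak_freq = freqs[S.mean(axis=1).argmax()]              # sits near the 10 Hz tone
```

An image of `S` (log-scaled, per-channel) is the kind of time-frequency representation a DenseNet-style classifier would consume.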

27 pages, 37457 KiB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods are among the most harmful natural disasters, and climate change has made them more dangerous for urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. The research developed 25 different feature scenarios through the combination of Sentinel-1, Landsat-8, and Radarsat-2 imagery data by using backscattering coefficients together with optical Normalized Difference Water Index (NDWI) and Hue, Saturation, and Value (HSV) images and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before its application based on two different flood-prone regions, which included Zutphen’s urban area and Heijen’s agricultural land. Results demonstrated that the multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. For the flood class, F1 scores across all scenarios varied from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen. Eventually, the addition of SAR texture metrics enhanced flood boundary identification throughout both urban and agricultural settings. Radarsat-2 provided limited benefits to the overall results, since the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that using SAR and optical features together with texture information creates a powerful and expandable flood mapping system, and RF classification performs well in diverse landscape settings.
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
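As a toy illustration of the optical water index used in this entry, NDWI is computed from green and near-infrared reflectance as (Green − NIR) / (Green + NIR). The reflectance values and the zero threshold below are hypothetical; the paper's actual pipeline fuses many more features into a Random Forest:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + eps)

# Toy 2x2 scene: left column water-like (high green, low NIR),
# right column vegetation-like (low green, high NIR).
green = np.array([[0.30, 0.10],
                  [0.28, 0.12]])
nir   = np.array([[0.05, 0.40],
                  [0.06, 0.45]])

index = ndwi(green, nir)
water_mask = index > 0.0     # a common, simple NDWI water threshold
```

In the full approach, `index` would be stacked with SAR backscatter and GLCM texture bands as one of the 25 feature scenarios rather than thresholded directly.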
27 pages, 11710 KiB  
Article
Assessing ResNeXt and RegNet Models for Diabetic Retinopathy Classification: A Comprehensive Comparative Study
by Samara Acosta-Jiménez, Valeria Maeda-Gutiérrez, Carlos E. Galván-Tejada, Miguel M. Mendoza-Mendoza, Luis C. Reveles-Gómez, José M. Celaya-Padilla, Jorge I. Galván-Tejada and Antonio García-Domínguez
Diagnostics 2025, 15(15), 1966; https://doi.org/10.3390/diagnostics15151966 - 5 Aug 2025
Abstract
Background/Objectives: Diabetic retinopathy is a leading cause of vision impairment worldwide, and the development of reliable automated classification systems is crucial for early diagnosis and clinical decision-making. This study presents a comprehensive comparative evaluation of two state-of-the-art deep learning families for the task of classifying diabetic retinopathy using retinal fundus images. Methods: The models were trained and tested in both binary and multi-class settings. The experimental design involved partitioning the data into training (70%), validation (20%), and testing (10%) sets. Model performance was assessed using standard metrics, including precision, sensitivity, specificity, F1-score, and the area under the receiver operating characteristic curve. Results: In binary classification, the ResNeXt101-64x4d model and RegNetY32GT model demonstrated outstanding performance, each achieving high sensitivity and precision. For multi-class classification, ResNeXt101-32x8d exhibited strong performance in early stages, while RegNetY16GT showed better balance across all stages, particularly in advanced diabetic retinopathy cases. To enhance transparency, SHapley Additive exPlanations were employed to visualize the pixel-level contributions for each model’s predictions. Conclusions: The findings suggest that while ResNeXt models are effective in detecting early signs, RegNet models offer more consistent performance in distinguishing between multiple stages of diabetic retinopathy severity. This dual approach combining quantitative evaluation and model interpretability supports the development of more robust and clinically trustworthy decision support systems for diabetic retinopathy screening.
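The 70/20/10 partitioning described in the Methods can be sketched as a seeded shuffle-and-slice. This is a generic illustration, not the authors' exact splitting code:

```python
import numpy as np

def split_indices(n, train=0.7, val=0.2, seed=0):
    """Shuffle n sample indices and split them train/val/test (e.g., 70/20/10)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 1000 hypothetical fundus images -> 700 train, 200 validation, 100 test.
train_idx, val_idx, test_idx = split_indices(1000)
```

Seeding the generator keeps the split reproducible across the compared model families, which matters for a fair head-to-head evaluation.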

31 pages, 1811 KiB  
Article
Fractal-Inspired Region-Weighted Optimization and Enhanced MobileNet for Medical Image Classification
by Yichuan Shao, Jiapeng Yang, Wen Zhou, Haijing Sun and Qian Gao
Fractal Fract. 2025, 9(8), 511; https://doi.org/10.3390/fractalfract9080511 - 5 Aug 2025
Abstract
In the field of deep learning, the design of optimization algorithms and neural network structures is crucial for improving model performance. Recent advances in medical image analysis have revealed that many pathological features exhibit fractal-like characteristics in their spatial distribution and morphological patterns. This observation has opened new possibilities for developing fractal-inspired deep learning approaches. In this study, we propose the following: (1) a novel Region-Module Adam (RMA) optimizer that incorporates fractal-inspired region-weighting to prioritize areas with higher fractal dimensionality, and (2) an ECA-Enhanced Shuffle MobileNet (ESM) architecture designed to capture multi-scale fractal patterns through its enhanced feature extraction modules. Our experiments demonstrate that this fractal-informed approach significantly improves classification accuracy compared to conventional methods. On gastrointestinal image datasets, the RMA algorithm achieved accuracies of 83.60%, 81.60%, and 87.30% with MobileNetV2, ShuffleNetV2, and ESM networks, respectively. For glaucoma fundus images, the corresponding accuracies reached 84.90%, 83.60%, and 92.73%. These results suggest that explicitly considering fractal properties in medical image analysis can lead to more effective diagnostic tools.
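A box-counting estimate is one common way to quantify the "fractal dimensionality" that the RMA optimizer is said to weight regions by. The sketch below is a generic estimator on a binary mask, under that assumption, and is not the paper's actual region-weighting scheme:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary 2D mask.

    Counts occupied boxes N(s) at each box size s and fits
    log N(s) ~ -D * log s by least squares; returns D.
    """
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # Tile the mask into (n//s, n//s) blocks; a block counts if any pixel is set.
        reduced = mask[: n - n % s, : n - n % s].reshape(n // s, s, n // s, s)
        counts.append(reduced.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled 64x64 square has dimension 2; a single pixel has 0.
filled = np.ones((64, 64), dtype=bool)
d_filled = box_counting_dimension(filled)

point = np.zeros((64, 64), dtype=bool)
point[3, 3] = True
d_point = box_counting_dimension(point)
```

A region-weighted optimizer could then scale per-region gradient contributions by such a score, prioritizing texture-rich pathological areas.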
15 pages, 6411 KiB  
Article
SCCM: An Interpretable Enhanced Transfer Learning Model for Improved Skin Cancer Classification
by Md. Rifat Aknda, Fahmid Al Farid, Jia Uddin, Sarina Mansor and Muhammad Golam Kibria
BioMedInformatics 2025, 5(3), 43; https://doi.org/10.3390/biomedinformatics5030043 - 5 Aug 2025
Abstract
Skin cancer is the most common cancer worldwide, for which early detection is crucial to improve survival rates. Visual inspection and biopsies have limitations, including being error-prone, costly, and time-consuming. Although several deep learning models have been developed, they demonstrate significant limitations. An interpretable and improved transfer learning model for binary skin cancer classification is proposed in this research, which uses the last convolutional block of VGG16 as the feature extractor. The methodology addresses existing limitations in skin cancer classification, supporting dermatologists and potentially saving lives through advanced, reliable, and accessible AI-driven diagnostic tools. Explainable AI is incorporated for the visualization and explanation of classifications. Multiple optimization techniques are applied to avoid overfitting, ensure stable training, and enhance the classification accuracy of dermoscopic images into benign and malignant classes. The proposed model shows 90.91% classification accuracy, which is better than state-of-the-art models and established approaches in skin cancer classification. An interactive desktop application integrating the model is developed, enabling real-time preliminary screening with offline access.
(This article belongs to the Section Imaging Informatics)

17 pages, 1306 KiB  
Article
Rapid Salmonella Serovar Classification Using AI-Enabled Hyperspectral Microscopy with Enhanced Data Preprocessing and Multimodal Fusion
by MeiLi Papa, Siddhartha Bhattacharya, Bosoon Park and Jiyoon Yi
Foods 2025, 14(15), 2737; https://doi.org/10.3390/foods14152737 - 5 Aug 2025
Abstract
Salmonella serovar identification typically requires multiple enrichment steps using selective media, consuming considerable time and resources. This study presents a rapid, culture-independent method leveraging artificial intelligence (AI) to classify Salmonella serovars from rich hyperspectral microscopy data. Five serovars (Enteritidis, Infantis, Kentucky, Johannesburg, 4,[5],12:i:-) were analyzed from samples prepared using only sterilized de-ionized water. Hyperspectral data cubes were collected to generate single-cell spectra and RGB composite images representing the full microscopy field. Data analysis involved two parallel branches followed by multimodal fusion. The spectral branch compared manual feature selection with data-driven feature extraction via principal component analysis (PCA), followed by classification using conventional machine learning models (i.e., k-nearest neighbors, support vector machine, random forest, and multilayer perceptron). The image branch employed a convolutional neural network (CNN) to extract spatial features directly from images without predefined morphological descriptors. Using PCA-derived spectral features, the highest-performing machine learning model achieved 81.1% accuracy, outperforming manual feature selection. CNN-based classification using image features alone yielded lower accuracy (57.3%) in this serovar-level discrimination. In contrast, a multimodal fusion model combining spectral and image features improved accuracy to 82.4% on the unseen test set while reducing overfitting on the training set. This study demonstrates that AI-enabled hyperspectral microscopy with multimodal fusion can streamline Salmonella serovar identification workflows.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)
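The spectral branch described in this entry (PCA-derived features followed by a conventional classifier such as k-nearest neighbors) can be sketched on synthetic spectra. The class means, band count, and k below are illustrative assumptions:

```python
import numpy as np

def pca_fit_transform(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mu, Vt[:k]

def knn_predict(X_train, y_train, X_query, k=3):
    """Plain k-nearest-neighbour vote with Euclidean distance."""
    d = np.linalg.norm(X_train[None, :, :] - X_query[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in y_train[nearest]])

# Toy "single-cell spectra": two well-separated classes over 50 bands.
rng = np.random.default_rng(1)
spectra_a = rng.normal(0.2, 0.05, size=(30, 50))   # class 0
spectra_b = rng.normal(0.8, 0.05, size=(30, 50))   # class 1
X = np.vstack([spectra_a, spectra_b])
y = np.array([0] * 30 + [1] * 30)

Z, mu, comps = pca_fit_transform(X, k=3)
pred = knn_predict(Z, y, Z, k=3)
accuracy = (pred == y).mean()
```

On real data the classifier would of course be evaluated on held-out spectra; scoring on the training set here only keeps the sketch short.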

22 pages, 4169 KiB  
Article
Multi-Scale Differentiated Network with Spatial–Spectral Co-Operative Attention for Hyperspectral Image Denoising
by Xueli Chang, Xiaodong Wang, Xiaoyu Huang, Meng Yan and Luxiao Cheng
Appl. Sci. 2025, 15(15), 8648; https://doi.org/10.3390/app15158648 - 5 Aug 2025
Abstract
Hyperspectral image (HSI) denoising is a crucial step in image preprocessing as its effectiveness has a direct impact on the accuracy of subsequent tasks such as land cover classification, target recognition, and change detection. However, existing methods suffer from limitations in effectively integrating multi-scale features and adaptively modeling complex noise distributions, making it difficult to construct effective spatial–spectral joint representations. This often leads to issues like detail loss and spectral distortion, especially when dealing with complex mixed noise. To address these challenges, this paper proposes a multi-scale differentiated denoising network based on spatial–spectral cooperative attention (MDSSANet). The network first constructs a multi-scale image pyramid using three downsampling operations and independently models the features at each scale to better capture noise characteristics at different levels. Additionally, a spatial–spectral cooperative attention module (SSCA) and a differentiated multi-scale feature fusion module (DMF) are introduced. The SSCA module effectively captures cross-spectral dependencies and spatial feature interactions through parallel spectral channel and spatial attention mechanisms. The DMF module adopts a multi-branch parallel structure with differentiated processing to dynamically fuse multi-scale spatial–spectral features and incorporates a cross-scale feature compensation strategy to improve feature representation and mitigate information loss. The experimental results show that the proposed method outperforms state-of-the-art methods across several public datasets, exhibiting greater robustness and superior visual performance in tasks such as handling complex noise and recovering small targets.
(This article belongs to the Special Issue Remote Sensing Image Processing and Application, 2nd Edition)

19 pages, 7531 KiB  
Article
Evaluating the Impact of 2D MRI Slice Orientation and Location on Alzheimer’s Disease Diagnosis Using a Lightweight Convolutional Neural Network
by Nadia A. Mohsin and Mohammed H. Abdulameer
J. Imaging 2025, 11(8), 260; https://doi.org/10.3390/jimaging11080260 - 5 Aug 2025
Abstract
Accurate detection of Alzheimer’s disease (AD) is critical yet challenging for early medical intervention. Deep learning methods, especially convolutional neural networks (CNNs), have shown promising potential for improving diagnostic accuracy using magnetic resonance imaging (MRI). This study aims to identify the most informative combination of MRI slice orientation and anatomical location for AD classification. We propose an automated framework that first selects the most relevant slices using a feature entropy-based method applied to activation maps from a pretrained CNN model. For classification, we employ a lightweight CNN architecture based on depthwise separable convolutions to efficiently analyze the selected 2D MRI slices extracted from preprocessed 3D brain scans. To further interpret model behavior, an attention mechanism is integrated to analyze which feature level contributes the most to the classification process. The model is evaluated on three binary tasks: AD vs. mild cognitive impairment (MCI), AD vs. cognitively normal (CN), and MCI vs. CN. The experimental results show the highest accuracy (97.4%) in distinguishing AD from CN when utilizing the selected slices from the ninth axial segment, followed by the tenth segment of coronal and sagittal orientations. These findings demonstrate the significance of slice location and orientation in MRI-based AD diagnosis and highlight the potential of lightweight CNNs for clinical use.
(This article belongs to the Section AI in Imaging)
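Entropy-based slice ranking of the kind described in this entry can be sketched with intensity histograms. Note the paper scores entropy on CNN activation maps rather than raw intensities, so this is a simplified stand-in:

```python
import numpy as np

def slice_entropy(img, bins=32):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_top_slices(volume, n=2):
    """Rank the 2D slices of a 3D volume by entropy, highest first."""
    scores = [slice_entropy(s) for s in volume]
    order = np.argsort(scores)[::-1]
    return order[:n], scores

# Toy "volume": two flat slices (zero entropy) around one textured slice.
rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)
textured = rng.random((64, 64))
volume = np.stack([flat, textured, flat])

top, scores = select_top_slices(volume, n=1)
```

The selected slices would then be fed to the lightweight depthwise-separable CNN for classification.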

21 pages, 904 KiB  
Article
Ensemble-Based Knowledge Distillation for Identification of Childhood Pneumonia
by Grega Vrbančič and Vili Podgorelec
Electronics 2025, 14(15), 3115; https://doi.org/10.3390/electronics14153115 - 5 Aug 2025
Abstract
Childhood pneumonia remains a key cause of global morbidity and mortality, highlighting the need for accurate and efficient diagnostic tools. Ensemble methods have proven to be among the most successful approaches for identifying childhood pneumonia from chest X-ray images. However, deploying large, complex convolutional neural network models in resource-constrained environments presents challenges due to their high computational demands. Therefore, we propose a novel ensemble-based knowledge distillation method for identifying childhood pneumonia from X-ray images, which utilizes an ensemble of classification models to distill the knowledge to a more efficient student model. Experiments conducted on a chest X-ray dataset show that the distilled student model achieves comparable (statistically not significantly different) predictive performance to that of the Stochastic Gradient with Warm Restarts ensemble method (F1-score on average 0.95 vs. 0.96, respectively), while significantly reducing inference time and decreasing FLOPs by a factor of 6.5. Based on the obtained results, the proposed method highlights the potential of knowledge distillation to enhance the efficiency of complex methods, making them more suitable for utilization in environments with limited computational resources.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
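Ensemble-to-student distillation of the kind described in this entry typically minimizes a temperature-softened KL divergence between the averaged teacher distribution and the student. The sketch below follows that standard recipe; the logits and temperature are made up, and the paper's exact loss may differ:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits_list, T=4.0):
    """KL(teacher_ensemble || student) on temperature-softened probabilities.

    The teacher distribution is the mean of the ensemble members' softened
    softmax outputs; the loss is scaled by T^2, as in the standard recipe.
    """
    teacher = np.mean([softmax(t / T) for t in teacher_logits_list], axis=0)
    student = softmax(student_logits / T)
    kl = (teacher * (np.log(teacher + 1e-12) - np.log(student + 1e-12))).sum(axis=-1)
    return float(T * T * kl.mean())

# Two-teacher toy ensemble on one 3-class example.
t1 = np.array([[2.0, 0.5, -1.0]])
t2 = np.array([[1.8, 0.7, -0.9]])

loss_match = distillation_loss(np.array([[1.9, 0.6, -0.95]]), [t1, t2])  # student agrees
loss_off   = distillation_loss(np.array([[-1.0, 0.5, 2.0]]), [t1, t2])   # student disagrees
```

In training, this term is usually mixed with the ordinary cross-entropy on the hard labels.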

16 pages, 5104 KiB  
Article
Integrating OpenPose for Proactive Human–Robot Interaction Through Upper-Body Pose Recognition
by Shih-Huan Tseng, Jhih-Ciang Chiang, Cheng-En Shiue and Hsiu-Ping Yueh
Electronics 2025, 14(15), 3112; https://doi.org/10.3390/electronics14153112 - 5 Aug 2025
Abstract
This paper introduces a novel system that utilizes OpenPose for skeleton estimation to enable a tabletop robot to interact with humans proactively. By accurately recognizing upper-body poses based on the skeleton information, the robot autonomously approaches individuals and initiates conversations. The contributions of this paper are threefold. Firstly, we conducted a comprehensive data collection process, capturing five different table-front poses: looking down, looking at the screen, looking at the robot, resting the head on hands, and stretching both hands. These poses were selected to represent common interaction scenarios. Secondly, we designed the robot’s dialog content and movement patterns to correspond with the identified table-front poses. By aligning the robot’s responses with the specific pose, we aimed to create a more engaging and intuitive interaction experience for users. Finally, we performed an extensive evaluation by exploring the performance of three classification models—non-linear Support Vector Machine (SVM), Artificial Neural Network (ANN), and convolutional neural network (CNN)—for accurately recognizing table-front poses. We used an Asus Zenbo Junior robot to acquire images and leveraged OpenPose to extract 12 upper-body skeleton points as input for training the classification models. The experimental results indicate that the ANN model outperformed the other models, demonstrating its effectiveness in pose recognition. Overall, the proposed system not only showcases the potential of utilizing OpenPose for proactive human–robot interaction but also demonstrates its real-world applicability. By combining advanced pose recognition techniques with carefully designed dialog and movement patterns, the tabletop robot successfully engages with humans in a proactive manner.

34 pages, 4124 KiB  
Article
Prompt-Gated Transformer with Spatial–Spectral Enhancement for Hyperspectral Image Classification
by Ruimin Han, Shuli Cheng, Shuoshuo Li and Tingjie Liu
Remote Sens. 2025, 17(15), 2705; https://doi.org/10.3390/rs17152705 - 4 Aug 2025
Abstract
Hyperspectral image (HSI) classification is an important task in the field of remote sensing, with far-reaching practical significance. Most Convolutional Neural Networks (CNNs) only focus on local spatial features and ignore global spectral dependencies, making it difficult to completely extract spectral information in HSI. In contrast, Vision Transformers (ViTs) are widely used in HSI due to their superior feature extraction capabilities. However, existing Transformer models have challenges in achieving spectral–spatial feature fusion and maintaining local structural consistency, making it difficult to strike a balance between global modeling capabilities and local representation. To this end, we propose a Prompt-Gated Transformer with a Spatial–Spectral Enhancement (PGTSEFormer) network, which includes a Channel Hybrid Positional Attention Module (CHPA) and Prompt Cross-Former (PCFormer). The CHPA module adopts a dual-branch architecture to concurrently capture spectral and spatial positional attention, thereby enhancing the model’s discriminative capacity for complex feature categories through adaptive weight fusion. PCFormer introduces a Prompt-Gated mechanism and grouping strategy to effectively model cross-regional contextual information, while maintaining local consistency, which significantly enhances the ability for long-distance dependent modeling. Experiments were conducted on five HSI datasets and the results showed that overall accuracies of 97.91%, 98.74%, 99.48%, 99.18%, and 92.57% were obtained on the Indian Pines, Salinas, Botswana, WHU-Hi-LongKou, and WHU-Hi-HongHu datasets. The experimental results show the effectiveness of our proposed approach.

20 pages, 4576 KiB  
Article
Enhanced HoVerNet Optimization for Precise Nuclei Segmentation in Diffuse Large B-Cell Lymphoma
by Gei Ki Tang, Chee Chin Lim, Faezahtul Arbaeyah Hussain, Qi Wei Oung, Aidy Irman Yajid, Sumayyah Mohammad Azmi and Yen Fook Chong
Diagnostics 2025, 15(15), 1958; https://doi.org/10.3390/diagnostics15151958 - 4 Aug 2025
Abstract
Background/Objectives: Diffuse Large B-Cell Lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphoma and demands precise segmentation and classification of nuclei for effective diagnosis and disease severity assessment. This study aims to evaluate the performance of HoVerNet, a deep learning model, for nuclei segmentation and classification in CMYC-stained whole slide images and to assess its integration into a user-friendly diagnostic tool. Methods: A dataset of 122 CMYC-stained whole slide images (WSIs) was used. Pre-processing steps, including stain normalization and patch extraction, were applied to improve input consistency. HoVerNet, a multi-branch neural network, was used for both nuclei segmentation and classification, particularly focusing on its ability to manage overlapping nuclei and complex morphological variations. Model performance was validated using metrics such as accuracy, precision, recall, and F1 score. Additionally, a graphical user interface (GUI) was developed to incorporate automated segmentation, cell counting, and severity assessment functionalities. Results: HoVerNet achieved a validation accuracy of 82.5%, with a precision of 85.3%, recall of 82.6%, and an F1 score of 83.9%. The model showed strong performance in differentiating overlapping and morphologically complex nuclei. The developed GUI enabled real-time visualization and diagnostic support, enhancing the efficiency and usability of DLBCL histopathological analysis. Conclusions: HoVerNet, combined with an integrated GUI, presents a promising approach for streamlining DLBCL diagnostics through accurate segmentation and real-time visualization. Future work will focus on incorporating Vision Transformers and additional staining protocols to improve generalizability and clinical utility.
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)
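Precision, recall, and F1 of the kind reported in this entry follow from standard counts of true positives, false positives, and false negatives. The detection tally below is hypothetical, chosen only to show the arithmetic:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical nuclei-detection tally: 853 matched, 147 spurious, 180 missed.
p, r, f1 = prf1(tp=853, fp=147, fn=180)
```

With these counts, precision is 0.853, recall about 0.826, and F1 about 0.839, i.e., F1 is the harmonic mean of the other two.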

14 pages, 1579 KiB  
Article
Predisposing Anatomical Patellofemoral Factors for Subsequent Patellar Dislocation
by Anna Kupczak, Bartłomiej Wilk, Ewa Tramś, Maciej Liszka, Bartosz Machnio, Aleksandra Jasiniewska, Jerzy Białecki and Rafał Kamiński
Life 2025, 15(8), 1239; https://doi.org/10.3390/life15081239 - 4 Aug 2025
Abstract
Background: Primary patellar dislocation is a relatively uncommon knee injury but carries a high risk of recurrence, particularly in young, physically active individuals. Anatomical features of the patellofemoral joint have been implicated as key contributors to instability. The purpose of this study was to evaluate anatomical risk factors associated with recurrent patellar dislocation following a primary traumatic event, using MRI-based parameters. Methods: Fifty-four patients who sustained a first-time lateral patellar dislocation were included. MRI was used to measure tibial tuberosity–trochlear groove (TT–TG) distance, tibial tuberosity–posterior cruciate ligament (TT–PCL) distance, Insall–Salvati ratio (IS), sulcus angle (SA), patellar tilt angle (PTA), patella length, and patellar tendon length. Trochlear dysplasia was assessed according to the Dejour classification. Recurrence was defined as a subsequent dislocation occurring within three years of the primary injury. Results: Significant differences were observed in TT–TG distance and patellar tendon length (p < 0.05). Patients with recurrent dislocation had lower TT–TG values and shorter patellar tendon lengths. Other parameters, including PTA, IS, and patella height, did not show statistically significant differences. Conclusion: Anatomical factors may contribute to the risk of recurrent patellar dislocation. Identifying these variables using imaging may support clinical decision making and guide individualized treatment plans following primary injury.
(This article belongs to the Section Medical Research)
19 pages, 286 KiB  
Review
Does the Anatomical Type of the Plantaris Tendon Influence the Management of Midportion Achilles Tendinopathy?
by Łukasz Olewnik, Ingrid C. Landfald, Bartosz Gonera, Łukasz Gołek, Aleksandra Szabert-Kajkowska, Andrzej Borowski, Marek Drobniewski, Teresa Vázquez and Kacper Ruzik
J. Clin. Med. 2025, 14(15), 5478; https://doi.org/10.3390/jcm14155478 - 4 Aug 2025
Abstract
Background: Midportion Achilles tendinopathy (Mid-AT) is a complex condition that may be exacerbated by anatomical variations of the plantaris tendon. Recent anatomical studies, particularly the classification proposed by Olewnik et al., have enhanced the understanding of plantaris–Achilles interactions and their clinical implications. Objective: This review aims to assess the anatomical types of the plantaris tendon, their imaging correlates, and the impact of the Olewnik classification on diagnosis, treatment planning, and surgical outcomes in patients with Mid-AT. Methods: We present an evidence-based analysis of the six anatomical types of the plantaris tendon and their relevance to Achilles tendinopathy, with emphasis on MRI and ultrasound (USG) evaluation. A diagnostic and therapeutic algorithm is proposed, and clinical outcomes of both conservative and operative management are compared across tendon types. Results: Types I and V were most strongly associated with symptomatic conflict and showed the highest benefit from surgical resection. Endoscopic approaches were effective in Types II and III, while Type IV typically responded to conservative treatment. Type VI, often misdiagnosed as tarsal tunnel syndrome, required combined neurolysis. The classification significantly improves surgical decision-making, reduces overtreatment, and enhances diagnostic precision. Conclusions: The Olewnik classification provides a reproducible, clinically relevant framework for individualized management of Mid-AT. Its integration into imaging protocols and treatment algorithms may improve therapeutic outcomes and guide future research in orthopaedic tendon pathology. Full article
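The type-to-treatment tendencies summarized in the Results can be sketched as a simple lookup. This is a hypothetical illustration of the mapping stated in the abstract (Types I and V: surgical resection; II and III: endoscopic approaches; IV: conservative treatment; VI: combined neurolysis); the review's full diagnostic and therapeutic algorithm, not this table, should guide actual management:

```python
# Management tendencies per Olewnik plantaris tendon type, as summarized
# in the review abstract. Illustrative only; not a clinical decision tool.
TREATMENT_BY_TYPE = {
    "I": "surgical resection",
    "II": "endoscopic approach",
    "III": "endoscopic approach",
    "IV": "conservative treatment",
    "V": "surgical resection",
    "VI": "combined neurolysis",
}

def suggested_management(tendon_type: str) -> str:
    """Return the management tendency reported for an Olewnik tendon type."""
    key = tendon_type.strip().upper()
    if key not in TREATMENT_BY_TYPE:
        raise ValueError(f"unknown Olewnik type: {tendon_type!r}")
    return TREATMENT_BY_TYPE[key]
```

A table like this only captures the headline result; the paper's algorithm additionally conditions on imaging findings (MRI/USG), which a real implementation would have to encode.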
(This article belongs to the Section Orthopedics)
16 pages, 3834 KiB  
Article
Deep Learning Tongue Cancer Detection Method Based on Mueller Matrix Microscopy Imaging
by Hanyue Wei, Yingying Luo, Feiya Ma and Liyong Ren
Optics 2025, 6(3), 35; https://doi.org/10.3390/opt6030035 - 4 Aug 2025
Abstract
Tongue cancer, the most aggressive subtype of oral cancer, presents critical challenges due to the limited number of specialists available and the time-consuming nature of conventional histopathological diagnosis. To address these issues, we developed an intelligent diagnostic system integrating Mueller matrix microscopy with deep learning to enhance diagnostic accuracy and efficiency. Through Mueller matrix polar decomposition and transformation, micro-polarization feature parameter images were extracted from tongue cancer tissues, and purity parameter images were generated by calculating the purity of the Mueller matrices. A multi-stage feature dataset of Mueller matrix parameter images was constructed using histopathological samples of tongue cancer tissues at varying stages. Based on this dataset, the clinical potential of Mueller matrix microscopy for histopathological diagnosis of tongue cancer was preliminarily validated. Four mainstream medical image classification networks (AlexNet, ResNet50, DenseNet121, and VGGNet16) were employed to quantitatively evaluate classification performance across tongue cancer stages. DenseNet121 achieved the highest classification accuracy of 98.48%, demonstrating its potential as a robust framework for rapid and accurate intelligent diagnosis of tongue cancer. Full article
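The abstract mentions purity images computed from the Mueller matrices. One standard purity measure from polarimetry is the Gil–Bernabeu depolarization index; the paper's exact definition may differ, so the sketch below is an assumption, not the authors' method:

```python
import math

def depolarization_index(M):
    """Gil-Bernabeu depolarization index P_Delta of a 4x4 Mueller matrix.

    P_Delta = sqrt((sum_ij m_ij^2 - m_00^2) / (3 * m_00^2)),
    ranging from 0 (ideal depolarizer) to 1 (non-depolarizing element).
    M is any 4x4 nested sequence of floats with m_00 != 0.
    """
    m00 = M[0][0]
    total = sum(m * m for row in M for m in row)
    return math.sqrt((total - m00 * m00) / (3.0 * m00 * m00))

# A non-depolarizing element (identity Mueller matrix) has purity 1.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
```

Applied per pixel to a measured Mueller matrix image, this scalar yields a purity parameter map of the kind the abstract describes feeding into the classification networks.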