Search Results (857)

Search Parameters:
Keywords = automatic diagnostic

12 pages, 557 KiB  
Article
Advancing Diagnostics with Semi-Automatic Tear Meniscus Central Area Measurement for Aqueous Deficient Dry Eye Discrimination
by Hugo Pena-Verdeal, Jacobo Garcia-Queiruga, Belen Sabucedo-Villamarin, Carlos Garcia-Resua, Maria J. Giraldez and Eva Yebra-Pimentel
Medicina 2025, 61(8), 1322; https://doi.org/10.3390/medicina61081322 - 22 Jul 2025
Viewed by 180
Abstract
Background and Objectives: To clinically validate a semi-automatic measurement of Tear Meniscus Central Area (TMCA) to differentiate between Non-Aqueous Deficient Dry Eye (Non-ADDE) and Aqueous Deficient Dry Eye (ADDE) patients. Materials and Methods: 120 volunteer participants were included in the study. Following TFOS DEWS II diagnostic criteria, a battery of tests was conducted for dry eye diagnosis: Ocular Surface Disease Index questionnaire, tear film osmolarity, tear film break-up time, and corneal staining. Additionally, lower tear meniscus videos were captured with Tearscope illumination and, separately, with fluorescein using slit-lamp blue light and a yellow filter. Tear meniscus height was measured from Tearscope videos to differentiate Non-ADDE from ADDE participants, while TMCA was obtained from fluorescein videos. Both parameters were analyzed using the open-source software NIH ImageJ. Results: Receiver Operating Characteristics analysis showed that semi-automatic TMCA evaluation had significant diagnostic capability to differentiate between Non-ADDE and ADDE participants, with an optimal cut-off value to differentiate between the two groups of 54.62 mm2 (Area Under the Curve = 0.714 ± 0.051, p < 0.001; specificity: 71.7%; sensitivity: 68.9%). Conclusions: The semi-automatic TMCA evaluation showed preliminary valuable results as a diagnostic tool for distinguishing between ADDE and Non-ADDE individuals. Full article
(This article belongs to the Special Issue Advances in Diagnosis and Therapies of Ocular Diseases)
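For orientation only: the 54.62 mm² cut-off and AUC of 0.714 quoted above are the kind of values a Youden-index search over an ROC curve produces. A minimal scikit-learn sketch with hypothetical variable names (is_adde, tmca_mm2), not the authors' analysis code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def roc_optimal_cutoff(y_true, score):
    """Return AUC and the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thr = roc_curve(y_true, score)
    k = int(np.argmax(tpr - fpr))
    return roc_auc_score(y_true, score), thr[k], tpr[k], 1.0 - fpr[k]

# Hypothetical usage: is_adde is 1 for ADDE eyes. If smaller menisci indicate ADDE,
# negate the measurement so higher scores mean "more likely ADDE", then flip the sign
# of the returned threshold back to a cut-off in mm^2.
# auc, cutoff, sensitivity, specificity = roc_optimal_cutoff(is_adde, -tmca_mm2)
```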

16 pages, 2557 KiB  
Article
Explainable AI for Oral Cancer Diagnosis: Multiclass Classification of Histopathology Images and Grad-CAM Visualization
by Jelena Štifanić, Daniel Štifanić, Nikola Anđelić and Zlatan Car
Biology 2025, 14(8), 909; https://doi.org/10.3390/biology14080909 - 22 Jul 2025
Viewed by 267
Abstract
Oral cancer is typically diagnosed through histological examination; however, the primary issue with this type of procedure is tumor heterogeneity, where a subjective aspect of the examination may have a direct effect on the treatment plan for a patient. To reduce inter- and intra-observer variability, artificial intelligence algorithms are often used as computational aids in tumor classification and diagnosis. This research proposes a two-step approach for automatic multiclass grading using oral histopathology images (the first step) and Grad-CAM visualization (the second step) to assist clinicians in diagnosing oral squamous cell carcinoma. The Xception architecture achieved the highest classification values of 0.929 (±σ = 0.087) AUCmacro and 0.942 (±σ = 0.074) AUCmicro. Additionally, Grad-CAM provided visual explanations of the model’s predictions by highlighting the precise areas of histopathology images that influenced the model’s decision. These results emphasize the potential of integrated AI algorithms in medical diagnostics, offering a more precise, dependable, and effective method for disease analysis. Full article
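Grad-CAM heatmaps of the kind described can be generated from any trained Keras CNN in a few lines. A generic sketch, assuming a Keras model whose last convolutional activation is known (for Xception this is typically "block14_sepconv2_act"); it is not the authors' implementation:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Grad-CAM heatmap in [0, 1] for one image of shape (H, W, C)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])           # explain the predicted class
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)        # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # global-average-pooled gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```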

27 pages, 3888 KiB  
Article
Deep Learning-Based Algorithm for the Classification of Left Ventricle Segments by Hypertrophy Severity
by Wafa Baccouch, Bilel Hasnaoui, Narjes Benameur, Abderrazak Jemai, Dhaker Lahidheb and Salam Labidi
J. Imaging 2025, 11(7), 244; https://doi.org/10.3390/jimaging11070244 - 20 Jul 2025
Viewed by 322
Abstract
In clinical practice, left ventricle hypertrophy (LVH) continues to pose a considerable challenge, highlighting the need for more reliable diagnostic approaches. This study aims to propose an automated framework for the quantification of LVH extent and the classification of myocardial segments according to hypertrophy severity using a deep learning-based algorithm. The proposed method was validated on 133 subjects, including both healthy individuals and patients with LVH. The process starts with automatic LV segmentation using U-Net and the segmentation of the left ventricle cavity based on the American Heart Association (AHA) standards, followed by the division of each segment into three equal sub-segments. Then, an automated quantification of regional wall thickness (RWT) was performed. Finally, a convolutional neural network (CNN) was developed to classify each myocardial sub-segment according to hypertrophy severity. The proposed approach demonstrates strong performance in contour segmentation, achieving a Dice Similarity Coefficient (DSC) of 98.47% and a Hausdorff Distance (HD) of 6.345 ± 3.5 mm. For thickness quantification, it reaches a minimal mean absolute error (MAE) of 1.01 ± 1.16. Regarding segment classification, it achieves competitive performance metrics compared to state-of-the-art methods with an accuracy of 98.19%, a precision of 98.27%, a recall of 99.13%, and an F1-score of 98.7%. The obtained results confirm the high performance of the proposed method and highlight its clinical utility in accurately assessing and classifying cardiac hypertrophy. This approach provides valuable insights that can guide clinical decision-making and improve patient management strategies. Full article
(This article belongs to the Section Medical Imaging)
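The Dice Similarity Coefficient and Hausdorff Distance reported above are standard segmentation metrics; a compact NumPy/SciPy sketch over binary masks (assuming isotropic pixel spacing, not the authors' code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + 1e-8)

def hausdorff_mm(pred, truth, spacing_mm=1.0):
    """Symmetric Hausdorff distance between the two mask point sets, in millimetres."""
    p = np.argwhere(pred) * spacing_mm
    t = np.argwhere(truth) * spacing_mm
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```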

24 pages, 2267 KiB  
Article
A Mechanical Fault Diagnosis Method for On-Load Tap Changers Based on GOA-Optimized FMD and Transformer
by Ruifeng Wei, Zhenjiang Chen, Qingbo Wang, Yongsheng Duan, Hui Wang, Feiming Jiang, Daoyuan Liu and Xiaolong Wang
Energies 2025, 18(14), 3848; https://doi.org/10.3390/en18143848 - 19 Jul 2025
Viewed by 297
Abstract
Mechanical failures frequently occur in On-Load Tap Changers (OLTCs) during operation, potentially compromising the reliability and stability of power systems. The goal of this study is to develop an intelligent and accurate diagnostic approach for OLTC mechanical fault identification, particularly under the challenge of non-stationary vibration signals. To achieve this, a novel hybrid method is proposed that integrates the Gazelle Optimization Algorithm (GOA), Feature Mode Decomposition (FMD), and a Transformer-based classification model. Specifically, GOA is employed to automatically optimize key FMD parameters, including the number of filters (K), filter length (L), and number of decomposition modes (N), enabling high-resolution signal decomposition. From the resulting intrinsic mode functions (IMFs), statistical time domain features—peak factor, impulse factor, waveform factor, and clearance factor—are extracted to form feature vectors. After feature extraction, the resulting vectors are utilized by a Transformer to classify fault types. Benchmark comparisons with other decomposition and learning approaches highlight the enhanced performance of the proposed framework. The model achieves a 95.83% classification accuracy on the test set and an average of 96.7% under five-fold cross-validation, demonstrating excellent accuracy and generalization. What distinguishes this research is its incorporation of a GOA–FMD and a Transformer-based attention mechanism for pattern recognition into a unified and efficient diagnostic framework. With its high effectiveness and adaptability, the proposed framework shows great promise for real-world applications in the smart fault monitoring of power systems. Full article
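The four dimensionless time-domain factors extracted from each FMD mode have standard definitions; a small NumPy sketch (the GOA parameter search and the Transformer classifier are outside its scope, and variable names are hypothetical):

```python
import numpy as np

def dimensionless_factors(x):
    """Peak (crest), impulse, waveform and clearance factors of one decomposed mode."""
    x = np.asarray(x, dtype=float)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    abs_mean = np.mean(np.abs(x))
    clearance_den = np.mean(np.sqrt(np.abs(x))) ** 2
    return np.array([peak / rms,             # peak factor
                     peak / abs_mean,        # impulse factor
                     rms / abs_mean,         # waveform factor
                     peak / clearance_den])  # clearance factor

# feature_vector = np.concatenate([dimensionless_factors(imf) for imf in imfs])  # classifier input
```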

17 pages, 3639 KiB  
Article
Automatic Fracture Detection Convolutional Neural Network with Multiple Attention Blocks Using Multi-Region X-Ray Data
by Rashadul Islam Sumon, Mejbah Ahammad, Md Ariful Islam Mozumder, Md Hasibuzzaman, Salam Akter, Hee-Cheol Kim, Mohammad Hassan Ali Al-Onaizan, Mohammed Saleh Ali Muthanna and Dina S. M. Hassan
Life 2025, 15(7), 1135; https://doi.org/10.3390/life15071135 - 18 Jul 2025
Viewed by 348
Abstract
Accurate detection of fractures in X-ray images is important for initiating appropriate medical treatment in time. In this study, an advanced combined-attention CNN model with multiple attention mechanisms was developed to improve fracture detection through deeper feature representation. Specifically, the model incorporates squeeze blocks and convolutional block attention module (CBAM) blocks to improve its ability to focus on relevant features in X-ray images. Using computed tomography X-ray images, this study assesses the diagnostic efficacy of the artificial intelligence (AI) model before and after optimization and compares its performance in detecting the presence or absence of fractures. The training and evaluation dataset consists of fractured and non-fractured X-rays from various anatomical locations, including the hips, knees, lumbar region, lower limb, and upper limb. The model achieves a training accuracy of 99.98% and a validation accuracy of 96.72%, showcasing the role of attention-based CNNs in medical image analysis and reinforcing the point, highlighted throughout this research, that attention mechanisms in CNN-based architectures help the model reach the desired fracture-detection performance and generalize. This study represents a first step toward improving automatic fracture detection. It also offers solid support to doctors by reducing time to examination and increasing accuracy in diagnosing fractures, thereby improving patient outcomes.
(This article belongs to the Section Radiobiology and Nuclear Medicine)
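For readers unfamiliar with CBAM, the block applies channel attention followed by spatial attention; a minimal PyTorch sketch of the generic module (not the authors' exact architecture, which also uses squeeze blocks):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, hidden), nn.ReLU(),
            nn.Linear(hidden, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                               # x: (B, C, H, W) feature map
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))              # channel descriptor from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))               # channel descriptor from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),     # (B, 2, H, W): per-pixel avg and max
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))       # spatial attention map (B, 1, H, W)
```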

27 pages, 3817 KiB  
Article
A Deep Learning-Based Diagnostic Framework for Shaft Earthing Brush Faults in Large Turbine Generators
by Katudi Oupa Mailula and Akshay Kumar Saha
Energies 2025, 18(14), 3793; https://doi.org/10.3390/en18143793 - 17 Jul 2025
Viewed by 207
Abstract
Large turbine generators rely on shaft earthing brushes to safely divert harmful shaft currents to ground, protecting bearings from electrical damage. This paper presents a novel deep learning-based diagnostic framework to detect and classify faults in shaft earthing brushes of large turbine generators. A key innovation lies in the use of FFT-derived spectrograms from both voltage and current waveforms as dual-channel inputs to the CNN, enabling automatic feature extraction of time–frequency patterns associated with different SEB fault types. The proposed framework combines advanced signal processing and convolutional neural networks (CNNs) to automatically recognize fault-related patterns in shaft grounding current and voltage signals. In the approach, raw time-domain signals are converted into informative time–frequency representations, which serve as input to a CNN model trained to distinguish normal and faulty conditions. The framework was evaluated using data from a fleet of large-scale generators under various brush fault scenarios (e.g., increased brush contact resistance, loss of brush contact, worn out brushes, and brush contamination). Experimental results demonstrate high fault detection accuracy (exceeding 98%) and the reliable identification of different fault types, outperforming conventional threshold-based monitoring techniques. The proposed deep learning framework offers a novel intelligent monitoring solution for predictive maintenance of turbine generators. The contributions include the following: (1) the development of a specialized deep learning model for shaft earthing brush fault diagnosis, (2) a systematic methodology for feature extraction from shaft current signals, and (3) the validation of the framework on real-world fault data. This work enables the early detection of brush degradation, thereby reducing unplanned downtime and maintenance costs in power generation facilities. Full article
(This article belongs to the Section F: Electrical Engineering)
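The dual-channel time–frequency input described above can be approximated with off-the-shelf tools; a sketch using scipy.signal.spectrogram, with hypothetical sampling parameters rather than the paper's exact FFT settings:

```python
import numpy as np
from scipy.signal import spectrogram

def dual_channel_input(voltage, current, fs, nperseg=1024):
    """Stack log-magnitude spectrograms of shaft voltage and current as a 2-channel image."""
    channels = []
    for signal in (voltage, current):
        _, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        channels.append(np.log1p(sxx))          # compress the dynamic range
    return np.stack(channels, axis=-1)          # shape: (freq_bins, time_frames, 2) for the CNN
```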

15 pages, 16898 KiB  
Article
Cross-Scale Hypergraph Neural Networks with Inter–Intra Constraints for Mitosis Detection
by Jincheng Li, Danyang Dong, Yihui Zhan, Guanren Zhu, Hengshuo Zhang, Xing Xie and Lingling Yang
Sensors 2025, 25(14), 4359; https://doi.org/10.3390/s25144359 - 12 Jul 2025
Viewed by 375
Abstract
Mitotic figures in tumor tissues are an important criterion for diagnosing malignant lesions, and physicians often search for the presence of mitosis in whole slide imaging (WSI). However, prolonged visual inspection by doctors may increase the likelihood of human error. With the advancement of deep learning, AI-based automatic cytopathological diagnosis has been increasingly applied in clinical settings. Nevertheless, existing diagnostic models often suffer from high computational costs and suboptimal detection accuracy. More importantly, when assessing cellular abnormalities, doctors frequently compare target cells with their surrounding cells—an aspect that current models fail to capture due to their lack of intercellular information modeling, leading to the loss of critical medical insights. To address these limitations, we conducted an in-depth analysis of existing models and propose an Inter–Intra Hypergraph Neural Network (II-HGNN). Our model introduces a block-based feature extraction mechanism to efficiently capture deep representations. Additionally, we leverage hypergraph convolutional networks to process both intracellular and intercellular information, leading to more precise diagnostic outcomes. We evaluate our model on publicly available datasets under varying imaging conditions, and experimental results demonstrate that our approach consistently outperforms baseline models in terms of accuracy. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Imaging Sensors and Processing)
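The II-HGNN itself is not reproduced here, but the hypergraph convolution it builds on has a standard closed form; a NumPy sketch with hypothetical shapes, in which each hyperedge groups a target cell with its surrounding cells so that inter- and intracellular context is mixed in one step:

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution: ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta).

    X: (n_cells, in_dim) node features; H: (n_cells, n_edges) incidence matrix whose
    hyperedges group each target cell with its neighbours; Theta: (in_dim, out_dim) weights.
    """
    w = edge_w if edge_w is not None else np.ones(H.shape[1])
    dv = H @ w                                   # weighted vertex degrees
    de = H.sum(axis=0)                           # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(dv + 1e-8))
    De = np.diag(1.0 / (de + 1e-8))
    A = Dv @ H @ np.diag(w) @ De @ H.T @ Dv      # normalized hypergraph operator
    return np.maximum(A @ X @ Theta, 0.0)        # ReLU
```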

18 pages, 1663 KiB  
Article
CNN-Based Framework for Classifying COVID-19, Pneumonia, and Normal Chest X-Rays
by Cristian Randieri, Andrea Perrotta, Adriano Puglisi, Maria Grazia Bocci and Christian Napoli
Big Data Cogn. Comput. 2025, 9(7), 186; https://doi.org/10.3390/bdcc9070186 - 11 Jul 2025
Cited by 1 | Viewed by 524
Abstract
This paper describes the development of a CNN model for the analysis of chest X-rays and the automated diagnosis of pneumonia, bacterial or viral, and lung pathologies resulting from COVID-19, offering new insights for further research through the development of an AI-based diagnostic tool, which can be automatically implemented and made available for rapid differentiation between normal pneumonia and COVID-19 starting from X-ray images. The model developed in this work is capable of performing three-class classification, achieving 97.48% accuracy in distinguishing chest X-rays affected by COVID-19 from other pneumonias (bacterial or viral) and from cases defined as normal, i.e., without any obvious pathology. The novelty of our study is represented not only by the quality of the results obtained in terms of accuracy but, above all, by the reduced complexity of the model in terms of parameters and a shorter inference time compared to other models currently found in the literature. The excellent trade-off between the accuracy and computational complexity of our model allows for easy implementation on numerous embedded hardware platforms, such as FPGAs, for the creation of new diagnostic tools to support medical practice. Full article
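The paper's exact architecture is not shown here; purely to illustrate the low-parameter, three-class setup it describes (COVID-19 vs. other pneumonia vs. normal), a deliberately small Keras sketch:

```python
import tensorflow as tf

def build_light_cnn(input_shape=(224, 224, 1), n_classes=3):
    """A deliberately small CNN for COVID-19 / other pneumonia / normal chest X-rays."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```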

19 pages, 13316 KiB  
Article
Mapping of Closed Depressions in Karst Terrains: A GIS-Based Delineation of Endorheic Catchments in the Alburni Massif (Southern Apennine, Italy)
by Libera Esposito, Guido Leone, Michele Ginolfi, Saman Abbasi Chenari and Francesco Fiorillo
Hydrology 2025, 12(7), 186; https://doi.org/10.3390/hydrology12070186 - 10 Jul 2025
Viewed by 368
Abstract
A deep interaction between groundwater and surface hydrology characterizes karst environments. These settings feature closed depressions, whose hydrological role varies depending on whether they have genetic and hydraulic relationships with overland–subsurface flow (epigenic) or deep groundwater circulation (hypogenic). Epigenic dolines and poljes are among the diagnostic landforms of karst terrains. In this study, we applied a hydrological criterion to map closed depressions—including dolines—across the Alburni karst massif, in southern Italy. A GIS-based, semi-automatic approach was employed, combining the sink-filling method (applied to a 5 m DEM) with the visual interpretation of various informative layers. This process produced a raster representing the location and depth of karst closed depressions. This raster was then used to automatically delineate endorheic areas using classic GIS tools. The resulting map reveals a thousand dolines and hundreds of adjacent endorheic areas. Endorheic areas form a complex mosaic across the massif, a feature that had been poorly emphasized in previous works. The main morphometric features of the dolines and endorheic areas were statistically analyzed and compared with the structural characteristics of the massif. The results of the proposed mapping approach provide valuable insights for groundwater management, karst area protection, recharge modeling, and tracer test planning. Full article
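The sink-filling step can be illustrated outside a GIS with morphological reconstruction; a sketch over a NumPy DEM array, which is a simplification of the authors' 5 m DEM workflow (their approach also relies on visual interpretation of auxiliary layers):

```python
import numpy as np
from skimage.morphology import reconstruction

def closed_depression_depth(dem):
    """Depth raster of closed depressions: fill the DEM by morphological reconstruction
    by erosion seeded from its border, then subtract the original surface."""
    seed = dem.copy()
    seed[1:-1, 1:-1] = dem.max()                 # interior seeded high, border kept as-is
    filled = reconstruction(seed, dem, method="erosion")
    return filled - dem                          # > 0 inside dolines and other sinks
```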

30 pages, 4399 KiB  
Article
Confident Learning-Based Label Correction for Retinal Image Segmentation
by Tanatorn Pethmunee, Supaporn Kansomkeat, Patama Bhurayanontachai and Sathit Intajag
Diagnostics 2025, 15(14), 1735; https://doi.org/10.3390/diagnostics15141735 - 8 Jul 2025
Viewed by 300
Abstract
Background/Objectives: In automatic medical image analysis, particularly for diabetic retinopathy, the accuracy of labeled data is crucial, as label noise can significantly complicate the analysis and lead to diagnostic errors. To tackle the issue of label noise in retinal image segmentation, an innovative label correction framework is introduced that combines Confident Learning (CL) with a human-in-the-loop re-annotation process to meticulously detect and rectify pixel-level labeling inaccuracies. Methods: Two CL-oriented strategies are assessed: Confident Joint Analysis (CJA) employing DeeplabV3+ with a ResNet-50 architecture, and Prune by Noise Rate (PBNR) utilizing ResNet-18. These methodologies are implemented on four publicly available retinal image datasets: HRF, STARE, DRIVE, and CHASE_DB1. After the models have been trained on the original labeled datasets, label noise is quantified, and amendments are executed on suspected misclassified pixels prior to the assessment of model performance. Results: The reduction in label noise yielded consistent advancements in accuracy, Intersection over Union (IoU), and weighted IoU across all the datasets. The segmentation of tiny structures, such as the fovea, demonstrated a significant enhancement following refinement. The Mean Boundary F1 Score (MeanBFScore) remained invariant, signifying the maintenance of boundary integrity. CJA and PBNR demonstrated strengths under different conditions, producing variations in performance that were dependent on the noise level and dataset characteristics. CL-based label correction techniques, when amalgamated with human refinement, could significantly enhance the segmentation accuracy and evaluation robustness for Accuracy, IoU, and MeanBFScore, achieving values of 0.9156, 0.8037, and 0.9856, respectively, with regard to the original ground truth, reflecting increases of 4.05%, 9.95%, and 1.28% respectively. Conclusions: This methodology represents a feasible and scalable solution to the challenge of label noise in medical image analysis, holding particular significance for real-world clinical applications. Full article
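Pixel-level label-issue detection in the CL/PBNR spirit is available in the open-source cleanlab package; a hedged sketch assuming that library (the authors' exact tooling is not stated here), with hypothetical inputs:

```python
import numpy as np
from cleanlab.filter import find_label_issues

def flag_suspect_pixels(labels, pred_probs):
    """Flag likely mislabeled pixels for human-in-the-loop re-annotation.

    labels: (n_pixels,) integer class of each sampled ground-truth pixel.
    pred_probs: (n_pixels, n_classes) out-of-sample softmax outputs of the segmentation model.
    """
    issues = find_label_issues(
        labels=labels,
        pred_probs=pred_probs,
        filter_by="prune_by_noise_rate",   # PBNR-style filtering; "confident_learning" is another option
    )
    return np.where(issues)[0]             # indices handed to the human reviewer
```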

23 pages, 5584 KiB  
Article
Machine Learning and Deep Learning Hybrid Approach Based on Muscle Imaging Features for Diagnosis of Esophageal Cancer
by Yuan Hong, Hanlin Wang, Qi Zhang, Peng Zhang, Kang Cheng, Guodong Cao, Renquan Zhang and Bo Chen
Diagnostics 2025, 15(14), 1730; https://doi.org/10.3390/diagnostics15141730 - 8 Jul 2025
Viewed by 373
Abstract
Background: The rapid advancement of radiomics and artificial intelligence (AI) technology has provided novel tools for the diagnosis of esophageal cancer. This study innovatively combines muscle imaging features with conventional esophageal imaging features to construct deep learning diagnostic models. Methods: This retrospective study included 1066 patients undergoing radical esophagectomy. Preoperative computed tomography (CT) images covering esophageal, stomach, and muscle (bilateral iliopsoas and erector spinae) regions were segmented automatically with manual adjustments. Diagnostic models were developed using deep learning (2D and 3D neural networks) and traditional machine learning (11 algorithms with PyRadiomics-derived features). Multimodal features underwent Principal Component Analysis (PCA) for dimension reduction and were fused for final analysis. Results: Comparative analysis of 1066 patients’ CT imaging revealed the muscle-based model outperformed the esophageal plus stomach model in predicting N2 staging (0.63 ± 0.11 vs. 0.52 ± 0.11, p = 0.03). Subsequently, multimodal fusion models were established for predicting pathological subtypes, T staging, and N staging. The logistic regression (LR) fusion model showed optimal performance in predicting pathological subtypes, achieving accuracy (ACC) of 0.919 in the training set and 0.884 in the validation set. For predicting T staging, the support vector machine (SVM) model demonstrated the highest accuracy, with training and validation accuracies of 0.909 and 0.907, respectively. The multilayer perceptron (MLP) fusion model achieved the best performance among all models tested for N staging prediction, although the accuracy remained moderate (ACC = 0.704 in the training set and 0.685 in the validation set), indicating potential for further optimization. Fusion models significantly outperformed single-modality models. Conclusions: Based on CT imaging data from 1066 patients, this study systematically constructed predictive models for pathological subtypes, T staging, and N staging of esophageal cancer. Comparative analysis of models using esophageal, esophageal plus stomach, and muscle modalities demonstrated that muscle imaging features contribute to diagnostic accuracy. Multimodal fusion models consistently showed superior performance. Full article
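Feature fusion with PCA followed by a classical classifier, as described, can be sketched with scikit-learn; the per-modality feature matrices and component count below are hypothetical, not the study's pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_fusion_model(esophagus_feats, stomach_feats, muscle_feats, labels, n_components=32):
    """Concatenate per-modality radiomics features, reduce with PCA, fit a LR fusion model."""
    X = np.hstack([esophagus_feats, stomach_feats, muscle_feats])   # (n_patients, n_features)
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          LogisticRegression(max_iter=1000))
    return model.fit(X, labels)
```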

27 pages, 14035 KiB  
Article
Unsupervised Segmentation and Classification of Waveform-Distortion Data Using Non-Active Current
by Andrea Mariscotti, Rafael S. Salles and Sarah K. Rönnberg
Energies 2025, 18(13), 3536; https://doi.org/10.3390/en18133536 - 4 Jul 2025
Viewed by 332
Abstract
Non-active current in the time domain is considered for application to the diagnostics and classification of loads in power grids based on waveform-distortion characteristics, taking as a working example several recordings of the pantograph current in an AC railway system. Data are processed with a deep autoencoder for feature extraction and then clustered via k-means to allow identification of patterns in the latent space. Clustering enables the evaluation of the relationship between the physical meaning and operation of the system and the distortion phenomena emerging in the waveforms during operation. Euclidean distance (ED) is used to measure the diversity and pertinence of observations within pattern groups and to identify anomalies (abnormal distortion, transients, …). This approach allows the classification of new data by assigning data to clusters based on proximity to centroids. This unsupervised method exploiting non-active current is novel and has proven useful for providing data with labels for later supervised learning performed with the 1D-CNN, which achieved a balanced accuracy of 96.46% under normal conditions. ED and 1D-CNN methods were tested on an additional unlabeled dataset and achieved 89.56% agreement in identifying normal states. Additionally, Grad-CAM, when applied to the 1D-CNN, quantitatively identifies the waveform parts that influence the model predictions, significantly enhancing the interpretability of the classification results. This is particularly useful for obtaining a better understanding of load operation, including anomalies that affect grid stability and energy efficiency. Finally, the method has been also successfully further validated for general applicability with data from a different scenario (charging of electric vehicles). The method can be applied to load identification and classification for non-intrusive load monitoring, with the aim of implementing automatic and unsupervised assessment of load behavior, including transient detection, power-quality issues and improvement in energy efficiency. Full article
(This article belongs to the Section F: Electrical Engineering)
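Non-active current has a simple time-domain definition in the common Fryze sense; a NumPy sketch over one analysis window (the paper's exact formulation may differ, and the autoencoder, k-means and 1D-CNN stages are beyond its scope):

```python
import numpy as np

def non_active_current(v, i):
    """Fryze-style split over one analysis window: the active current carries the average
    power, and the non-active remainder carries reactive and distortion content."""
    P = np.mean(v * i)                    # active power over the window
    i_active = (P / np.mean(v ** 2)) * v  # current proportional to the voltage waveform
    return i - i_active                   # non-active current fed to the feature extractor
```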

15 pages, 5283 KiB  
Article
An Integrated System for Detecting and Numbering Permanent and Deciduous Teeth Across Multiple Types of Dental X-Ray Images Based on YOLOv8
by Ya-Yun Huang, Chiung-An Chen, Yi-Cheng Mao, Chih-Han Li, Bo-Wei Li, Tsung-Yi Chen, Wei-Chen Tu and Patricia Angela R. Abu
Diagnostics 2025, 15(13), 1693; https://doi.org/10.3390/diagnostics15131693 - 2 Jul 2025
Viewed by 496
Abstract
Background/Objectives: In dental medicine, the integration of various types of X-ray images, such as periapical (PA), bitewing (BW), and panoramic (PANO) radiographs, is crucial for comprehensive oral health assessment. These complementary imaging modalities provide diverse diagnostic perspectives and support the early detection of oral diseases, thereby enhancing treatment outcomes. However, there is currently no existing system that integrates multiple types of dental X-rays for both adults and children to perform tooth localization and numbering. Methods: Therefore, this study aimed to propose a system based on YOLOv8 that integrates multiple dental X-ray images and automatically detects and numbers both permanent and deciduous teeth. Through image preprocessing, various types of dental X-ray images were standardized and enhanced to improve the recognition accuracy of individual teeth. Results: With the implementation of a novel image preprocessing method, the system achieved a detection precision of 98.16% for permanent and deciduous teeth, representing a 3% improvement over models without image enhancement. In addition, the system attained an average tooth numbering accuracy of 98.5% for permanent teeth and 96.3% for deciduous teeth, surpassing existing methods by 5.6%. Conclusions: These results might highlight the innovation of the proposed image processing method and show its practical value in assisting clinicians with accurate diagnosis of tooth loss and the identification of missing teeth, ultimately contributing to improved diagnosis and treatment in dental care. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
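Detection and numbering with YOLOv8 typically runs through the ultralytics API; a sketch assuming hypothetical fine-tuned weights and FDI-style class names, not the study's trained model:

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on PA/BW/PANO radiographs with FDI tooth-number classes.
model = YOLO("tooth_numbering_yolov8.pt")

results = model.predict("preprocessed_bitewing.png", conf=0.25)
for box in results[0].boxes:
    fdi_label = results[0].names[int(box.cls)]          # e.g. "36", or "75" for a deciduous tooth
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"tooth {fdi_label}: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), conf={float(box.conf):.2f}")
```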

42 pages, 5287 KiB  
Article
Enhancing Early Detection of Oral Squamous Cell Carcinoma: A Deep Learning Approach with LRT-Enhanced EfficientNet-B3 for Accurate and Efficient Histopathological Diagnosis
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2025, 15(13), 1678; https://doi.org/10.3390/diagnostics15131678 - 1 Jul 2025
Viewed by 664
Abstract
Background/Objectives: Oral cancer, particularly oral squamous cell carcinoma (OSCC), ranks as the sixth most prevalent cancer globally, with rates of occurrence on the rise. The diagnosis of OSCC primarily depends on histopathological images (HIs), but this method can be time-intensive, expensive, and reliant on specialized expertise. Manual diagnosis often leads to inaccuracies and inconsistencies, highlighting the urgent need for automated and dependable diagnostic solutions to enhance early detection and treatment success. Methods: This research introduces a deep learning (DL) approach utilizing EfficientNet-B3, complemented by learning rate tuning (LRT), to identify OSCC from histopathological images. The model is designed to automatically modify the learning rate based on the accuracy and loss during training, which improves its overall performance. Results: When evaluated using the oral tumor dataset from the multi-cancer dataset, the model demonstrated impressive results, achieving an accuracy of 99.84% and a specificity of 99.92%, along with other strong performance metrics. Conclusions: These findings indicate its potential to simplify the diagnostic process, lower costs, and enhance patient outcomes in clinical settings. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence to Oral Diseases)
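The learning-rate tuning idea (adjusting the rate from training accuracy and loss) is loosely comparable to Keras's built-in ReduceLROnPlateau callback; a sketch of that simpler stand-in around an EfficientNet-B3 backbone, not the authors' LRT scheme:

```python
import tensorflow as tf

# Not the authors' LRT scheme; ReduceLROnPlateau is the closest built-in idea: drop the
# learning rate whenever the monitored validation loss stops improving.
base = tf.keras.applications.EfficientNetB3(include_top=False, pooling="avg",
                                            input_shape=(300, 300, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

lr_tuner = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                                patience=2, min_lr=1e-6)
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[lr_tuner])  # datasets omitted
```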

15 pages, 4377 KiB  
Article
Quantitative Measures of Pure Ground-Glass Nodules from an Artificial Intelligence Software for Predicting Invasiveness of Pulmonary Adenocarcinoma on Low-Dose CT: A Multicenter Study
by Yu Long, Yong Li, Yongji Zheng, Wei Lin, Haomiao Qing, Peng Zhou and Jieke Liu
Biomedicines 2025, 13(7), 1600; https://doi.org/10.3390/biomedicines13071600 - 30 Jun 2025
Viewed by 367
Abstract
Objectives: Deep learning-based artificial intelligence (AI) tools have been gradually used to detect and segment pulmonary nodules in clinical practice. This study aimed to assess the diagnostic performance of quantitative measures derived from a commercially available AI software for predicting the invasiveness of pulmonary adenocarcinomas that manifested as pure ground-glass nodules (pGGNs) on low-dose CT (LDCT) in lung cancer screening. Methods: A total of 388 pGGNs were consecutively enrolled and divided into a training cohort (198 from center 1 between February 2019 and April 2022), testing cohort (99 from center 1 between April 2022 and March 2023), and external validation cohort (91 from centers 2 and 3 between January 2021 and August 2023). The automatically extracted quantitative measures included diameter, volume, attenuation, and mass. The diameter was also manually measured by radiologists. The agreement of diameter between AI and radiologists was evaluated by intra-class correlation coefficient (ICC) and Bland–Altman method. The diagnostic performance was evaluated by the area under curve (AUC) of receiver operating characteristic curve. Results: The ICCs of diameter between AI and radiologists were from 0.972 to 0.981 and Bland–Altman biases were from −1.9% to −2.3%. The mass showed the highest AUCs of 0.915 (0.867–0.950), 0.913 (0.840–0.960), and 0.893 (0.810–0.948) in the training, testing, and external validation cohorts, which were higher than those of diameters of radiologists and AI, volume, and attenuation (all p < 0.05). Conclusions: The automated measurement of pGGNs diameter using the AI software demonstrated comparable accuracy to that of radiologists on LDCT images. Among the quantitative measures of diameter, volume, attenuation, and mass, mass was the most optimal predictor of invasiveness in pulmonary adenocarcinomas on LDCT, which might be used to assist clinical decision of pGGNs during lung cancer screening. Full article
(This article belongs to the Special Issue Applications of Imaging Technology in Human Diseases)
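The Bland–Altman biases of −1.9% to −2.3% quoted above come from a standard agreement analysis; a NumPy sketch over paired AI and radiologist diameters (hypothetical arrays, not the study's code):

```python
import numpy as np

def bland_altman_percent(ai_diam, reader_diam):
    """Bias and 95% limits of agreement between AI and radiologist diameters, in percent."""
    ai, rd = np.asarray(ai_diam, float), np.asarray(reader_diam, float)
    diff_pct = 100.0 * (ai - rd) / ((ai + rd) / 2.0)    # percent difference per nodule
    bias = diff_pct.mean()
    half_width = 1.96 * diff_pct.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```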
