Search Results (481)

Search Parameters:
Keywords = unsupervised image classification

30 pages, 3730 KB  
Article
Deep Learning Analysis of CBCT Images for Periodontal Disease: Phenotype-Level Concordance with Independent Transcriptomic and Microbiome Datasets
by Ștefan Lucian Burlea, Călin Gheorghe Buzea, Florin Nedeff, Diana Mirilă, Valentin Nedeff, Maricel Agop, Lăcrămioara Ochiuz and Adina Oana Armencia
Dent. J. 2025, 13(12), 578; https://doi.org/10.3390/dj13120578 - 3 Dec 2025
Viewed by 465
Abstract
Background: Periodontitis is a common inflammatory disease characterized by progressive loss of alveolar bone. Cone-beam computed tomography (CBCT) can visualize 3D periodontal bone defects, but its interpretation is time-consuming and examiner-dependent. Deep learning may support standardized CBCT assessment if performance and biological relevance are adequately characterized. Methods: We used the publicly available MMDental dataset (403 CBCT volumes from 403 patients) to train a 3D ResNet-18 classifier for binary discrimination between periodontitis and healthy status based on volumetric CBCT scans. Volumes were split by subject into training (n = 282), validation (n = 60), and test (n = 61) sets. Model performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision–recall curve (AUPRC), and calibration metrics with 95% bootstrap confidence intervals. Grad-CAM saliency maps were used to visualize the anatomical regions driving predictions. To explore phenotype-level biological concordance, we analyzed an independent gingival transcriptomic cohort (GSE10334, n ≈ 220 arrays after quality control) and an independent oral microbiome cohort based on 16S rRNA amplicon sequencing, using unsupervised clustering, differential expression/abundance testing, and pathway-level summaries. Results: On the held-out CBCT test set, the model achieved an AUROC of 0.729 (95% CI: 0.599–0.850) and an AUPRC of 0.551 (95% CI: 0.404–0.727). At a high-sensitivity operating point (sensitivity 0.95), specificity was 0.48, yielding an overall accuracy of 0.62. Grad-CAM maps consistently highlighted the alveolar crest and furcation regions in periodontitis cases, in line with expected patterns of bone loss. In the transcriptomic cohort, inferred periodontitis samples showed up-regulation of inflammatory and osteoclast-differentiation pathways and down-regulation of extracellular-matrix and mitochondrial programs. In the microbiome cohort, disease-associated samples displayed a dysbiotic shift with enrichment of classic periodontal pathogens and depletion of health-associated commensals. These omics patterns are consistent with an inflammatory–osteolytic phenotype that conceptually aligns with the CBCT-defined disease class. Conclusions: This study presents a proof-of-concept 3D deep learning model for CBCT-based periodontal disease classification that achieves moderate discriminative performance and anatomically plausible saliency patterns. Independent transcriptomic and microbiome analyses support phenotype-level biological concordance with the imaging-defined disease class, but do not constitute subject-level multimodal validation. Given the modest specificity, single-center imaging source, and inferred labels in the omics cohorts, our findings should be interpreted as exploratory and hypothesis-generating. Larger, multi-center CBCT datasets and prospectively collected paired imaging–omics cohorts are needed before clinical implementation can be considered. Full article
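As a rough illustration of the evaluation protocol described above (not the authors' code), the sketch below estimates AUROC and AUPRC with 95% percentile-bootstrap confidence intervals from held-out test labels and model scores; the function name and resampling settings are assumptions.

```python
# Hypothetical sketch: bootstrap 95% CIs for AUROC/AUPRC on a held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def bootstrap_ci(y_true, y_score, metric, n_boot=2000, seed=0):
    """y_true, y_score: 1-D numpy arrays over test subjects."""
    rng = np.random.default_rng(seed)
    stats, n = [], len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        if len(np.unique(y_true[idx])) < 2:  # skip resamples missing a class
            continue
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return metric(y_true, y_score), (lo, hi)

# auroc, auroc_ci = bootstrap_ci(y_true, y_score, roc_auc_score)
# auprc, auprc_ci = bootstrap_ci(y_true, y_score, average_precision_score)
```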

12 pages, 2454 KB  
Article
CLIP-Guided Clustering with Archetype-Based Similarity and Hybrid Segmentation for Robust Indoor Scene Classification
by Emi Yuda, Naoya Morikawa, Itaru Kaneko and Daisuke Hirahara
Electronics 2025, 14(23), 4571; https://doi.org/10.3390/electronics14234571 - 22 Nov 2025
Viewed by 404
Abstract
Accurate classification of indoor scenes remains a challenging problem in computer vision, particularly when datasets contain diverse room types and varying levels of contamination. We propose a novel method, CLIP-Guided Clustering, which introduces archetype-based similarity as a semantic feature space. Instead of directly using raw image embeddings, we compute similarity scores between each image and predefined textual archetypes (e.g., “clean room,” “cluttered room with dry debris,” “moldy bathroom,” “room with workers”). These scores form low-dimensional semantic vectors that enable interpretable clustering via K-Means. To evaluate clustering robustness, we systematically explored UMAP parameter configurations (n_neighbors, min_dist) and identified the optimal setting (n_neighbors = 5, min_dist = 0.0) with the highest silhouette score (0.631). This objective analysis confirms that archetype-based representations improve separability compared with conventional visual embeddings. In addition, we developed a hybrid segmentation pipeline combining the Segment Anything Model (SAM), DeepLabV3, and pre-processing techniques to accurately extract floor regions even in low-quality or cluttered images. Together, these methods provide a principled framework for semantic classification and segmentation of residential environments. Beyond application-specific domains, our results demonstrate that combining vision–language models with segmentation networks offers a generalizable strategy for interpretable and robust scene understanding. Full article
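A minimal sketch of the archetype-based similarity idea above, assuming CLIP image embeddings and text-archetype embeddings have already been computed and L2-normalized with any CLIP implementation; the archetype prompts follow the examples quoted in the abstract, and the cluster count is an assumption.

```python
# Archetype-similarity features + K-Means clustering (illustrative sketch only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

ARCHETYPES = ["clean room", "cluttered room with dry debris",
              "moldy bathroom", "room with workers"]

def archetype_features(img_emb, txt_emb):
    """Cosine similarity of each image to each archetype -> low-dimensional semantic vector."""
    return img_emb @ txt_emb.T                     # (n_images, n_archetypes)

def cluster_scenes(img_emb, txt_emb, k=4, seed=0):
    feats = archetype_features(img_emb, txt_emb)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(feats)
    return labels, silhouette_score(feats, labels)

# img_emb: (N, 512) normalized CLIP image embeddings; txt_emb: (4, 512) prompt embeddings.
```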

29 pages, 13089 KB  
Article
A Class-Aware Unsupervised Domain Adaptation Framework for Cross-Continental Crop Classification with Sentinel-2 Time Series
by Shuang Li, Li Liu, Jinjie Huo, Shengyang Li, Yue Yin and Yonggang Ma
Remote Sens. 2025, 17(22), 3762; https://doi.org/10.3390/rs17223762 - 19 Nov 2025
Viewed by 690
Abstract
Accurate and large-scale crop mapping is crucial for global food security, yet its performance is often hindered by domain shift when models trained in one region are applied to another. This is particularly challenging in cross-continental scenarios where variations in climate, soil, and farming systems are significant. To address this, we propose PLCM (PSAE-LTAE + Class-aware MMD), an unsupervised domain adaptation (UDA) framework for crop classification using Sentinel-2 satellite image time series. The framework features two key innovations: (1) a Pixel-Set Attention Encoder (PSAE), which intelligently aggregates spatial features within parcels by assigning weights to individual pixels, enhancing robustness against noise and intra-parcel heterogeneity; and (2) a class-aware Maximum Mean Discrepancy (MMD) loss function that performs fine-grained feature alignment within each crop category, effectively mitigating negative transfer caused by domain shift while preserving class-discriminative information. We validated our framework on a challenging cross-continental, cross-year task, transferring a model trained on data from the source domain in the United States (2022) to an unlabeled target domain in Wensu County, Xinjiang, China (2024). The results demonstrate the robust performance of PLCM. While achieving a competitive overall Macro F1-score of 96.56%, comparable to other state-of-the-art UDA methods, its primary advantage is revealed in a granular per-class analysis. This analysis shows that PLCM provides a more balanced performance by particularly excelling at identifying difficult-to-adapt categories (e.g., Cotton), demonstrating practical robustness. Ablation studies further confirmed that both the PSAE module and the class-aware MMD strategy were critical to this performance gain. Our study shows that the PLCM framework can effectively learn domain-invariant and class-discriminative features, offering an effective and robust solution for high-accuracy, large-scale crop mapping across diverse geographical regions. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Food Security)
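The class-aware MMD term described above can be sketched as below: MMD is computed per crop class between source features and the target features assigned to that class by pseudo-labels. The Gaussian-kernel bandwidth, the argmax pseudo-labeling rule, and all names are assumptions, not the paper's exact implementation.

```python
# Hedged PyTorch sketch of a class-aware Maximum Mean Discrepancy penalty.
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD estimate with a Gaussian kernel between two feature sets."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def class_aware_mmd(src_feat, src_lab, tgt_feat, tgt_logits, n_classes, sigma=1.0):
    tgt_pseudo = tgt_logits.argmax(dim=1)            # pseudo-labels for the unlabeled target
    losses = []
    for c in range(n_classes):
        s, t = src_feat[src_lab == c], tgt_feat[tgt_pseudo == c]
        if len(s) > 1 and len(t) > 1:                # align only classes seen in both domains
            losses.append(gaussian_mmd(s, t, sigma))
    return torch.stack(losses).mean() if losses else src_feat.new_zeros(())

# total_loss = cls_loss + lambda_mmd * class_aware_mmd(fs, ys, ft, logits_t, n_classes)
```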

26 pages, 5137 KB  
Article
Analyzing Surface Spectral Signature Shifts in Fire-Affected Areas of Elko County Nevada
by Ibtihaj Ahmad and Haroon Stephen
Fire 2025, 8(11), 429; https://doi.org/10.3390/fire8110429 - 31 Oct 2025
Viewed by 672
Abstract
This study investigates post-fire vegetation transitions and spectral responses in the Snowstorm Fire (2017) and South Sugarloaf Fire (2018) in Nevada using Landsat 8 Operational Land Imager (OLI) surface reflectance imagery and unsupervised ISODATA classification. By comparing pre-fire and post-fire conditions, we have assessed changes in vegetation composition, spectral signatures, and the emergence of novel land cover types. The results revealed widespread conversion of shrubland and conifer-dominated systems to herbaceous cover with significant reductions in near-infrared reflectance and elevated shortwave infrared responses, indicative of vegetation loss and surface alteration. In the South Sugarloaf Fire, three new spectral classes emerged post-fire, representing ash-dominated, charred, and sparsely vegetated conditions. A similar new class emerged in Snowstorm, highlighting the spatial heterogeneity of fire effects. Class stability analysis confirmed low persistence of shrub and conifer types, with grassland and herbaceous classes showing dominant post-fire expansion. The findings highlight the ecological consequences of high-severity fire in sagebrush ecosystems, including reduced resilience, increased invasion risk, and type conversion. Unsupervised classification and spectral signature analysis proved effective for capturing post-fire landscape change and can support more accurate, site-specific post-fire assessment and restoration planning. Full article
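For illustration only: scikit-learn has no ISODATA implementation, so K-means stands in for the unsupervised classifier in the sketch below, applied to a Landsat 8 OLI surface-reflectance band stack; class counts and helper names are assumptions.

```python
# Unsupervised classification of a band stack and per-class spectral signatures (sketch).
import numpy as np
from sklearn.cluster import KMeans

def classify_scene(band_stack, n_classes=8, seed=0):
    """band_stack: (rows, cols, bands) surface-reflectance array -> class label per pixel."""
    r, c, b = band_stack.shape
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(band_stack.reshape(-1, b))
    return labels.reshape(r, c)

def class_signatures(band_stack, labels):
    """Mean spectral signature per class, e.g. to compare NIR/SWIR before and after fire."""
    X = band_stack.reshape(-1, band_stack.shape[-1])
    return {k: X[labels.ravel() == k].mean(axis=0) for k in np.unique(labels)}
```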

18 pages, 827 KB  
Article
Beyond Fixed Thresholds: Cluster-Derived MRI Boundaries Improve Assessment of Crohn’s Disease Activity
by Jelena Pilipovic Grubor, Sanja Stojanovic, Dijana Niciforovic, Marijana Basta Nikolic, Zoran D. Jelicic, Mirna N. Radovic and Jelena Ostojic
J. Clin. Med. 2025, 14(21), 7523; https://doi.org/10.3390/jcm14217523 - 23 Oct 2025
Viewed by 530
Abstract
Background/Objectives: Crohn’s disease (CD) requires precise, noninvasive monitoring to guide therapy and support treat-to-target management. Magnetic resonance enterography (MRE), particularly diffusion-weighted imaging (DWI), is the preferred cross-sectional technique for assessing small-bowel inflammation. Indices such as the Magnetic Resonance Index of Activity (MaRIA) and its diffusion-weighted variant (DWI MaRIA) are widely used for grading disease activity. This study evaluated whether unsupervised clustering of MRI-derived features can complement these indices by providing more coherent and biologically grounded stratification of disease activity. Materials and Methods: Fifty patients with histologically confirmed CD underwent 1.5 T MRE. Of 349 bowel segments, 84 were pathological and classified using literature-based thresholds (MaRIA, DWI MaRIA) and unsupervised clustering. Differences between inactive, active, and severe disease were analyzed using multivariate analysis of variance (MANOVA), analysis of variance (ANOVA), and t-tests. Mahalanobis distances were calculated to quantify and compare separation between categories. Results: Using MaRIA thresholds, 5, 16, and 63 segments were classified as inactive, active, and severe (Mahalanobis distances 2.60, 4.95, 4.12). Clustering redistributed them into 22, 37, and 25 (9.26, 24.22, 15.27). For DWI MaRIA, 21, 14, and 49 segments were identified under thresholds (3.59, 5.72, 2.85) versus 21, 37, and 26 with clustering (7.40, 16.35, 9.41). Wall thickness dominated cluster-derived separation, supported by diffusion metrics and the apparent diffusion coefficient (ADC). Conclusions: Cluster-derived classification yielded clearer and more biologically consistent separation of disease-activity groups than fixed thresholds, emphasizing its potential to refine boundary definition, enhance MRI-based assessment, and inform future AI-driven diagnostic modeling. Full article
(This article belongs to the Section Gastroenterology & Hepatopancreatobiliary Medicine)
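A minimal sketch, with assumed feature names and grouping, of the comparison described above: cluster MRI-derived segment features into three activity groups and quantify their separation with pairwise Mahalanobis distances between group means, using the inverse of the overall feature covariance.

```python
# Cluster-derived activity groups and Mahalanobis separation (illustrative sketch).
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.cluster import KMeans

def group_separation(X, n_groups=3, seed=0):
    """X: (n_segments, n_features), e.g. wall thickness, ADC, DWI-derived metrics."""
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit_predict(X)
    vi = np.linalg.inv(np.cov(X, rowvar=False))       # inverse of overall feature covariance
    means = [X[labels == g].mean(axis=0) for g in range(n_groups)]
    dist = {(i, j): mahalanobis(means[i], means[j], vi)
            for i in range(n_groups) for j in range(i + 1, n_groups)}
    return labels, dist
```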

22 pages, 2618 KB  
Article
Improving Coronary Artery Disease Diagnosis in Cardiac MRI with Self-Supervised Learning
by Usman Khalid, Mehmet Kaya and Reda Alhajj
Diagnostics 2025, 15(20), 2618; https://doi.org/10.3390/diagnostics15202618 - 17 Oct 2025
Viewed by 517
Abstract
Background/Objectives: The heavy dependence on data annotation, the scarcity of labeled data, and the substantial cost of annotation, especially in healthcare, have constrained the efficacy of conventional supervised learning methodologies. Self-supervised learning (SSL) has emerged as a viable option by utilizing unlabeled data via pretext tasks. This paper examines the efficacy of supervised (pseudo-labels) and unsupervised (no pseudo-labels) pretext models in self-supervised learning for the classification of coronary artery disease (CAD) from cardiac MRI data, highlighting performance under data scarcity, out-of-distribution (OOD) conditions, and adversarial attack. Methods: Two datasets, referred to as CAD Cardiac MRI and Ohio State Cardiac MRI Raw Data (OCMR), were used to establish three pretext tasks: (i) supervised Gaussian noise addition, (ii) supervised image rotation, and (iii) unsupervised generative reconstruction. These models were evaluated against the Simple Framework for Contrastive Learning (SimCLR), a widely used unsupervised contrastive learning framework. Performance was assessed under three data reduction scenarios (20%, 50%, 70%), out-of-distribution conditions, and adversarial attacks using FGSM and PGD, alongside other key evaluation criteria. Results: The Gaussian noise-based model attained the highest validation accuracy (up to 99.9%) across all data reduction scenarios and remained superior under adversarial perturbations and on all other evaluation measures. The rotation-based model was considerably susceptible to attacks and lost accuracy as data were reduced. The generative reconstruction model showed moderate efficacy with minimal performance decline. SimCLR performed strongly under standard conditions but showed inferior robustness relative to the Gaussian noise model. Conclusions: Carefully designed self-supervised pretext tasks show potential in cardiac MRI classification, delivering dependable performance and generalizability despite limited data. These initial findings underscore SSL's capacity to produce reliable models for safety-critical healthcare applications and motivate further validation across varied datasets and clinical environments. Full article
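A hedged sketch of one pretext task named above (supervised Gaussian noise addition): unlabeled MRI slices receive noise at one of several levels and the pseudo-label is the noise level the encoder must predict. The noise levels and the placeholder CNN below are assumptions, not the paper's model.

```python
# Gaussian-noise pretext task: predict which noise level was added (sketch only).
import torch
import torch.nn as nn

NOISE_LEVELS = (0.0, 0.05, 0.1, 0.2)  # assumed sigmas

def make_pretext_batch(images):
    """images: (B, 1, H, W) in [0, 1]; returns noisy images and noise-level pseudo-labels."""
    labels = torch.randint(len(NOISE_LEVELS), (images.size(0),))
    sigmas = torch.tensor(NOISE_LEVELS)[labels].view(-1, 1, 1, 1)
    return (images + sigmas * torch.randn_like(images)).clamp(0, 1), labels

encoder = nn.Sequential(  # placeholder encoder, not the paper's architecture
    nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, len(NOISE_LEVELS)))

# Pretext step:  noisy, y = make_pretext_batch(batch)
#                loss = nn.functional.cross_entropy(encoder(noisy), y)
# After pretraining, the convolutional trunk is fine-tuned on labeled CAD cases.
```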

31 pages, 1305 KB  
Review
Artificial Intelligence in Cardiac Electrophysiology: A Clinically Oriented Review with Engineering Primers
by Giovanni Canino, Assunta Di Costanzo, Nadia Salerno, Isabella Leo, Mario Cannataro, Pietro Hiram Guzzi, Pierangelo Veltri, Sabato Sorrentino, Salvatore De Rosa and Daniele Torella
Bioengineering 2025, 12(10), 1102; https://doi.org/10.3390/bioengineering12101102 - 13 Oct 2025
Cited by 2 | Viewed by 4358
Abstract
Artificial intelligence (AI) is transforming cardiac electrophysiology across the entire care pathway, from arrhythmia detection on 12-lead electrocardiograms (ECGs) and wearables to the guidance of catheter ablation procedures, through to outcome prediction and therapeutic personalization. End-to-end deep learning (DL) models have achieved cardiologist-level performance in rhythm classification and prognostic estimation on standard ECGs, with a reported arrhythmia classification accuracy of ≥95% and an atrial fibrillation detection sensitivity/specificity of ≥96%. The application of AI to wearable devices enables population-scale screening and digital triage pathways. In the electrophysiology (EP) laboratory, AI standardizes the interpretation of intracardiac electrograms (EGMs) and supports target selection, and machine learning (ML)-guided strategies have improved ablation outcomes. In patients with cardiac implantable electronic devices (CIEDs), remote monitoring feeds multiparametric models capable of anticipating heart-failure decompensation and arrhythmic risk. This review outlines the principal modeling paradigms of supervised learning (regression models, support vector machines, neural networks, and random forests) and unsupervised learning (clustering, dimensionality reduction, association rule learning) and examines emerging technologies in electrophysiology (digital twins, physics-informed neural networks, DL for imaging, graph neural networks, and on-device AI). However, major challenges remain for clinical translation, including an external validation rate below 30% and workflow integration below 20%, which represent core obstacles to real-world adoption. A joint clinical engineering roadmap is essential to translate prototypes into reliable, bedside tools. Full article
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)

24 pages, 1698 KB  
Article
Deep Learning-Based Classification of Transformer Inrush and Fault Currents Using a Hybrid Self-Organizing Map and CNN Model
by Heungseok Lee, Sang-Hee Kang and Soon-Ryul Nam
Energies 2025, 18(20), 5351; https://doi.org/10.3390/en18205351 - 11 Oct 2025
Viewed by 529
Abstract
Accurate classification between magnetizing inrush currents and internal faults is essential for reliable transformer protection and stable power system operation. Because their transient waveforms are so similar, conventional differential protection and harmonic restraint techniques often fail under dynamic conditions. This study presents a two-stage classification model that combines a self-organizing map (SOM) and a convolutional neural network (CNN) to enhance robustness and accuracy in distinguishing between inrush currents and internal faults in power transformers. In the first stage, an unsupervised SOM identifies topologically structured event clusters without the need for labeled data or predefined thresholds. Seven features are extracted from differential current signals to form fixed-length input vectors. These vectors are projected onto a two-dimensional SOM grid to capture inrush and fault distributions. In the second stage, the SOM’s activation maps are converted to grayscale images and classified by a CNN, thereby merging the interpretability of clustering with the performance of deep learning. Simulation data from a 154 kV MATLAB/Simulink transformer model includes inrush, internal fault, and overlapping events. Results show that after one cycle following fault inception, the proposed method improves accuracy (AC), precision (PR), recall (RC), and F1-score (F1s) by up to 3% compared with a conventional CNN model, demonstrating its suitability for real-time transformer protection. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Electrical Power Systems)
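An illustrative sketch of the first stage described above: a SOM is trained on 7-dimensional feature vectors extracted from differential currents, and its best-matching-unit hit map is rendered as a small grayscale image for the downstream CNN. Feature extraction is omitted; minisom is an assumed third-party dependency, and the grid size is an assumption.

```python
# SOM activation (BMU hit) map rendered as a grayscale image (sketch only).
import numpy as np
from minisom import MiniSom  # assumed available: pip install minisom

def som_activation_image(features, grid=(10, 10), iters=1000, seed=0):
    """features: (n_windows, 7) event feature vectors -> grayscale hit map of shape `grid`."""
    som = MiniSom(grid[0], grid[1], features.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(features, iters)
    hits = np.zeros(grid)
    for x in features:
        hits[som.winner(x)] += 1                       # accumulate best-matching-unit hits
    return (255 * hits / hits.max()).astype(np.uint8)  # image input for the CNN stage
```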

12 pages, 1926 KB  
Article
Tracking False Lumen Remodeling with AI: A Variational Autoencoder Approach After Frozen Elephant Trunk Surgery
by Anja Osswald, Sharaf-Eldin Shehada, Matthias Thielmann, Alan B. Lumsden, Payam Akhyari and Christof Karmonik
J. Pers. Med. 2025, 15(10), 486; https://doi.org/10.3390/jpm15100486 - 11 Oct 2025
Viewed by 451
Abstract
Objective: False lumen (FL) thrombosis plays a key role in aortic remodeling after Frozen Elephant Trunk (FET) surgery, yet current imaging assessments are limited to categorical classifications. This study aimed to evaluate an unsupervised artificial intelligence (AI) algorithm based on a variational autoencoder (VAE) for automated, continuous quantification of FL thrombosis using serial computed tomography angiography (CTA). Methods: In this retrospective study, a VAE model was applied to axial CTA slices from 30 patients with aortic dissection who underwent FET surgery. The model encoded each image into a structured latent space, from which a continuous “thrombus score” was developed and derived to quantify the extent of FL thrombosis. Thrombus scores were compared between postoperative and follow-up scans to assess individual remodeling trajectories. Results: The VAE successfully encoded anatomical features of the false lumen into a structured latent space, enabling unsupervised classification of thrombus states. A continuous thrombus score was derived from this space, allowing slice-by-slice quantification of thrombus burden across the aorta. The algorithm demonstrated robust reconstruction accuracy and consistent separation of fully patent, partially thrombosed, and completely thrombosed lumen states without the need for manual annotation. Across the cohort, 50% of patients demonstrated an increase in thrombus score over time, 40% a decrease, and 10% remained unchanged. Despite these individual differences, no statistically significant change in overall thrombus burden was observed at the group level (p = 0.82), emphasizing the importance of individualized longitudinal assessment. Conclusions: The VAE-based method enables reproducible, annotation-free quantification of FL thrombosis and captures patient-specific remodeling patterns. This approach may enhance post-FET surveillance and supports the integration of AI-driven tools into personalized aortic imaging workflows. Full article
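A hedged sketch of how a continuous score can be read out of a VAE latent space, in the spirit of the thrombus score above: encode axial CTA slices and project each latent code onto the axis between reference "patent" and "thrombosed" centroids. The encoder layout and the scoring rule are assumptions, not the authors' implementation.

```python
# VAE encoder and latent-axis score for slice-by-slice thrombus quantification (sketch).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                  nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.mu = nn.Linear(32 * 16, latent_dim)
        self.logvar = nn.Linear(32 * 16, latent_dim)

    def forward(self, x):                       # x: (B, 1, H, W) axial CTA slices
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

def thrombus_score(mu, patent_centroid, thrombosed_centroid):
    """Project latent means onto the patent->thrombosed axis; ~0 patent, ~1 fully thrombosed."""
    axis = thrombosed_centroid - patent_centroid
    t = (mu - patent_centroid) @ axis / axis.dot(axis)
    return t.clamp(0, 1)
```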

11 pages, 6412 KB  
Article
High-Throughput Evaluation of Mechanical Exfoliation Using Optical Classification of Two-Dimensional Materials
by Anthony Gasbarro, Yong-Sung D. Masuda and Victor M. Lubecke
Micromachines 2025, 16(10), 1084; https://doi.org/10.3390/mi16101084 - 25 Sep 2025
Viewed by 791
Abstract
Mechanical exfoliation remains the most common method for producing high-quality two-dimensional (2D) materials, but its inherently low yield requires screening large numbers of samples to identify usable flakes. Efficient optimization of the exfoliation process demands scalable methods to analyze deposited material across extensive datasets. While machine learning clustering techniques have demonstrated ~95% accuracy in classifying 2D material thicknesses from optical microscopy images, current tools are limited by slow processing speeds and heavy reliance on manual user input. This work presents an open-source, GPU-accelerated software platform that builds upon existing classification methods to enable high-throughput analysis of 2D material samples. By leveraging parallel computation, optimizing core algorithms, and automating preprocessing steps, the software can quantify flake coverage and thickness across uncompressed optical images at scale. Benchmark comparisons show that this implementation processes over 200× more pixel data with a 60× reduction in processing time relative to the original software. Specifically, a full dataset of 2916 uncompressed images can be classified in 35 min, compared to an estimated 32 h required by the baseline method using compressed images. This platform enables rapid evaluation of exfoliation results across multiple trials, providing a practical tool for optimizing deposition techniques and improving the yield of high-quality 2D materials. Full article
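For illustration only (not the benchmarked software): the kind of per-image step such a pipeline parallelizes can be sketched by clustering per-pixel optical contrast against the substrate to separate background from flake-thickness classes; the class count and helper names are assumptions.

```python
# Per-pixel contrast clustering for flake coverage and thickness classes (sketch).
import numpy as np
from sklearn.cluster import KMeans

def flake_thickness_map(image, substrate_rgb, n_classes=4, seed=0):
    """image: (H, W, 3) float RGB micrograph; substrate_rgb: (3,) mean background color."""
    contrast = (image - substrate_rgb) / substrate_rgb      # per-channel optical contrast
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(contrast.reshape(-1, 3))
    label_map = labels.reshape(image.shape[:2])
    coverage = {k: float((label_map == k).mean()) for k in range(n_classes)}
    return label_map, coverage                               # thickness classes + area fractions
```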

37 pages, 1134 KB  
Article
SOMTreeNet: A Hybrid Topological Neural Model Combining Self-Organizing Maps and BIRCH for Structured Learning
by Yunus Doğan
Mathematics 2025, 13(18), 2958; https://doi.org/10.3390/math13182958 - 12 Sep 2025
Viewed by 867
Abstract
This study introduces SOMTreeNet, a novel hybrid neural model that integrates Self-Organizing Maps (SOMs) with BIRCH-inspired clustering features to address structured learning in a scalable and interpretable manner. Unlike conventional deep learning models, SOMTreeNet is designed with a recursive and modular topology that supports both supervised and unsupervised learning, enabling tasks such as classification, regression, clustering, anomaly detection, and time-series analysis. Extensive experiments were conducted using various publicly available datasets across five analytical domains: classification, regression, clustering, time-series forecasting, and image classification. These datasets cover heterogeneous structures including tabular, temporal, and visual data, allowing for a robust evaluation of the model’s generalizability. Experimental results demonstrate that SOMTreeNet consistently achieves competitive or superior performance compared to traditional machine learning and deep learning methods while maintaining a high degree of interpretability and adaptability. Its biologically inspired hierarchical structure facilitates transparent decision-making and dynamic model growth, making it particularly suitable for real-world applications that demand both accuracy and explainability. Overall, SOMTreeNet offers a versatile framework for learning from complex data while preserving the transparency and modularity often lacking in black-box models. Full article
(This article belongs to the Special Issue New Advances in Data Analytics and Mining)
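A speculative, minimal sketch of a recursive SOM tree in the spirit of the model above: each node trains a small SOM, routes samples to best-matching units, and spawns a child node for cells that collect too many samples (a BIRCH-like splitting rule). Grid size, capacity, depth limit, and all names are assumptions, not the paper's code; minisom is an assumed dependency.

```python
# Recursive SOM tree with capacity-based splitting (illustrative sketch only).
import numpy as np
from minisom import MiniSom

class SOMTreeNode:
    def __init__(self, grid=(2, 2), capacity=50, depth=0, max_depth=3):
        self.grid, self.capacity = grid, capacity
        self.depth, self.max_depth = depth, max_depth
        self.som, self.children = None, {}

    def fit(self, X):
        self.som = MiniSom(self.grid[0], self.grid[1], X.shape[1],
                           sigma=0.8, learning_rate=0.5, random_seed=0)
        self.som.train_random(X, 500)
        buckets = {}
        for x in X:                                   # route samples to best-matching units
            buckets.setdefault(self.som.winner(x), []).append(x)
        for cell, items in buckets.items():           # split overfull cells recursively
            if len(items) > self.capacity and self.depth < self.max_depth:
                child = SOMTreeNode(self.grid, self.capacity, self.depth + 1, self.max_depth)
                self.children[cell] = child.fit(np.array(items))
        return self

    def predict_path(self, x):
        """Sequence of BMU coordinates from root to leaf for one sample."""
        cell = self.som.winner(x)
        return [cell] + (self.children[cell].predict_path(x) if cell in self.children else [])

# tree = SOMTreeNode().fit(np.random.rand(500, 4)); tree.predict_path(np.random.rand(4))
```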

19 pages, 38450 KB  
Article
Color Normalization in Breast Cancer Immunohistochemistry Images Based on Sparse Stain Separation and Self-Sparse Fuzzy Clustering
by Attasuntorn Traisuwan, Somchai Limsiroratana, Pornchai Phukpattaranont, Phiraphat Sutthimat and Pichaya Tandayya
Diagnostics 2025, 15(18), 2316; https://doi.org/10.3390/diagnostics15182316 - 12 Sep 2025
Viewed by 790
Abstract
Background and Objective: Color normalization of breast cancer immunohistochemistry (IHC)-stained images shifts the color distribution of poorly stained images so that they are easier for pathologists to interpret. This in turn affects the Allred score that pathologists use to estimate drug dosage for treating breast cancer patients. Methods: A new color normalization technique based on sparse stain separation and self-sparse fuzzy clustering is proposed. Results: Quaternion structural similarity was used to measure the quality of the normalization algorithm. Our technique yields a lower structural similarity score than other techniques, while its color distribution similarity is closer to the target. We applied automated, unsupervised nuclei classification with Automatic Color Deconvolution (ACD) to test the color features extracted from normalized images. Conclusions: The classification result from our unsupervised nuclei classification with ACD is similar to that of other normalization methods, but the normalized images are easier for pathologists to interpret. Full article
(This article belongs to the Special Issue Medical Images Segmentation and Diagnosis)

5 pages, 1265 KB  
Abstract
Cover Thickness Prediction for Steel Inside Concrete by Sub-Terahertz Wave Using Deep Learning
by Ken Koyama, Tomoya Nishiwaki and Katsufumi Hashimoto
Proceedings 2025, 129(1), 40; https://doi.org/10.3390/proceedings2025129040 - 12 Sep 2025
Viewed by 393
Abstract
Deep learning techniques are increasingly being incorporated into the inspection and maintenance of social infrastructure. In this study, we show that when supervised deep learning was applied to imaging data obtained from sub-THz waves, the average recall exceeded 80% for all cover thicknesses of steel plate inside concrete and more than 90% for rebar inside concrete with a cover thickness of up to 20 mm. Unsupervised deep learning enabled the classification for both steel plate and rebar, even at large cover thicknesses. These results are expected to improve the exploration depth, which has been limited in previous studies. Full article

23 pages, 3488 KB  
Article
Unsupervised Hyperspectral Band Selection Using Spectral–Spatial Iterative Greedy Algorithm
by Xin Yang and Wenhong Wang
Sensors 2025, 25(18), 5638; https://doi.org/10.3390/s25185638 - 10 Sep 2025
Viewed by 843
Abstract
Hyperspectral band selection (BS) is an important technique to reduce data dimensionality for the classification applications of hyperspectral remote sensing images (HSIs). Recently, searching-based BS methods have received increasing attention for their ability to select the best subset of bands while preserving the essential information of the original data. However, existing searching-based BS methods neglect effective exploitation of the spatial and spectral prior information inherent in the data, thus limiting their performance. To address this problem, in this study, a novel unsupervised BS method called Spectral–Spatial Iterative Greedy Algorithm (SSIGA) is proposed. Specifically, to facilitate efficient local search using spectral information, SSIGA conducts clustering on all the bands by employing a K-means clustering method with balanced cluster size constraints and constructs a K-nearest neighbor graph for each cluster. Based on the nearest neighbor graphs, SSIGA can effectively explore the neighborhood solutions in local search. In addition, to efficiently evaluate the discriminability and information redundancy of the solution given by SSIGA using the spatial and spectral information of HSIs, we designed an effective objective function for SSIGA. The value of the objective function is derived by calculating the Fisher score for each band in the solution based on the results of the superpixel segmentation performed on the target HSI, as well as by computing the average information entropy and mutual information of the bands in the solution. Experimental results on three publicly available real HSI datasets demonstrate that the SSIG algorithm achieves superior performance compared to several state-of-the-art methods. Full article
(This article belongs to the Section Sensing and Imaging)
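A hedged sketch of two ingredients described above: K-means grouping of spectral bands (each band treated as a vector of its pixel values) and an information-based score for a candidate band subset combining average entropy (information) and average pairwise mutual information (redundancy). The superpixel-based Fisher-score term and the iterative greedy search itself are not reproduced; bin counts and names are assumptions.

```python
# Band clustering and an entropy/mutual-information subset score (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

def cluster_bands(cube, n_clusters=10, seed=0):
    """cube: (rows, cols, bands) HSI; returns a cluster id per band."""
    bands = cube.reshape(-1, cube.shape[-1]).T                # (bands, pixels)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(bands)

def _entropy(d):
    p = np.bincount(d) / d.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def subset_score(cube, band_idx, bins=32):
    """Average band entropy minus average pairwise mutual information for a band subset."""
    X = cube.reshape(-1, cube.shape[-1])[:, band_idx]
    D = [np.digitize(X[:, i], np.histogram_bin_edges(X[:, i], bins)[1:-1])
         for i in range(X.shape[1])]
    ent = np.mean([_entropy(d) for d in D])
    mi = np.mean([mutual_info_score(D[i], D[j])
                  for i in range(len(D)) for j in range(i + 1, len(D))]) if len(D) > 1 else 0.0
    return ent - mi
```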

21 pages, 3192 KB  
Article
Unsupervised Structural Defect Classification via Real-Time and Noise-Robust Method in Smartphone Small Modules
by Sehun Lee, Taehoon Kim, Sookyun Kim, Junho Ahn and Namgi Kim
Electronics 2025, 14(17), 3455; https://doi.org/10.3390/electronics14173455 - 29 Aug 2025
Viewed by 864
Abstract
Demand for OIS (Optical Image Stabilization) actuator modules, developed for shake correction technologies in industries such as smartphones, drones, IoT, and AR/VR, is increasing. To enable real-time and precise inspection of these modules, an AI algorithm that maximizes defect detection accuracy is required. This study proposes an unsupervised learning-based algorithm that is robust to noise and capable of real-time processing for accurate defect classification of OIS actuators in a smart factory environment. The proposed algorithm performs noise-reduction preprocessing, considering the sensitivity of small components and lighting imbalances, and defines three dynamic Regions of Interest (ROIs) to address positional deviations. A customized AutoEncoder (AE) is trained for each ROI, and defect classification is conducted based on reconstruction errors, followed by a final comprehensive decision. Experimental results show that the algorithm achieves an accuracy of 0.9944 and an F1 score of 0.9971 using only a camera without the need for expensive sensors. Furthermore, it demonstrates an average processing time of 2.79 ms per module, ensuring real-time capability. This study contributes to precise quality inspection in smart factories by proposing a robust and scalable unsupervised inspection algorithm. Full article
(This article belongs to the Special Issue Advances in Intelligent Systems and Networks, 2nd Edition)
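A minimal sketch of the decision rule described above, using a placeholder convolutional autoencoder: one AE per region of interest (ROI) is trained on normal parts only, and a module is flagged as defective if any ROI's reconstruction error exceeds a threshold calibrated on normal samples. The architecture and thresholding details are assumptions, not the authors' implementation.

```python
# Per-ROI autoencoder reconstruction error and final defect decision (sketch only).
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):                      # x: (B, 1, H, W) grayscale ROI crop
        return self.dec(self.enc(x))

def roi_errors(models, rois):
    """models/rois: dicts keyed by ROI name; returns mean reconstruction error per ROI."""
    return {name: torch.mean((models[name](roi) - roi) ** 2).item()
            for name, roi in rois.items()}

def is_defective(errors, thresholds):
    """Comprehensive decision: defective if any ROI exceeds its calibrated error threshold."""
    return any(errors[name] > thresholds[name] for name in errors)
```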
