Search Results (297)

Search Parameters:
Keywords = unsupervised change detection

26 pages, 1653 KB  
Article
Hybrid Deep Learning Framework with Cat Swarm Optimization for Cloud-Based Financial Fraud Detection
by Yong Qu and Zengtao Wang
Mathematics 2026, 14(8), 1355; https://doi.org/10.3390/math14081355 - 17 Apr 2026
Abstract
Financial fraud is still one of the most important threats to the financial industry, causing enormous economic losses and mounting difficulties for conventional fraud detection systems. The systems tend to face challenges in dealing with the rising amount of transactional data, the problem of class imbalance, and the continually changing nature of fraudulent activity. In order to solve these problems, in this research a cloud hybrid framework for detecting fraud using Long Short-Term Memory (LSTM) networks, Autoencoders, and Cat Swarm Optimization (CSO) is suggested. The purpose of the suggested framework is to provide improved detection performance and flexibility on a benchmark financial dataset, with a design intended to support scalability in real-time applications. The framework uses the Credit Card Fraud Detection Dataset from Kaggle, which consists primarily of numerical features, including anonymized variables (V1–V28), along with time and amount. The LSTM networks learn the sequential relationships of transactions, while Autoencoders learn to detect anomalies in the data unsupervised. CSO is used to optimize key hyperparameters of the hybrid model, including the learning rate (0.0001–0.01), batch size (32–128), number of LSTM layers (1–3), number of hidden units per layer (16–128), dropout rate (0.1–0.5), and fusion weights (0–1 for each weight, with the sum constrained to 1) between the LSTM and Autoencoder outputs. In addition, CSO is applied for feature subset selection and threshold tuning to further enhance model performance. Preprocessing is performed on the data, including normalization and feature scaling prior to model training. The suggested framework has a 96.2% accuracy, 94.6% precision, 97.9% recall, 96.2% F1-score, and 0.97 AUC-ROC, showing improved performance compared to CNN-based and LSTM-CNN models under the evaluated conditions. 
However, since repeated experiments were not conducted to verify robustness, the results should be interpreted as indicative rather than definitive. The framework exhibits competitive fraud detection performance on the evaluated benchmark dataset, particularly in handling class imbalance. In a simulated environment configured to mimic cloud-like conditions, it achieved inference latency between 15 and 30 ms, GPU utilization between 60% and 70%, and a data transfer volume of approximately 1.5 GB per day, suggesting its suitability for deployment in cloud-based fraud detection systems. The proposed framework demonstrates the potential of integrating sequential modeling, anomaly detection, and metaheuristic optimization within a unified, cloud-oriented architecture, providing a more comprehensive approach than conventional hybrid models. Full article
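The score-fusion step this abstract describes can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the fixed fusion weights, and the min-max normalization are assumptions standing in for the CSO-tuned quantities, not the authors' implementation.

```python
import numpy as np

def fused_fraud_score(lstm_prob, recon_error, w=(0.6, 0.4)):
    """Convex combination of an LSTM fraud probability and a normalized
    autoencoder reconstruction error. The weights sum to 1; in the paper
    they (and the decision threshold) are tuned by Cat Swarm Optimization,
    whereas here they are fixed for illustration."""
    assert abs(sum(w) - 1.0) < 1e-9
    e = np.asarray(recon_error, dtype=float)
    # Min-max normalize the reconstruction error so both signals share [0, 1].
    e = (e - e.min()) / (e.max() - e.min() + 1e-12)
    return w[0] * np.asarray(lstm_prob, dtype=float) + w[1] * e

def flag_fraud(scores, threshold=0.5):
    # A fixed decision threshold; the paper treats it as a tunable parameter.
    return scores >= threshold
```

A transaction with a high LSTM probability or a large reconstruction error receives a high fused score regardless of which branch detected it, which is the point of combining the two models.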
25 pages, 2303 KB  
Article
Modeling Structural Deviation in 10-K Risk Factors: A Semantic Anomaly Detection and Explainable AI Approach
by Fang Sun, Shuangjiang He, Ruiqi Wang, Lingyun Ke, Hongyu Shen and Qiuyue Liao
Risks 2026, 14(4), 87; https://doi.org/10.3390/risks14040087 - 13 Apr 2026
Viewed by 117
Abstract
This study presents an exploratory methodological framework for examining structural changes in regulatory risk disclosure using sentence embeddings, multivariate anomaly detection, and explainable artificial intelligence. Prior research typically relies on dictionary-based word frequencies, tone indicators, or topic proportions to quantify risk disclosure. While these measures capture disclosure intensity, they do not directly assess whether the internal semantic organization of risk narratives has shifted relative to historical patterns. We propose a structural semantic deviation framework that represents each company–year disclosure using thematic shares and embedding-based dispersion statistics and evaluates deviations from a historical baseline through unsupervised anomaly detection. Using Item 1A Risk Factors from Wells Fargo and JPMorgan Chase surrounding the 2016 regulatory shock as a focused two-firm case study, we show that traditional lexical metrics do not clearly isolate structural breaks, whereas embedding-based semantic trajectories reveal substantial narrative reconfiguration. Isolation-based modeling provides stable and discriminative anomaly scores in this setting, and SHAP decomposition highlights semantic distance, litigation emphasis, and disclosure contraction as important drivers of deviation in 2025 out-of-sample disclosures. These findings should be interpreted as methodological evidence rather than broad population-level claims. The study demonstrates how structural semantic modeling can be operationalized in regulatory disclosure analysis and provides a transparent framework that can be extended to larger panels and cross-industry settings in future research. Full article

34 pages, 19919 KB  
Article
Unsupervised Change Detection in Heterogeneous Remote Sensing Images via Dynamic Mask Guidance
by Paixin Xie, Gao Chen, Qingfeng Zhou, Xiaoyan Li and Jingwen Yan
Remote Sens. 2026, 18(7), 1022; https://doi.org/10.3390/rs18071022 - 29 Mar 2026
Viewed by 307
Abstract
Unsupervised change detection (CD) in heterogeneous remote sensing images is intrinsically difficult due to severe sensor-specific discrepancies. In the absence of ground truth, these discrepancies result in ambiguous optimization objectives that make it difficult for models to distinguish true land-cover changes from modality-driven pseudo-changes. To address these challenges, we propose MaskUCD, a novel unsupervised framework that reformulates heterogeneous CD as a dynamic mask-driven constraint scheduling problem. Fundamentally distinct from conventional strategies that enforce selective feature alignment, MaskUCD employs a spatially adaptive optimization mechanism. Specifically, the iteratively refined mask serves as a geometric reference to guide optimization. It enforces strict feature alignment in mask-unchanged regions to suppress modality-induced discrepancies, while simultaneously promoting feature divergence in mask-changed regions to emphasize semantic inconsistencies. In this way, explicit optimization objectives are established, together with an intrinsic interpretability constraint that guides the CD process. This strategy treats the mask as a structural guide for representation learning rather than a ground-truth reference, thereby avoiding error accumulation caused by directly using inaccurate masks as supervisory signals. To facilitate this optimization, we design a specialized asymmetric autoencoder with a hybrid encoder architecture, utilizing multi-scale frequency analysis and global context modeling to enhance feature representation capabilities. Consequently, this design enables the generation of refined and semantically consistent masks, which provide increasingly precise structural guidance, yielding converged and discriminative difference maps. Extensive experiments demonstrate that MaskUCD achieves state-of-the-art performance and superior robustness compared to existing advanced methods. Full article
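The alignment/divergence objective described above can be sketched as a masked, contrastive-style loss: pull bitemporal features together where the current mask says "unchanged", push them apart (up to a margin) where it says "changed". This is a simplified illustration of the idea, not the MaskUCD loss itself; `margin` and `lam` are assumed hyperparameters.

```python
import numpy as np

def mask_guided_loss(feat_a, feat_b, change_mask, lam=1.0, margin=1.0):
    """Dynamic-mask-guided objective (simplified sketch).
    feat_a, feat_b: per-pixel feature vectors from the two modalities.
    change_mask:    0 = currently believed unchanged, 1 = changed."""
    d = np.linalg.norm(feat_a - feat_b, axis=-1)  # per-pixel feature distance
    unchanged = change_mask == 0
    changed = ~unchanged
    # Strict alignment in mask-unchanged regions suppresses modality gaps.
    align = (d[unchanged] ** 2).mean() if unchanged.any() else 0.0
    # Hinge-style divergence in mask-changed regions emphasizes real changes.
    diverge = (np.maximum(0.0, margin - d[changed]) ** 2).mean() if changed.any() else 0.0
    return align + lam * diverge
```

Iterating between refining the mask from the difference map and re-optimizing this objective is what the abstract calls dynamic mask-driven constraint scheduling: the mask guides representation learning rather than serving as a pseudo ground truth.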

31 pages, 3479 KB  
Article
MV-S2CD: A Modality-Bridged Vision Foundation Model-Based Framework for Unsupervised Optical–SAR Change Detection
by Yongqi Shi, Ruopeng Yang, Changsheng Yin, Yiwei Lu, Bo Huang, Yongqi Wen, Yihao Zhong and Zhaoyang Gu
Remote Sens. 2026, 18(6), 931; https://doi.org/10.3390/rs18060931 - 19 Mar 2026
Viewed by 424
Abstract
Unsupervised change detection (UCD) from heterogeneous bitemporal optical–SAR imagery is challenging due to modality discrepancy, speckle/illumination variations, and the absence of change annotations. We propose MV-S2CD, a vision foundation model (VFM)-based framework that learns a modality-bridged latent space and produces dense change maps in a fully unsupervised manner. To robustly adapt pretrained VFM priors to heterogeneous inputs with minimal task-specific parameters, MV-S2CD incorporates lightweight modality-specific adapters and parameter-efficient low-rank adaptation (LoRA) in high-level layers. A shared projector embeds the two observations into a common geometry, enabling consistent cross-modal comparison and reducing sensor-induced domain shift. Building on the bridged representation, we design a dual-branch change reasoning module that decouples structure-sensitive cues from semantic-consistency cues: a structure pathway preserves fine boundaries and local variations, while a semantic-consistency pathway employs reliability gating and multi-scale context aggregation to suppress pseudo-changes caused by modality-specific nuisances and residual misregistration. For label-free optimization, we develop a difference-centric self-supervision scheme with two perturbation views and reliability-guided pseudo-partitioning, jointly enforcing pseudo-unchanged invariance, pseudo-changed/unchanged separability, and sparsity and edge-preserving regularization. Experiments on three heterogeneous optical–SAR benchmarks demonstrate that MV-S2CD consistently improves the Precision–Recall trade-off and achieves state-of-the-art performance among unsupervised baselines, while remaining backbone-flexible and efficient. Full article

29 pages, 2152 KB  
Article
Transformer-Autoencoder-Based Unsupervised Temporal Anomaly Detection for Network Traffic with Dual Prediction and Reconstruction
by Jieke Lu, Xinyi Yang, Yang Liu, Haoran Zuo, Feng Zhou, Tong Yu, Dengmu Liu, Tianping Deng and Lijun Luo
Appl. Sci. 2026, 16(4), 2143; https://doi.org/10.3390/app16042143 - 23 Feb 2026
Viewed by 605
Abstract
With the rapid growth of large-scale networks, traditional rule-based and supervised anomaly detection methods struggle with heavy reliance on labeled data, slow response to rapidly changing patterns, and difficulty in capturing complex temporal anomalies. At the same time, real-world traffic exhibits strong class imbalance, where normal samples overwhelmingly dominate, causing many existing models to miss subtle but critical abnormal behaviors. To address these challenges, this paper proposes an unsupervised temporal anomaly detection framework for network traffic based on a Transformer-autoencoder bidirectional prediction and reconstruction model. The framework combines the advantages of autoencoders and regression models, using multi-head self-attention and positional encoding to capture long-range temporal dependencies in traffic sequences. A masked decoding mechanism is further employed to prevent information leakage from future time steps. The model jointly generates forward and backward predictions as well as reconstructed sequences, and multiple anomaly scoring strategies that integrate prediction and reconstruction errors enhance its sensitivity to point, contextual, and collective anomalies under highly imbalanced data. Experiments on three public benchmark datasets demonstrate that the proposed method significantly improves detection performance, achieving an F1 score of up to 0.960 and a precision of 0.949, with recall approaching 1.0, while reducing false alarms, thereby showing strong applicability to practical network security scenarios. Full article
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
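The combined scoring idea, fusing bidirectional prediction errors with reconstruction error, can be sketched as follows. The equal weighting and absolute-error choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(x, fwd_pred, bwd_pred, recon, alpha=0.5):
    """Per-time-step anomaly score for a traffic sequence.
    x:        observed values
    fwd_pred: forward (past -> future) predictions of x
    bwd_pred: backward (future -> past) predictions of x
    recon:    autoencoder reconstruction of x
    Larger scores indicate more anomalous time steps."""
    pred_err = 0.5 * (np.abs(x - fwd_pred) + np.abs(x - bwd_pred))
    recon_err = np.abs(x - recon)
    # alpha trades off the predictive and reconstructive error signals.
    return alpha * pred_err + (1 - alpha) * recon_err
```

A point anomaly inflates all three error terms at a single step, while contextual and collective anomalies tend to show up more strongly in the prediction errors, which is why mixing the two signals improves sensitivity.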

25 pages, 1563 KB  
Article
BERT-LogAnom: Enhancing Log Anomaly Detection with Gated Residual BiLSTM and Dynamic Thresholding
by Xi Lu, Shufan An, Jingmei Chen, Zhan Shu, Weiping Wang, Runyi Qi and Yapeng Diao
Electronics 2026, 15(4), 806; https://doi.org/10.3390/electronics15040806 - 13 Feb 2026
Viewed by 470
Abstract
As modern software systems continue to grow in scale and structural complexity, log anomaly detection has become an essential component of system monitoring and fault diagnosis. However, existing approaches often struggle to adequately capture sequential dependencies in log data and to remain robust under distributional changes. To mitigate these issues, this paper presents BERT-LogAnom, an unsupervised framework for log anomaly detection that combines contextual representation learning, sequential modeling, and adaptive decision mechanisms. Specifically, a BERT-based encoder is employed to learn global contextual semantics from log sequences, while a gated residual bidirectional Long Short-Term Memory (GR-BiLSTM) network is introduced to model bidirectional temporal dependencies without disrupting the learned contextual information. To characterize normal system behavior from unlabeled logs, two self-supervised objectives—masked log key prediction and volume hypersphere minimization—are jointly optimized during training. Furthermore, a Dynamic Thresholding Prediction Module (DTPM) is incorporated to adjust anomaly decision boundaries in response to short-term statistical fluctuations and longer-term distribution drift. Experiments conducted on three public benchmark datasets (HDFS, BGL, and Thunderbird) show that BERT-LogAnom achieves consistently superior performance compared with representative baseline methods across precision, recall, and F1-score. Additional ablation studies further confirm the contribution of each major component in the proposed framework. Full article
(This article belongs to the Section Computer Science & Engineering)
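A dynamic threshold of the kind the DTPM describes can be approximated with a sliding mean-plus-k-sigma rule over recent anomaly scores, so the decision boundary tracks short-term fluctuations and slow drift. `window` and `k` are assumed parameters here; the actual module is more elaborate.

```python
import numpy as np

def dynamic_threshold(scores, window=50, k=3.0):
    """Adaptive decision boundary: for each step, threshold = mean + k * std
    of the scores in a trailing window (including the current step)."""
    scores = np.asarray(scores, dtype=float)
    thr = np.empty_like(scores)
    for i in range(len(scores)):
        hist = scores[max(0, i - window):i + 1]
        thr[i] = hist.mean() + k * hist.std()
    return thr
```

Usage: `anomalies = scores > dynamic_threshold(scores)`. A score that is large relative to its own recent history is flagged, while a gradual rise in the baseline simply lifts the threshold instead of triggering a flood of alarms.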

31 pages, 11526 KB  
Review
Transferability and Robustness in Proximal and UAV Crop Imaging
by Jayme Garcia Arnal Barbedo
Agronomy 2026, 16(3), 364; https://doi.org/10.3390/agronomy16030364 - 2 Feb 2026
Cited by 1 | Viewed by 493
Abstract
AI-driven imaging is becoming central to crop monitoring, with proximal and unmanned aerial vehicle (UAV) platforms now routinely used for disease and stress detection, yield estimation, canopy structure, and fruit counting. Yet, as these models move from plots to farms, the main bottleneck is no longer raw accuracy but robustness under distribution shift. Systems trained in one field, season, cultivar, or sensor often fail when the scene, sensor, protocol, or timing changes in realistic ways. This review synthesizes recent advances on robustness and transferability in proximal and UAV imaging, drawing on a corpus of 42 core studies across field crops, orchards, greenhouse environments, and multi-platform phenotyping. Shift types are organized into four axes, namely scene, sensor, protocol, and time. The article also maps the empirical evidence on when RGB imaging alone is sufficient and when multispectral, hyperspectral, or thermal modalities can potentially improve robustness. This serves as a basis to synthesize acquisition and evaluation practices that often matter more than architectural tweaks, which include phenology-aware flight planning, radiometric standardization, metadata logging, and leave-one-field/season-out splits. Adaptation options are consolidated into a practical symptom/remedy roadmap, ranging from lightweight normalization and small target-set fine-tuning to feature alignment, unsupervised domain adaptation, style translation, and test-time updates. Finally, a benchmark and dataset agenda are outlined with emphasis on object-oriented splits, cross-sensor and cross-scale collections, and longitudinal datasets where the same fields are followed across seasons under different management regimes. 
The goal is to outline practices and evaluation protocols that support progress toward deployable and auditable systems, noting that such claims require standardized out-of-distribution testing and transparent reporting as emphasized in the benchmark specification and experiment suite proposed here. Full article

19 pages, 8143 KB  
Article
300-GHz Photonics-Aided Wireless 2 × 2 MIMO Transmission over 200 m Using GMM-Enhanced Duobinary Unsupervised Adaptive CNN
by Luhan Jiang, Jianjun Yu, Qiutong Zhang, Wen Zhou and Min Zhu
Sensors 2026, 26(3), 842; https://doi.org/10.3390/s26030842 - 27 Jan 2026
Viewed by 491
Abstract
Terahertz wireless communication offers ultra-high bandwidth, enabling extremely high data rates for next-generation networks. However, it faces challenges including severe propagation loss and atmospheric absorption, which limit the transmission rate and distance. To address these problems, polarization division multiplexing (PDM) and antenna diversity techniques are utilized in this work to increase system capacity without changing the bandwidth of transmitted signals. Meanwhile, duobinary shaping is used to mitigate the bandwidth limitation of components in the system, and the final duobinary signals are recovered by maximum likelihood sequence detection (MLSD). A Gaussian mixture model (GMM)-enhanced duobinary unsupervised adaptive convolutional neural network (DB-UACNN) is proposed to further suppress channel noise. Based on these technologies, a 2 × 2 multiple-input multiple-output (MIMO) photonics-aided terahertz wireless transmission system at 300 GHz is demonstrated. Experimental results show that the signal-to-noise ratio (SNR) gain of duobinary shaping is up to 1.87 dB and 1.70 dB in X-polarization and Y-polarization, respectively. The proposed GMM-enhanced DB-UACNN shows an extra SNR gain of up to 2.59 dB and 2.63 dB in X-polarization and Y-polarization, compared to the conventional duobinary filter. A transmission rate of 100 Gbit/s over a distance of 200 m is finally realized under a 7% hard-decision forward error correction (HD-FEC) threshold. Full article
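Classical precoded duobinary shaping, the signal format the system builds on, can be shown in a short sketch: precode the bits, then sum each precoded symbol with its predecessor to get a 3-level signal with a compressed spectrum. This is the textbook encode/decode rule; the paper recovers the signal with MLSD rather than the symbol-by-symbol decision used here.

```python
import numpy as np

def duobinary_encode(bits, p0=0):
    """Precoded duobinary: p_k = a_k XOR p_{k-1}, then c_k = p_k + p_{k-1},
    yielding levels {0, 1, 2}."""
    p, prev = [], p0
    for a in bits:
        prev = int(a) ^ prev       # differential precoding
        p.append(prev)
    p = np.array([p0] + p)
    return p[1:] + p[:-1]          # controlled inter-symbol correlation

def duobinary_decode(levels):
    """Thanks to the precoder, c_k mod 2 recovers a_k without error
    propagation (in the noiseless case)."""
    return np.asarray(levels) % 2
```

The precoding step is what makes the simple modulo-2 decision work; without it, a single decision error would propagate through the rest of the sequence.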

26 pages, 712 KB  
Article
Comparing Multi-Scale and Pipeline Models for Speaker Change Detection
by Alymzhan Toleu, Gulmira Tolegen and Bagashar Zhumazhanov
Acoustics 2026, 8(1), 5; https://doi.org/10.3390/acoustics8010005 - 25 Jan 2026
Viewed by 935
Abstract
Speaker change detection (SCD) in long, multi-party meetings is essential for diarization, automatic speech recognition (ASR), and summarization, and is now often performed in the space of pre-trained speech embeddings. However, unsupervised approaches remain dominant when labeled audio is scarce, and their behavior under a unified modeling setup is still not well understood. In this paper, we systematically compare two representative unsupervised approaches on a multi-talker audio meeting corpus: (i) a clustering-based pipeline that segments and clusters embeddings/features and scores boundaries via cluster changes and jump magnitude, and (ii) a multi-scale jump-based detector that measures embedding discontinuities at several window lengths and fuses them via temporal clustering and voting. Using a shared front-end and protocol, we vary the underlying features (ECAPA, WavLM, wav2vec 2.0, MFCC, and log-Mel) and test each method's robustness under additive noise. The results show that embedding choice is crucial and that the two methods offer complementary trade-offs: the pipeline yields low false alarm rates but higher misses, while the multi-scale detector achieves relatively high recall at the cost of many false alarms. Full article
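The multi-scale jump-based detector can be illustrated with a minimal sketch: compute an embedding discontinuity score at each window length, then vote across scales. The cosine-distance score, the per-scale quantile rule, and the majority vote are simplifying assumptions, not the paper's exact fusion scheme.

```python
import numpy as np

def jump_scores(emb, window):
    """Discontinuity at one scale: cosine distance between the mean
    embeddings of the left and right windows around each frame."""
    n = len(emb)
    scores = np.zeros(n)
    for t in range(window, n - window):
        left = emb[t - window:t].mean(axis=0)
        right = emb[t:t + window].mean(axis=0)
        cos = left @ right / (np.linalg.norm(left) * np.linalg.norm(right) + 1e-12)
        scores[t] = 1.0 - cos
    return scores

def multi_scale_votes(emb, windows=(5, 10, 20), quantile=0.95):
    """A frame is a candidate change point when a majority of scales
    rank it above their own score quantile."""
    votes = np.zeros(len(emb))
    for w in windows:
        s = jump_scores(emb, w)
        votes += (s > np.quantile(s, quantile)).astype(float)
    return votes >= (len(windows) + 1) // 2
```

Short windows localize boundaries precisely but react to transient noise; long windows are stable but smear the boundary, which is why fusing several scales trades recall against false alarms, as the comparison above observes.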

27 pages, 5415 KB  
Article
Deep Learning-Based 3D Reconstruction for Defect Detection in Shipbuilding Sub-Assemblies
by Paula Arcano-Bea, Agustín García-Fischer, Pedro-Pablo Gómez-González, Francisco Zayas-Gato, José Luis Calvo-Rolle and Héctor Quintián
Sensors 2026, 26(2), 660; https://doi.org/10.3390/s26020660 - 19 Jan 2026
Viewed by 725
Abstract
Detecting overshooting defects in shipbuilding sub-assemblies is essential to ensuring the final product's overall integrity and safety. In this work, we focus on the automatic detection of overshooting defects in simple and T-shaped sub-assemblies by employing reconstruction-based unsupervised learning on 3D point clouds. To this end, we implemented and compared four state-of-the-art architectures: a Variational Autoencoder (VAE), FoldingNet, a Dynamic Graph CNN (DGCNN) autoencoder, and a PointNet++ autoencoder. These architectures were trained exclusively on defect-free samples, anticipating that overshooting defects may occur in different locations and with varying geometric patterns that are difficult to characterize explicitly in advance. Defects are then identified by applying an Isolation Forest to the reconstruction error features, enabling fully unsupervised anomaly detection and allowing us to study how detection performance changes with the contamination parameter. The results show that reconstruction-based anomaly detection on point clouds is a viable strategy for identifying defects in an industrial environment and highlight the importance of choosing architectures that balance detection performance, stability across different geometries, and computational cost. Full article
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2025)

27 pages, 2554 KB  
Article
Resilient Anomaly Detection in Ocean Drifters with Unsupervised Learning, Deep Learning Models, and Energy-Efficient Recovery
by Claire Angelina Guo, Jiachi Zhao and Eugene Pinsky
Oceans 2026, 7(1), 5; https://doi.org/10.3390/oceans7010005 - 6 Jan 2026
Viewed by 1008
Abstract
Climate change and ocean pollution have made monitoring of ocean surface behavior a priority. Ocean drifters, which are floating sensors that record position and velocity, help track ocean dynamics. However, environmental events such as oil spills can cause abnormal behavior, making anomaly detection critical. Unsupervised learning, combined with deep learning and advanced data handling, is used to detect unusual behavior more accurately on the NOAA Global Drifter Program dataset, focusing on regions of the West Coast and the Gulf of Mexico, for time periods spanning 2010 to 2024. Using Density-Based Spatial Clustering of Applications with Noise (DBSCAN), pseudo-labels of anomalies are generated to train both a one-dimensional Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. The two models are then compared using bootstrapping with block shuffling, as well as 10 trials summarized with bar charts. The results are nuanced, with each model outperforming the other in different contexts. Across the four spatiotemporal domains, differences in the rate of increase of anomalies are found, showing the relevance of the proposed pipeline. Beyond detection, data reliability and efficiency are addressed: a RAID-inspired recovery method reconstructs missing data, while delta encoding and gzip compression cut storage and transmission costs. This framework enhances anomaly detection, ensures reliable recovery, and reduces energy consumption, thereby providing a sustainable system for timely environmental monitoring. Full article
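The delta-encoding-plus-gzip step is straightforward to sketch with the Python standard library; the fixed quantization scale and function names are assumptions for illustration, not the paper's pipeline. Successive drifter positions differ by small amounts, so the deltas are highly repetitive and compress far better than the raw values.

```python
import gzip
import numpy as np

def compress_track(values, scale=1000):
    """Quantize a drifter time series to integers, delta-encode it,
    and gzip the resulting byte stream."""
    q = np.round(np.asarray(values) * scale).astype(np.int64)
    deltas = np.diff(q, prepend=0)          # first delta carries q[0] itself
    return gzip.compress(deltas.tobytes())

def decompress_track(blob, scale=1000):
    """Invert the pipeline: gunzip, cumulative-sum the deltas, rescale."""
    deltas = np.frombuffer(gzip.decompress(blob), dtype=np.int64)
    return np.cumsum(deltas) / scale
```

Prepending zero before differencing makes the first delta equal to the first quantized value, so the cumulative sum reconstructs the series exactly up to quantization error.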

19 pages, 8178 KB  
Article
SpectralNet-Enabled Root Cause Analysis of Frequency Anomalies in Solar Grids Using μPMU
by Arnabi Modak, Maitreyee Dey, Preeti Patel and Soumya Prakash Rana
Energies 2026, 19(1), 268; https://doi.org/10.3390/en19010268 - 4 Jan 2026
Viewed by 527
Abstract
The rapid integration of solar power into distribution grids has intensified challenges related to frequency instability caused by fluctuating renewable generation. These unexpected frequency variations are difficult to capture using traditional or supervised methods because they emerge from nonlinear, rapidly changing inverter grid interactions and often lack labelled examples. To address this, the present work introduces a unique, frequency-centric framework for unsupervised detection and root cause analysis of grid anomalies using high-resolution micro-Phasor Measurement Unit (μPMU) data. Unlike previous studies that focus primarily on voltage phasors or rely on predefined event labels, this work employs SpectralNet, a deep spectral clustering approach, integrated with autoencoder-based feature learning to model the nonlinear interactions between frequency, ROCOF, voltage, and current. These methods are particularly effective for unexpected frequency variations because they learn intrinsic, hidden structures directly from the data and can group abnormal frequency behavior without prior knowledge of event types. The proposed model autonomously identifies distinct root causes such as unbalanced loads, phase-specific faults, and phase imbalances behind hazardous frequency deviations. Experimental validation on a real solar-integrated distribution feeder in the UK demonstrates that the framework achieves superior cluster compactness and interpretability compared to traditional methods like K-Means, GMM, and Fuzzy C-Means. The findings highlight SpectralNet’s capability to uncover subtle, nonlinear patterns in μPMU data, offering an adaptive, data-driven tool for enhancing grid stability and situational awareness in renewable-rich power systems. Full article

15 pages, 963 KB  
Article
Development and Validation of a Targeted Metabolomic Tool for Metabotype Classification in Schoolchildren
by Sheyla Karina Hernández-Ramírez, Diego Arturo Velázquez-Trejo, Eduardo Sandoval-Colín, Cristóbal Fresno, Mariana Flores-Torres, Ernestina Polo-Oteyza, María José Garcés-Hernández, Nayely Garibay-Nieto, Isabel Ibarra-González, Marcela Vela-Amieva, Guadalupe Estrada-Gutierrez and Felipe Vadillo-Ortega
Metabolites 2026, 16(1), 44; https://doi.org/10.3390/metabo16010044 - 4 Jan 2026
Viewed by 766
Abstract
Background: Metabolomic profiling can uncover metabolic differences among seemingly healthy children, providing opportunities for personalized medicine and early detection of risk biomarkers for future metabolic disorders. This study aimed to identify and internally validate metabotypes in apparently healthy schoolchildren using targeted serum metabolomics and to assess the external validity of this metabotype classification tool in two separate groups of children. Methods: Data from schoolchildren aged 6–11 years were analyzed in two phases. In the first phase, we developed and validated a classification tool using targeted serum metabolomics in healthy children. Metabotypes were identified through unsupervised clustering with a self-organizing map, followed by assessment of cluster stability and classification accuracy. In the second phase, we tested the tool's consistency by applying it to two additional groups: the same children from phase 1 after a 10-month physical activity intervention, and a separate group diagnosed with metabolic syndrome. Results: Three metabotypes were identified in healthy children: METBA (balanced profile), METLI (high lipid and glucose levels), and METAA (high amino acid levels). Internal validation showed strong cluster stability (ARI = 0.79) and high classification accuracy (0.95). After the intervention, 55% of children were reclassified, indicating diverse metabolic responses to physical activity. Among children with metabolic syndrome, 83% were classified as METLI and 13% as METAA. Conclusions: This tool revealed serum metabolomic diversity, enabling classification of healthy children into three distinct metabotypes. It also detects changes in metabotype classification associated with a physical activity intervention and identifies the majority of children diagnosed with metabolic syndrome within two groups. This supports the potential use of metabotypes as biomarkers and eventually for personalized interventions. Full article
(This article belongs to the Special Issue Proteomics and Metabolomics in Human Health and Disease)
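The cluster-stability figure reported in the abstract (ARI = 0.79) is an adjusted Rand index between two partitions of the same samples. As an illustrative sketch only — not the authors' code, and with hypothetical example labelings — the index can be computed from the contingency table of the two partitions:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same samples.

    Illustrative sketch of the stability metric named in the abstract;
    1.0 means identical partitions, values near 0 mean chance agreement.
    """
    n = len(labels_a)
    # Contingency counts: how many samples share each (cluster_a, cluster_b) pair.
    contingency = Counter(zip(labels_a, labels_b))
    index = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# Hypothetical labelings: the second is a relabeling of the first, so ARI = 1.0.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))
```

In a stability analysis, the two labelings would come from re-clustering resampled subsets of the same children; a high ARI across resamples indicates the metabotypes are not artifacts of one particular run.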

23 pages, 2700 KB  
Article
Elevated SASP Factors, Reduced Antioxidant Enzymes, and Increased Tumor Susceptibility in Space Radiation-Exposed ApcMin/+ Mice
by Kamendra Kumar, Jerry Angdisen, Albert J. Fornace and Shubhankar Suman
Int. J. Mol. Sci. 2026, 27(1), 211; https://doi.org/10.3390/ijms27010211 - 24 Dec 2025
Abstract
Human missions into deep space will expose astronauts to the unique and complex radiation environment of galactic cosmic radiation (GCR), a mixed field of high-energy protons and heavy ions predicted to substantially increase long-term cancer risk. To support effective risk stratification, early detection, and mitigation strategies, there is a need to identify biomarkers indicative of GCR-induced cancer risk. Here, we applied a Tandem Mass Tag (TMT)-based quantitative proteomics approach to identify potential biomarkers associated with GCR-induced gastrointestinal (GI) and mammary tumorigenesis using the female ApcMin/+ mouse, a well-established model of human colorectal and breast cancer. Eight- to ten-week-old ApcMin/+ mice were exposed to 75 cGy of simulated GCR and serum and tissue samples were collected 100–110 days post-exposure for molecular and histopathological analyses. Tumor incidence was scored by blinded observers, and serum proteomes exhibiting a fold change > 1.2 or <0.83 with p < 0.05 were considered significantly altered. Bioinformatics analyses, including Gene Ontology, Kyoto Encyclopedia of Genes and Genomes pathway enrichment, and unsupervised clustering, were employed to delineate GCR-responsive molecular networks. Validation of differentially expressed proteins (DEPs) was performed using immunoblotting, ELISA, and enzyme activity assays. GCR exposure resulted in a significant increase in both GI and mammary tumor burden relative to controls. Proteomic profiling revealed 194 upregulated and 461 downregulated proteins, distinguishing GCR-exposed from control serum proteomes. Functional enrichment analyses highlighted alterations in metabolic processes, PI3K-AKT, HIF-1, and PPAR signaling pathways, alongside the suppression of antioxidant defense mechanisms. Notably, mice exposed to GCR exhibited elevated serum levels of TGF-β1 and MMP9, accompanied by reduced levels and enzymatic activities of key antioxidant defenses. Cross-referencing 36 GCR-induced serum SASP factors with the Human Protein Atlas revealed 11 SASP proteins associated with human breast and colorectal cancers. Together, these findings show that GCR exposure triggers a pro-tumorigenic serum proteomic signature that may serve as a biomarker for assessing cancer risk in astronauts during deep-space missions. Full article
(This article belongs to the Section Molecular Biology)
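The significance filter stated in the abstract (fold change > 1.2 or < 0.83 with p < 0.05) is simple to express as a decision rule. The sketch below is illustrative only; the protein names and fold-change/p-value pairs are hypothetical examples, not data from the study:

```python
def is_differentially_expressed(fold_change, p_value,
                                up=1.2, down=0.83, alpha=0.05):
    """DEP filter as described in the abstract: up- or downregulated
    beyond the fold-change cutoffs, with a significant p-value."""
    return p_value < alpha and (fold_change > up or fold_change < down)

# Hypothetical (fold_change, p_value) pairs for illustration.
proteins = {
    "TGFB1": (1.45, 0.01),
    "MMP9":  (1.30, 0.03),
    "SOD1":  (0.70, 0.02),
    "ALB":   (1.05, 0.40),
}
deps = {name for name, (fc, p) in proteins.items()
        if is_differentially_expressed(fc, p)}
print(sorted(deps))
```

Applied over a full TMT quantification table, the same rule partitions the serum proteome into the upregulated and downregulated sets the abstract reports (194 and 461 proteins, respectively).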

27 pages, 3720 KB  
Article
The Threshold of Soil Organic Carbon and Topography Reveal Degradation Patterns in Brazilian Pastures: Evidence from Rio de Janeiro State
by Fernando Arão Bila Junior, Fernando António Leal Pacheco, Carlos Alberto Valera, Adriana Monteiro da Costa, Maria de Lourdes Mendonça-Santos, Luís Filipe Sanches Fernandes and João Paulo Moura
Sustainability 2025, 17(23), 10764; https://doi.org/10.3390/su172310764 - 1 Dec 2025
Abstract
Soil organic carbon (SOC) is a key indicator for assessing pasture degradation. This study presents an integrated, field-based approach to analyzing SOC dynamics in pastures of Rio de Janeiro state (Brazil). Unlike methods based exclusively on remote sensing or modeling, our analysis is based on 350 georeferenced soil samples collected by Embrapa Solos and complemented by historical land use data, providing robust and reliable empirical evidence. Statistical methods (ANOVA, Tukey test), geostatistical interpolation (kriging), and unsupervised clustering (k-means) were used to characterize the spatiotemporal distribution of SOC. The results revealed patterns linked to both topographic and anthropogenic drivers, enabling the objective delineation of degraded versus non-degraded pastures. SOC levels below 40 g/kg in areas under 300 m elevation were strongly associated with degradation due to intensive use. In contrast, degradation at higher altitudes was primarily linked to sloping terrain more prone to water erosion. This methodological approach demonstrates the potential of combining field data with data mining tools to detect degradation patterns and inform targeted land management. The findings reaffirm SOC as a vital indicator of soil quality and highlight the importance of sustainable pasture practices in conserving carbon stocks and mitigating climate change. The proposed threshold-based method offers a practical foundation for diagnosing degraded pastures and identifying priority areas for restoration. Full article
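The threshold-based diagnosis described above can be sketched as a plain decision rule. The SOC cutoff (40 g/kg) and elevation cutoff (300 m) come from the abstract; the slope cutoff used below (15 degrees) is an assumed placeholder, since the abstract only says "sloping terrain" without giving a number:

```python
def classify_pasture(soc_g_per_kg, elevation_m, slope_deg):
    """Threshold rule sketched from the abstract: low SOC in lowland
    pastures (< 300 m) indicates degradation from intensive use, while
    low SOC on steep terrain at higher altitude points to water erosion.
    The 15-degree slope cutoff is an illustrative assumption."""
    if soc_g_per_kg >= 40:
        return "non-degraded"
    if elevation_m < 300:
        return "degraded (intensive use)"
    if slope_deg > 15:
        return "degraded (water erosion)"
    return "non-degraded"

# Hypothetical sample points for illustration.
print(classify_pasture(30, 200, 5))    # lowland, low SOC
print(classify_pasture(55, 200, 5))    # lowland, adequate SOC
print(classify_pasture(30, 500, 20))   # upland, steep, low SOC
```

In the study itself the delineation came from kriging and k-means over 350 georeferenced samples; a rule like this is only the downstream, threshold-style summary of those patterns.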
