
Search Results (480)

Search Parameters:
Keywords = manifold learning

12 pages, 1020 KB  
Article
Dimensionality Reduction and Machine Learning Methods for COVID-19 Classification Using Chest CT Images
by Alexandra Isabella Somodi, Akul Sharma, Alexis Bennett and Dominique Duncan
Electronics 2026, 15(6), 1235; https://doi.org/10.3390/electronics15061235 - 16 Mar 2026
Abstract
During the COVID-19 pandemic, researchers have made efforts to detect COVID-19 through various methods. In the dataset used for this study, COVID-19 patients were identified using chest computed tomography (CT) images. High dimensionality is frequently an issue in machine learning image classification. Accordingly, this study implemented three dimensionality reduction methods in combination with various machine learning algorithms for improved classification. Principal component analysis (PCA), uniform manifold approximation and projection (UMAP), and diffusion maps were applied to the dataset to extract the most important features of the chest CT images. The extracted features were given as input either to logistic regression or the extreme gradient boosting (XGBoost) algorithm to perform classification. The strongest model identified from this study was diffusion maps in combination with logistic regression. This model, evaluated against existing models from similar studies in recent years, yielded strong performance for detecting COVID-19 cases using chest CT images. Our proposed model achieved 97.35% accuracy, 92.16% sensitivity, and 98.59% specificity on the held-out test set in differentiating between COVID-19-positive cases and healthy, non-COVID-19 cases. This study aimed to detect COVID-19 without the use of viral testing. Importantly, this method could assist clinicians in making an initial diagnosis, especially when viral testing is not available or timely enough for the patient’s case. This study also provides deeper insight into various dimensionality reduction methods and how compatible they are with biomedical imaging data. Models were trained using stratified cross-validation on the training set, with final performance evaluated on a held-out test set at the patient level to prevent data leakage. Additional imbalance-aware metrics were used to assess robustness given class distribution differences.
(This article belongs to the Special Issue Advances in Machine Learning for Image Classification)
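The reduce-then-classify pattern this abstract describes can be sketched in a few lines of scikit-learn. This is a minimal illustration on synthetic data (not the study's CT dataset), with PCA standing in for the diffusion-map step, which scikit-learn does not provide:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for flattened chest-CT feature vectors (hypothetical data):
# 200 samples, 500 features, with a few high-variance informative directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
X[:, :5] *= 3.0                              # boost variance of informative columns
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # label driven by those directions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Reduce dimensionality, then classify, as in the study's pipeline.
# UMAP or a diffusion-map implementation could be swapped in the same way.
clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Because the informative directions carry most of the variance, PCA retains them and the downstream linear classifier separates the classes well.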

22 pages, 526 KB  
Review
Learning Nonlinear Motor Control: How Integrating Machine Learning and Nonlinear Dynamics Reveals Structure, Adaptation, and Control in Human Movement
by Armin Hakkak Moghadam Torbati, Yavar Shiravand and Armin Mazinani
Actuators 2026, 15(3), 166; https://doi.org/10.3390/act15030166 - 16 Mar 2026
Abstract
Human movement emerges from complex interactions between neural processes, musculoskeletal dynamics, and environmental constraints, resulting in behavior that is inherently nonlinear. Therefore, nonlinear dynamical systems approaches have been widely used to characterize variability, stability, and coordination in motor behavior. However, despite their conceptual value, these methods are often applied post hoc and remain limited in their ability to support prediction, control, and integration of high-dimensional multimodal data. Artificial intelligence (AI) provides a complementary modeling framework capable of addressing these limitations. Yet many current AI applications treat motor signals primarily as feature sets for classification or regression, leaving the underlying dynamical structure of movement underexplored. This review synthesizes recent research that integrates AI with nonlinear motor control analysis to model, interpret, and control human movement across neural, biomechanical, and behavioral domains. We organize related studies according to the type of nonlinear motor control problem addressed, including input–output mappings, temporal dynamics, and adaptive control policies under conditions of partial observability and nonstationarity. Across these examples, we show that AI becomes scientifically informative when constrained and evaluated by nonlinear dynamical constructs such as attractors, phase relationships, manifolds, and stability structures. Finally, we discuss current limitations and outline future directions toward theory-informed, explainable, and closed-loop AI models for motor control and human–actuator interaction.
(This article belongs to the Special Issue Analysis and Design of Linear/Nonlinear Control System—2nd Edition)

31 pages, 23615 KB  
Article
A Memory-Efficient Class-Incremental Learning Framework for Remote Sensing Scene Classification via Feature Replay
by Yunze Wei, Yuhan Liu, Ben Niu, Xiantai Xiang, Jingdun Lin, Yuxin Hu and Yirong Wu
Remote Sens. 2026, 18(6), 896; https://doi.org/10.3390/rs18060896 - 15 Mar 2026
Abstract
Most existing deep learning models for remote sensing scene classification (RSSC) adopt an offline learning paradigm, where all classes are jointly optimized on fixed-class datasets. In dynamic real-world scenarios with streaming data and emerging classes, such paradigms are inherently prone to catastrophic forgetting when models are incrementally trained on new data. Recently, a growing number of class-incremental learning (CIL) methods have been proposed to tackle these issues, some of which achieve promising performance by rehearsing training data from previous tasks. However, implementing such a strategy in real-world scenarios is often challenging, as the requirement to store historical data frequently conflicts with strict memory constraints and data privacy protocols. To address these challenges, we propose a novel memory-efficient feature-replay CIL framework (FR-CIL) for RSSC that retains compact feature embeddings, rather than raw images, as exemplars for previously learned classes. Specifically, a progressive multi-scale feature enhancement (PMFE) module is proposed to alleviate representation ambiguity. It adopts a progressive construction scheme to enable fine-grained and interactive feature enhancement, thereby improving the model’s representation capability for remote sensing scenes. Then, a specialized feature calibration network (FCN) is trained in a transductive learning paradigm with manifold consistency regularization to adapt stored feature descriptors to the updated feature space, thereby effectively compensating for feature space drift and enabling a unified classifier. Following feature calibration, a bias rectification (BR) strategy is employed to mitigate prediction bias by exclusively optimizing the classifier on a balanced exemplar set. As a result, this memory-efficient CIL framework not only addresses data privacy concerns but also mitigates representation drift and classifier bias. Extensive experiments on public datasets demonstrate the effectiveness and robustness of the proposed method. Notably, FR-CIL outperforms the leading state-of-the-art CIL methods in mean accuracy by margins of 3.75%, 3.09%, and 2.82% on the six-task AID, seven-task RSI-CB256, and nine-task NWPU-45 datasets, respectively. At the same time, it reduces memory storage requirements by over 94.7%, highlighting its strong potential for real-world RSSC applications under strict memory constraints.

19 pages, 1198 KB  
Article
GSMTNet: Dual-Stream Video Anomaly Detection via Gated Spatio-Temporal Graph and Multi-Scale Temporal Learning
by Di Jiang, Huicheng Lai, Guxue Gao, Dan Ma and Liejun Wang
Electronics 2026, 15(6), 1200; https://doi.org/10.3390/electronics15061200 - 13 Mar 2026
Viewed by 122
Abstract
Video Anomaly Detection aims to identify video segments containing abnormal events, a task that relies heavily on temporal modeling, particularly when anomalies exhibit only subtle deviations from normal events. However, most existing methods inadequately model the heterogeneity in spatiotemporal relationships, especially the dynamic interactions between human pose and video appearance. To address this, we propose GSMTNet, a dual-stream heterogeneous unsupervised network integrating gated spatio-temporal graph convolution and multi-scale temporal learning. First, we introduce a dynamic graph structure learning module, which leverages gated spatio-temporal graph convolutions with manifold transformations to model latent spatial relationships via human pose graphs. This is coupled with a normalizing flow-based density estimation module to model the probability distribution of normal samples in a latent space. Second, we design a hybrid dilated temporal module that employs multi-scale temporal feature learning to simultaneously capture long- and short-term dependencies, thereby enhancing the separability between normal patterns and potential deviations. Finally, we propose a dual-stream fusion module to hierarchically integrate features learned from pose graphs and raw video sequences, followed by a prediction head that computes anomaly scores from the fused features. Extensive experiments demonstrate state-of-the-art performance, achieving 86.81% AUC on ShanghaiTech and 70.43% on UBnormal, outperforming existing methods in rare anomaly scenarios.

19 pages, 11709 KB  
Article
Dual-Manifold Contrastive Learning for Robust and Real-Time EEG Motor Decoding
by Chengsi Hu, Qing Liu, Chenying Xu, Guanglin Li and Yongcheng Li
Sensors 2026, 26(6), 1783; https://doi.org/10.3390/s26061783 - 12 Mar 2026
Viewed by 136
Abstract
Brain–computer interfaces (BCIs) have great potential for consumer electronics, as they enable the decoding of brain activity to control external devices and assist human–computer interaction. However, current decoding methods for BCIs face several challenges, such as low accuracy, poor stability under electrode shift, and slow processing for real-time use. In this paper, we propose a hybrid decoding framework designed to address the challenges of current EEG decoding methods. Our method combines manifold learning with contrastive learning. The core of our method lies in a dual-manifold model that uses non-negative matrix factorization (NMF) and a contrastive manifold learning framework to extract clear and useful features from brain signals. To improve decoding stability, we introduce a joint training strategy that enhances feature learning. Furthermore, the system is optimized for real-time interaction, reducing the system latency to 100 ms. We collect EEG signals from 15 subjects performing motor execution tasks and 10 subjects performing motor imagery tasks to construct a motor EEG dataset. On this dataset, the proposed method achieves superior decoding performance, reaching F1-scores of 0.7382 for the motor imagery tasks and 0.8361 for the motor execution tasks. Furthermore, the method maintains robustness even with reduced electrode counts and altered spatial distributions, highlighting its potential as a decoding solution for reliable and portable BCI systems.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
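The NMF building block named in this abstract factors a non-negative matrix into a small set of additive components. A minimal sketch with scikit-learn, where the matrix shape, rank, and random data are illustrative stand-ins rather than the paper's EEG pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF

# Build an exactly rank-4 non-negative matrix as a toy "signal power" table
# (rows could be channels, columns features; all values are synthetic).
rng = np.random.default_rng(5)
W_true = rng.uniform(size=(100, 4))
H_true = rng.uniform(size=(4, 30))
V = W_true @ H_true

# Factor V ~= W @ H with non-negative factors, 4 components as in the ground truth.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Since the data are exactly low rank and non-negative, the relative reconstruction error is small, and both factors stay element-wise non-negative by construction.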

24 pages, 4319 KB  
Article
Integrative Population Analysis of MICA and MICB Using Unsupervised Machine Learning in a Large Histocompatibility Laboratory Cohort
by Luis Ramalhete, Paula Almeida, Ruben Araújo and Eduardo Espada
J 2026, 9(1), 8; https://doi.org/10.3390/j9010008 - 6 Mar 2026
Viewed by 166
Abstract
Background: Non-classical MHC class I molecules MICA and MICB are stress-inducible NKG2D ligands that contribute to immune surveillance, non-HLA antibody formation, and alloreactivity in solid organ and hematopoietic stem cell transplantation; population-level data for Southern Europe remain limited. Methods: High-resolution MICA and MICB genotyping was performed in 1364 unrelated individuals from southern Portugal using a hybrid-capture next-generation sequencing workflow, and allele calls were analyzed with standard population-genetic metrics (allele and genotype frequencies, heterozygosity, Hardy–Weinberg equilibrium, and LD-like D, D′, r²) and multilocus allele presence/absence encodings explored by k-means clustering, spectral clustering, principal component analysis, t-distributed stochastic neighbor embedding, and uniform manifold approximation and projection. Results: Forty-two MICA and twenty-two MICB alleles were identified; MICA*002:01, MICA*004:01, MICA*008:01, MICA*008:04 and MICB*002:01, MICB*004:01, MICB*005:02, MICB*008:01 were most frequent, and most individuals carried at least two distinct MICA and two distinct MICB allotypes. Co-occurrence and LD-like analyses revealed conserved MICA–MICB combinations, including a strong association between MICA*009:02 and MICB*005:06, while unsupervised analyses identified partially overlapping multilocus genotype backgrounds and recurrent four-allele constellations. Conclusions: These findings provide a detailed non-classical MHC reference for southern Portugal and a multilocus framework to support interpretation of non-HLA antibodies and MICA/MICB-aware donor evaluation in selected clinical scenarios, as well as the development of machine learning-based immunologic risk models.
(This article belongs to the Special Issue Feature Papers of J—Multidisciplinary Scientific Journal in 2026)
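The unsupervised workflow described here (a binary presence/absence encoding, reduced and clustered) can be sketched with PCA and k-means; the allele profiles below are invented placeholders, not the cohort's genotypes, and t-SNE or UMAP could replace PCA the same way:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical multilocus presence/absence matrix: rows = individuals,
# columns = alleles (1 = allele carried). Three synthetic genotype backgrounds.
rng = np.random.default_rng(1)
profiles = np.array([[1, 1, 0, 0, 0, 0],
                     [0, 0, 1, 1, 0, 0],
                     [0, 0, 0, 0, 1, 1]])
X = profiles[rng.integers(0, 3, size=300)].astype(float)
X += rng.normal(scale=0.05, size=X.shape)  # jitter so points are not identical

# Embed in 2-D, then cluster the embedding, as in the paper's PCA + k-means step.
emb = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
```

With well-separated backgrounds the three clusters are recovered cleanly; on real allele tables the cluster count would be chosen by inspection or a validity index.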

28 pages, 10911 KB  
Article
Galaxy Evolution with Manifold Learning
by Tsutomu T. Takeuchi, Suchetha Cooray and Ryusei R. Kano
Entropy 2026, 28(3), 288; https://doi.org/10.3390/e28030288 - 3 Mar 2026
Viewed by 210
Abstract
Matter in the early Universe was nearly uniform, and galaxies emerged through the gravitational growth of small primordial density fluctuations. Astrophysics has been trying to unveil the complex physical phenomena that have caused the formation and evolution of galaxies throughout the 13-billion-year history of the Universe using the first principles of physics. However, since present-day astrophysical big data contain more than 100 explanatory variables, such a conventional methodology faces limits in dealing with such data. We, instead, elucidate the physics of galaxy evolution by applying manifold learning, one of the latest methods of data science, to a feature space spanned by galaxy luminosities and cosmic time. We discovered a low-dimensional nonlinear structure of data points in this space, referred to as the galaxy manifold. We found that the galaxy evolution in the ultraviolet–optical–near-infrared luminosity space is well described by two parameters, star formation and stellar mass evolution, on the manifold. We also discuss a possible way to connect the manifold coordinates to physical quantities.
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)

19 pages, 2861 KB  
Article
A Channel-Independent Anchor Graph-Regularized Broad Learning System for Industrial Soft Sensors
by Zhiyi Zhang, Mingyi Yang, Cheng Xie, Zhigang Xu and Pengfei Yin
Entropy 2026, 28(3), 274; https://doi.org/10.3390/e28030274 - 28 Feb 2026
Viewed by 169
Abstract
To address the nonlinear dynamics and strong multivariate coupling inherent in complex industrial data, while overcoming the high computational costs and deployment challenges of deep learning, this paper proposes a Channel-Independent Anchor Graph-Regularized Broad Learning System (CI-GBLS). First, a Channel Independence (CI) strategy is introduced: by constructing physically isolated feature channels, multivariate inputs are orthogonally decomposed, enabling the model to mine the intrinsic temporal evolutionary patterns of each variable. Building upon this, enhancement nodes are constructed using Radial Basis Functions (RBFs) to capture nonlinear dynamics; moreover, RBF cluster centers are reused as graph anchors to design an efficient manifold regularization algorithm. This algorithm embeds the intrinsic geometric structure of the data into the learning objective via reduced rank approximation, thereby guiding output weights to explicitly reconstruct spatial coupling relationships while preserving manifold consistency. Experimental results on the IndPenSim process demonstrate that CI-GBLS effectively balances prediction accuracy and efficiency. It completes training within seconds, validating its effectiveness for complex time-series data and offering an efficient solution for real-time, high-precision industrial modeling.
(This article belongs to the Section Signal and Data Analysis)

32 pages, 16444 KB  
Article
BiFusion-LDSeg: A Latent Diffusion Framework with Bi-Directional Attention Fusion for Landslide Segmentation in Satellite Imagery
by Bingxin Shi, Hongmei Guo, Yin Sun, Jianyu Long, Li Yang, Yadong Zhou, Jingjing Jiao, Jingren Zhou, Yusen He and Huajin Li
Remote Sens. 2026, 18(5), 719; https://doi.org/10.3390/rs18050719 - 27 Feb 2026
Viewed by 173
Abstract
Rapid and accurate mapping of earthquake-triggered landslides from satellite imagery is critical for emergency response and hazard assessment, yet remains challenging due to irregular boundaries, extreme size variations, and atmospheric noise. This paper proposes BiFusion-LDSeg, a novel bi-directional fusion enhanced latent diffusion framework [...] Read more.
Rapid and accurate mapping of earthquake-triggered landslides from satellite imagery is critical for emergency response and hazard assessment, yet remains challenging due to irregular boundaries, extreme size variations, and atmospheric noise. This paper proposes BiFusion-LDSeg, a novel bi-directional fusion enhanced latent diffusion framework that synergistically combines CNN-Transformer architectures with generative diffusion models for robust landslide segmentation. The framework introduces three key innovations: (1) a dual-encoder with Bi-directional Attention Gates (Bi-AG) enabling sophisticated cross-modal feature calibration between local CNN textures and global Transformer context; (2) a conditional latent diffusion process operating in learned low-dimensional landslide shape manifolds, reducing computational complexity by 100× while enabling inference with only 10 sampling steps versus 1000+ in standard diffusion models; and (3) a boundary-aware progressive decoder employing multi-scale reverse attention mechanisms for precise boundary delineation. Comprehensive experiments on three earthquake datasets from Sichuan Province, China (Lushan Mw 7.0, Jiuzhaigou Mw 6.5, Luding Mw 6.8) demonstrate superior performance, outperforming state-of-the-art methods by 7–13% in IoU and 5–7% in DSC across all three datasets. The framework exhibits exceptional noise robustness, strong cross-dataset generalization, and inherent uncertainty quantification, enabling reliable deployment for post-earthquake landslide inventory mapping at regional scales. Full article

23 pages, 531 KB  
Article
Beacon-Aided Self-Calibration and Robust MVDR Beamforming for UAV Swarm Virtual Arrays Under Formation Drift and Low Snapshots
by Siming Chen, Xin Zhang, Shujie Li, Zichun Wang and Weibo Deng
Drones 2026, 10(3), 157; https://doi.org/10.3390/drones10030157 - 26 Feb 2026
Viewed by 288
Abstract
Unmanned aerial vehicle (UAV) swarms can form sparse virtual antenna arrays (VAAs) for airborne sensing and communications, but their beamforming performance is highly vulnerable to quasi-static formation drift and the limited number of snapshots available within each coherent processing interval. This paper proposes a beacon-aided self-calibration and robust beamforming framework for narrowband UAV-swarm uplinks in strong-interference, low-snapshot regimes. We consider one signal of interest (SOI) and multiple co-channel interferers characterized by their coarse direction-of-arrival (DOA) information. The key idea is to exploit a single dominant non-SOI emitter as a strong calibration source (beacon) to learn the quasi-static geometry drift from data. First, the beacon spatial signature is extracted from the sample covariance matrix via eigenvector–steering-vector alignment, and a correlation-based gate is used to decide whether geometry calibration is reliable. When the gate is passed, the inter-UAV position drift is estimated from element-wise steering ratios to build a calibrated array manifold. Second, using the calibrated steering vectors and coarse DOA information, the interference-plus-noise covariance matrix (INCM) is reconstructed through a low-dimensional non-negative power fitting with mild diagonal loading. Finally, a geometry-aware minimum-variance distortionless response (MVDR) beamformer is designed based on the reconstructed INCM. Simulations on coprime-inspired UAV formations with a single dominant interferer show that the proposed scheme recovers most of the SINR loss caused by geometry mismatch and consistently outperforms baseline MVDR, worst-case MVDR, a recent covariance-reconstruction baseline, and URGLQ in the low-snapshot regime. For example, in a representative setting with N_uav = 7, σ_p = 0.10, INR_c = 30 dB, and L = 10, the proposed method achieves approximately 14 dB output SINR at SNR_in = 10 dB, outperforming nominal SCM-MVDR by about 13 dB and approaching a genie-aided MVDR bound within a few dB, while retaining a computational complexity comparable to standard MVDR.
(This article belongs to the Special Issue Optimizing MIMO Systems for UAV Communication Networks)
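The MVDR step at the core of this framework is compact enough to sketch in NumPy. This toy uses a 7-element uniform linear array with half-wavelength spacing and sample-covariance estimation with mild diagonal loading; the geometry and loading level are illustrative assumptions, not the paper's calibrated UAV manifold or reconstructed INCM:

```python
import numpy as np

# Steering vector for an n-element half-wavelength-spaced ULA (illustrative).
def steer(n, theta_deg):
    phase = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase * np.arange(n))

n, snapshots = 7, 200
rng = np.random.default_rng(2)
a_soi, a_int = steer(n, 0.0), steer(n, 30.0)

# Snapshots: SOI at broadside, a strong interferer at 30 deg, unit-power noise.
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
i = 10 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
noise = (rng.normal(size=(n, snapshots))
         + 1j * rng.normal(size=(n, snapshots))) / np.sqrt(2)
X = np.outer(a_soi, s) + np.outer(a_int, i) + noise

# Sample covariance with mild diagonal loading, then MVDR weights
# w = R^{-1} a / (a^H R^{-1} a), giving a distortionless response at the SOI.
R = X @ X.conj().T / snapshots
R += 1e-2 * np.trace(R).real / n * np.eye(n)
w = np.linalg.solve(R, a_soi)
w /= a_soi.conj() @ w
```

The distortionless constraint holds exactly after normalization, while the response toward the strong interferer is driven close to zero.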

11 pages, 899 KB  
Article
Quantum-Inspired Classical Convolutional Neural Network for Automated Bone Cancer Detection from X-Ray Images
by Naveen Joy, Sonet Daniel Thomas, Aparna Rajan, Lijin Varghese, Aswathi Balakrishnan, Amritha Thaikkad, Vidya Niranjan, Abhithaj Jayanandan and Rajesh Raju
Quantum Rep. 2026, 8(1), 19; https://doi.org/10.3390/quantum8010019 - 25 Feb 2026
Viewed by 314
Abstract
Accurate and early detection of bone cancer is critical for improving patient outcomes, yet conventional radiographic interpretation remains limited by subjectivity and variability. Conventional AI models often struggle with complex multi-modal noise distributions, non-convex and topologically entangled latent manifolds, extreme class imbalance in rare oncological conditions, and heterogeneous data fusion constraints. To address these challenges, we present a Quantum-Inspired Classical Convolutional Neural Network (QC-CNN) inspired by quantum analogies for automated bone cancer detection in radiographic images. The proposed architecture integrates classical convolutional layers for hierarchical feature extraction with a classical variational layer motivated by high-dimensional Hilbert space analogies for enhanced pattern discrimination. A curated and annotated dataset of bone X-ray images was utilized, partitioned into training, validation, and independent test cohorts. The QC-CNN was optimized using stochastic gradient descent (SGD) with adaptive learning rate scheduling, and regularization strategies were applied to mitigate overfitting. Quantitative evaluation demonstrated superior diagnostic performance, achieving high accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). Results highlight the ability of classical CNN with quantum-inspired design to capture non-linear correlations and subtle radiographic biomarkers that classical CNNs may overlook. This study establishes QC-CNN as a promising framework for quantum-analogy motivated medical image analysis, providing evidence of its utility in oncology and underscoring its potential for translation into clinical decision-support systems for early bone cancer diagnosis. All computations in the present study are performed using classical algorithms, with quantum-inspired concepts serving as a conceptual framework for model design and motivating future extensions.

54 pages, 2092 KB  
Article
Shared Autoencoder-Based Unified Intrusion Detection Across Heterogeneous Datasets for Binary and Multi-Class Classification Using a Hybrid CNN–DNN Model
by Hesham Kamal and Maggie Mashaly
Mach. Learn. Knowl. Extr. 2026, 8(2), 53; https://doi.org/10.3390/make8020053 - 22 Feb 2026
Viewed by 345
Abstract
As network environments become increasingly interconnected, ensuring robust cyber-security has become critical, particularly with the growing sophistication of modern cyber threats. Intrusion detection systems (IDSs) play a vital role in identifying and mitigating unauthorized or malicious activities; however, conventional machine learning-based IDSs often rely on handcrafted features and are limited in their ability to detect diverse attack types across disparate network domains. To address these limitations, this paper introduces a novel unified intrusion detection framework that implements “Structural Dualism” to integrate three heterogeneous benchmark datasets (CSE-CIC-IDS2018, NF-BoT-IoT-v2, and IoT-23) into a harmonized, protocol-agnostic representation. The framework employs a shared autoencoder architecture with dataset-specific projection layers to learn a unified latent manifold. This 15-dimensional space captures the underlying semantics of attack patterns (e.g., volumetric vs. signaling) across multiple domains, while dataset-specific decoders preserve reconstruction fidelity through alternating multi-domain training. To identify complex micro-signatures within this manifold, the framework utilizes a synergistic hybrid convolutional neural network–deep neural network (CNN–DNN) classifier, where the CNN extracts spatial latent patterns and the DNN performs global classification across twenty-five distinct classes. Class imbalance is addressed through resampling strategies such as adaptive synthetic sampling (ADASYN) and edited nearest neighbors (ENN). Experimental results demonstrate remarkable performance, achieving 99.76% accuracy for binary classification and 99.54% accuracy for multi-class classification on the merged dataset, with strong generalization confirmed on individual datasets. These findings indicate that the shared autoencoder-based CNN–DNN framework, through its unique feature alignment and spatial extraction capabilities, significantly strengthens intrusion detection across diverse and heterogeneous environments.

23 pages, 649 KB  
Article
Manifold Causal Conditional Deep Networks for Heterogeneous Treatment Effect Estimation and Policy Evaluation
by Jong-Min Kim
Mathematics 2026, 14(4), 738; https://doi.org/10.3390/math14040738 - 22 Feb 2026
Cited by 1 | Viewed by 301
Abstract
We present a comprehensive framework for estimating heterogeneous treatment effects and evaluating decision-making policies in high-dimensional settings. Our approach combines nonlinear manifold learning techniques—UMAP, t-SNE, and Isomap—with a Causal Conditional Deep Network (CCDN) to model complex nonlinear interactions among covariates, treatments, and outcomes. Within this framework, we assess five treatment assignment policies—Greedy, Thompson Sampling, Epsilon-Greedy, Random, and a novel LLM-guided Thompson policy—across simulated and real-world datasets, including Adult, Wine Quality, and Boston Housing. Empirical results reveal a fundamental trade-off: exploitative policies like Greedy minimize cumulative regret but underperform in recovering heterogeneous treatment effects, whereas exploratory policies, particularly Random and LLM-Thompson, achieve a lower Conditional Average Treatment Effect Root Mean Squared Error (CATE RMSE) by providing broader coverage of the action–covariate space. Notably, LLM-Thompson consistently delivers strong performance across noisy, real-world datasets, highlighting the advantage of uncertainty-aware exploration in capturing treatment heterogeneity. Overall, the framework demonstrates that integrating manifold-informed deep networks with principled exploration strategies enhances both policy optimization and individualized treatment effect estimation in high-dimensional, complex environments.
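The exploration/exploitation trade-off at the center of this abstract is easiest to see in the Epsilon-Greedy policy it evaluates. A toy bandit sketch, where the three "treatment" arms, their reward means, and epsilon are all invented for illustration:

```python
import numpy as np

# Epsilon-greedy over three hypothetical treatment arms: explore with
# probability eps, otherwise exploit the current value estimates.
rng = np.random.default_rng(4)
true_means = np.array([0.2, 0.5, 0.8])   # illustrative per-arm mean rewards
eps, n_steps = 0.1, 5000
counts = np.zeros(3)
values = np.zeros(3)                     # running mean reward per arm

for _ in range(n_steps):
    if rng.random() < eps:
        arm = int(rng.integers(3))       # explore: pick a random arm
    else:
        arm = int(np.argmax(values))     # exploit: pick the best estimate
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = int(np.argmax(values))
```

Even with mostly greedy choices, the occasional exploration steps sample every arm often enough that the estimates converge and the best arm is identified.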
20 pages, 4997 KB  
Article
A Data-Driven Reduced-Order Model for Rotary Kiln Temperature Field Prediction Using Autoencoder and TabPFN
by Ya Mao, Yuhang Li, Yanhui Lai and Fangshuo Fan
Appl. Sci. 2026, 16(4), 2029; https://doi.org/10.3390/app16042029 - 18 Feb 2026
Viewed by 258
Abstract
The accurate reconstruction of the internal temperature field in rotary kilns is critical for optimizing the clinker calcination process and ensuring energy efficiency. In this study, a rapid and high-fidelity surrogate modeling framework is proposed, utilizing snapshot ensembles generated by full-order Computational Fluid Dynamics (CFD) simulations to reconstruct the temperature field of the axial center section. The framework incorporates a symmetric Autoencoder (AE) coupled with a TabPFN network as its core components. Capitalizing on the kiln’s strong axial symmetry, this reduction–regression system efficiently maps the high-dimensional nonlinear thermodynamic topology of the central section into a compact low-dimensional latent manifold via AE, while utilizing TabPFN to establish a robust mapping between operating boundary conditions and these latent features. By leveraging the In-Context Learning (ICL) mechanism for prior-data fitting, TabPFN effectively overcomes the data scarcity inherent in high-cost CFD sampling. Predictive results demonstrate that the model achieves a coefficient of determination (R2) of 0.897 for latent feature regression, outperforming traditional algorithms by 6.53%. In terms of field reconstruction on the test set, the model yields an average temperature error of 15.31 K. Notably, 93.83% of the nodal errors are confined within a narrow range of 0–50 K, and the reconstructed distributions exhibit high consistency with the CFD benchmarks. Furthermore, compared to the hours required for full-scale simulations, the inference time is reduced to 0.45 s, representing a speedup of four orders of magnitude. Consequently, the predictive system demonstrates excellent accuracy and efficiency, serving as an effective substitute for traditional models to realize online monitoring and intelligent optimization. Full article
(This article belongs to the Special Issue Fuel Cell Technologies in Power Generation and Energy Recovery)
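The reduction–regression pattern in this abstract can be sketched on synthetic data. Assumptions: a linear autoencoder (PCA via SVD) stands in for the paper's symmetric AE, ordinary least squares stands in for TabPFN, and the "snapshots" are random fields driven by three boundary conditions rather than CFD output.

```python
# Reduction-regression surrogate sketch on synthetic "snapshots".
# PCA (SVD) stands in for the symmetric autoencoder; least squares
# stands in for TabPFN. Shapes and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_snapshots, n_nodes, n_latent = 80, 500, 4

# Synthetic high-dimensional fields driven by 3 boundary conditions.
bc = rng.uniform(size=(n_snapshots, 3))              # operating conditions
modes = rng.normal(size=(3, n_nodes))
fields = bc @ modes + 0.01 * rng.normal(size=(n_snapshots, n_nodes))

# 1) Reduction: compress fields onto a low-dimensional latent manifold.
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
def encode(X):                                       # AE encoder stand-in
    return (X - mean) @ Vt[:n_latent].T
def decode(Z):                                       # AE decoder stand-in
    return Z @ Vt[:n_latent] + mean
latent = encode(fields)

# 2) Regression: map boundary conditions -> latent features.
A = np.hstack([bc, np.ones((n_snapshots, 1))])       # affine design matrix
W, *_ = np.linalg.lstsq(A, latent, rcond=None)

# Fast surrogate prediction for a new operating condition.
bc_new = rng.uniform(size=(1, 3))
field_pred = decode(np.hstack([bc_new, np.ones((1, 1))]) @ W)
field_true = bc_new @ modes
err = float(np.abs(field_pred - field_true).mean())
print("mean abs error:", err)
```

The design point mirrors the abstract: once the offline SVD and regression fit are done, a prediction is two small matrix products, which is why such surrogates achieve orders-of-magnitude speedups over re-running the full-order simulation.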
22 pages, 4598 KB  
Article
Deep Learning Based Correction Algorithms for 3D Medical Reconstruction in Computed Tomography and Macroscopic Imaging
by Tomasz Les, Tomasz Markiewicz, Malgorzata Lorent, Miroslaw Dziekiewicz and Krzysztof Siwek
Appl. Sci. 2026, 16(4), 1954; https://doi.org/10.3390/app16041954 - 15 Feb 2026
Viewed by 379
Abstract
This paper introduces a hybrid two-stage registration framework for reconstructing three-dimensional (3D) kidney anatomy from macroscopic slices, using CT-derived models as the geometric reference standard. The approach addresses the data-scarcity and high-distortion challenges typical of macroscopic imaging, where fully learning-based registration (e.g., VoxelMorph) often fails to generalize due to limited training diversity and large nonrigid deformations that exceed the capture range of unconstrained convolutional filters. In the proposed pipeline, the Optimal Cross-section Matching (OCM) algorithm first performs constrained global alignment—translation, rotation, and uniform scaling—to establish anatomically consistent slice initialization. Next, a lightweight deep-learning refinement network, inspired by VoxelMorph, predicts residual local deformations between consecutive slices. The core novelty of this architecture lies in its hierarchical decomposition of the registration manifold: the OCM acts as a deterministic geometric anchor that neutralizes high-amplitude variance, thereby constraining the learning task to a low-dimensional residual manifold. This hybrid OCM + DL design integrates explicit geometric priors with the flexible learning capacity of neural networks, ensuring stable optimization and plausible deformation fields even with few training examples. Experiments on an original dataset of 40 kidneys demonstrated that the OCM + DL method achieved the highest registration accuracy across all evaluated metrics: NCC = 0.91, SSIM = 0.81, Dice = 0.90, IoU = 0.81, HD95 = 1.9 mm, and volumetric agreement DCVol = 0.89. Compared to single-stage baselines, this represents an average improvement of approximately 17% over DL-only and 14% over OCM-only, validating the synergistic contribution of the proposed hybrid strategy over standalone iterative or data-driven methods. The pipeline maintains physical calibration via Hough-based grid detection and employs Bézier-based contour smoothing for robust meshing and volume estimation. Although validated on kidney data, the proposed framework generalizes to other soft-tissue organs reconstructed from optical or photographic cross-sections. By decoupling interpretable global optimization from data-efficient deep refinement, the method advances the precision, reproducibility, and anatomical realism of multimodal 3D reconstructions for surgical planning, morphological assessment, and medical education. Full article
(This article belongs to the Special Issue Engineering Applications of Hybrid Artificial Intelligence Tools)
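The constrained global-alignment stage (translation, rotation, uniform scaling) can be sketched with a closed-form Procrustes fit on paired points. This is an assumption-laden stand-in for the paper's OCM, which works on cross-section images rather than landmarks; the residual local deformation would then be left to the learned refinement network.

```python
# Similarity-transform (translation + rotation + uniform scale) fit via
# Procrustes analysis, as a landmark-based stand-in for OCM-style global
# alignment. Point sets and the noise-free setup are assumptions.
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform mapping src points onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)          # cross-covariance SVD
    R = (U @ Vt).T                               # optimal rotation
    if np.linalg.det(R) < 0:                     # enforce a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    scale = sig.sum() / (S ** 2).sum()           # optimal uniform scale
    t = mu_d - scale * mu_s @ R.T                # translation
    return scale, R, t

rng = np.random.default_rng(2)
src = rng.normal(size=(50, 2))                   # slice contour landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.7 * src @ R_true.T + np.array([4.0, -2.0])

s, R, t = similarity_align(src, dst)
aligned = s * src @ R.T + t
residual = float(np.abs(aligned - dst).max())
print("scale:", s, "residual:", residual)
```

Because this global stage is deterministic and removes the high-amplitude pose and scale variance exactly, whatever deformation remains is small and local, which is the low-dimensional residual manifold the abstract says the network is then trained on.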