Search Results (3,065)

Search Parameters:
Keywords = autoencoders

26 pages, 755 KB  
Article
A Stage-Wise Framework Using Class-Incremental Learning for Unknown DoS Attack Detection
by Juncheng Ge, Yaokai Feng and Kouichi Sakurai
Future Internet 2026, 18(3), 145; https://doi.org/10.3390/fi18030145 - 12 Mar 2026
Abstract
Denial-of-Service (DoS) attacks remain one of the most dangerous threats in modern Internet environments. They aim to overwhelm networks, servers, or online services with massive volumes of traffic, and maintaining service availability is a core pillar of cybersecurity. More importantly, DoS attack techniques continue to evolve. However, traditional intrusion detection systems (IDS) trained on fixed attack categories struggle to identify previously unknown DoS attack types and cannot dynamically incorporate newly emerging classes. To address this challenge, this study proposes a stage-wise network intrusion detection framework that integrates unknown attack detection, attack discovery, and class-incremental learning into a unified pipeline. The framework consists of three stages. First, an autoencoder-based anomaly detection approach is used to separate potential unknown DoS attack samples from known classes. Second, a clustering-and-merging strategy is applied to the detected unknown DoS samples to discover emerging attack clusters with similar structural characteristics. Third, the classifier architecture is expanded for each newly discovered cluster through a class-incremental learning mechanism, enabling the continual incorporation of new attack classes while maintaining stable detection performance on previously learned classes. Experimental results on the DoS category of the NSL-KDD dataset demonstrate that the proposed stage-wise framework can effectively isolate samples of unknown DoS attacks, accurately aggregate emerging attack clusters, and incrementally integrate newly discovered attack classes without significantly degrading recognition performance on previously learned classes. These results confirm the capability of the proposed framework to handle progressively emerging unknown DoS attacks. Full article
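The first stage described in the abstract above flags unknown-attack candidates by autoencoder reconstruction error. As a minimal illustration of that thresholding idea — not the authors' model — the sketch below uses a linear autoencoder (equivalent to PCA) and a percentile cutoff on training error; the synthetic data and the 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """A linear autoencoder is equivalent to PCA: encode by projecting
    onto the top-k principal directions, decode by back-projecting."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # (d, k) tied encoder/decoder weights
    return mu, W

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W                  # encode
    X_hat = Z @ W.T + mu              # decode
    return ((X - X_hat) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
# "known" traffic features lie near a 2-D subspace of a 10-D feature space
known = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
known += 0.05 * rng.normal(size=known.shape)
mu, W = fit_linear_autoencoder(known, k=2)

# threshold: 99th percentile of reconstruction error on known traffic
tau = np.percentile(reconstruction_error(known, mu, W), 99)

# off-subspace samples stand in for unknown-attack traffic: they
# reconstruct poorly and exceed the threshold
unknown = 3.0 * rng.normal(size=(50, 10))
flags = reconstruction_error(unknown, mu, W) > tau
print(flags.mean())
```

Samples flagged this way would then feed the clustering stage; a trained nonlinear autoencoder replaces the SVD step in practice.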

29 pages, 4988 KB  
Article
MARU-MTL: A Mamba-Enhanced Multi-Task Learning Framework for Continuous Blood Pressure Estimation Using Radar Pulse Waves
by Jinke Xie, Juhua Huang, Chongnan Xu, Hongtao Wan, Xuetao Zuo and Guanfang Dong
Bioengineering 2026, 13(3), 320; https://doi.org/10.3390/bioengineering13030320 - 11 Mar 2026
Abstract
Continuous blood pressure (BP) monitoring is essential for the prevention and management of cardiovascular diseases. Traditional cuff-based methods cause discomfort during repeated measurements, and wearable sensors require direct skin contact, limiting their applicability. Radar-based contactless BP measurement has emerged as a promising alternative. However, radar pulse wave (RPW) signals are susceptible to motion artifacts, respiratory interference, and environmental clutter, posing persistent challenges to estimation accuracy and robustness. In this paper, we propose MARU-MTL, a Mamba-enhanced multi-task learning framework for continuous BP estimation using a single millimeter-wave radar sensor. To address signal quality degradation, a Variational Autoencoder-based Signal Quality Index (VAE-SQI) mechanism is proposed to automatically screen RPW segments without manual annotation. To capture long-range temporal dependencies across cardiac cycles, we integrate a Bidirectional Mamba module into the bottleneck of a U-Net backbone, enabling linear-time sequence modeling with respect to the segment length. We also introduce a multi-task learning strategy that couples BP regression with arterial blood pressure waveform reconstruction to strengthen physiological consistency. Extensive experiments on two datasets comprising 55 subjects demonstrate that MARU-MTL achieves mean absolute errors of 3.87 mmHg and 2.93 mmHg for systolic and diastolic BP, respectively, meeting commonly used AAMI error thresholds and achieving metrics comparable to BHS Grade A. Full article
(This article belongs to the Special Issue Contactless Technologies for Patient Health Monitoring)

35 pages, 7787 KB  
Article
LLM-ROM: A Novel Framework for Efficient Spatiotemporal Prediction of Urban Pollutant Dispersion
by Pin Wu, Zhiyi Qin and Yiguo Yang
AI 2026, 7(3), 104; https://doi.org/10.3390/ai7030104 - 11 Mar 2026
Abstract
Deep learning-based flow field prediction for microclimate pollutant dispersion represents an emerging and promising methodology, where effectively integrating meteorological, spatial, and temporal information remains a critical challenge. To address this, we propose a novel non-intrusive reduced-order model (ROM) that synergizes a Dilated Convolutional Autoencoder (DCAE) with pre-trained large language models (LLMs). The DCAE, leveraging nonlinear mapping, was employed for extracting low-dimensional spatiotemporal flow field features. These features were then combined with textual prototypes via text embedding to enable few-shot inference using the LLM-based flow field prediction method. To optimize the utilization of pre-trained LLMs, we designed a specialized textual description template tailored for pollutant dispersion data, which enhances the contextual input of meteorological conditions to guide model predictions. Experimental validation through three-dimensional urban canyon simulations conclusively demonstrated the efficacy of the convolutional autoencoder and LLM-based framework in predicting pollutant dispersion flow fields. The proposed method exhibits remarkable transfer learning capabilities across varying street canyon geometries and meteorological conditions while achieving a 9.85× acceleration in prediction compared to Computational Fluid Dynamics (CFD). Full article

26 pages, 1169 KB  
Article
HyAR-PPO: Hybrid Action Representation Learning for Incentive-Driven Task Offloading in Vehicular Edge Computing
by Wentao Wang, Mingmeng Li and Honghai Wu
Sensors 2026, 26(6), 1743; https://doi.org/10.3390/s26061743 - 10 Mar 2026
Abstract
Vehicular Edge Computing (VEC) can effectively guarantee the service experience of user vehicles, but resource-limited Roadside Units (RSUs) may face insufficient computing capacity during task peak periods. Utilizing Assisting Vehicles (AVs) with idle resources to share computing power can alleviate the pressure on RSUs. However, existing studies often fail to adequately incentivize selfish assisting vehicles to contribute resources and frequently lack a global optimization perspective from the overall system welfare. To address these challenges, this paper proposes an incentive-driven utility-balanced task offloading framework that aims to maximize social welfare while jointly optimizing resource allocation and profit pricing. Specifically, we first formulate the resource allocation as a Mixed-Integer Nonlinear Programming (MINLP) problem. To solve this problem, we introduce hybrid action representation learning to VEC for the first time and propose the HyAR-PPO algorithm to jointly optimize discrete offloading decisions and continuous resource allocation. This algorithm maps heterogeneous hybrid actions to a unified latent representation space through a Variational Autoencoder for the solution. Subsequently, equilibrium prices among user vehicles, Computation Service Providers (CSPs), and assisting vehicles are determined through Nash bargaining games, satisfying individual rationality constraints and achieving Pareto-optimal fair profit distribution. Experimental results demonstrate that the proposed framework can effectively coordinate multi-party interests. Compared with mainstream methods, the approach based on hybrid action representation learning achieves a significant improvement in social welfare, with its advantages being more pronounced in medium-to-large-scale scenarios. Full article
(This article belongs to the Special Issue Edge Computing for Resource Sharing and Sensing in IoT Systems)

19 pages, 1253 KB  
Article
SFE-GAT: Structure-Feature Evolution Graph Attention Network for Motor Imagery Decoding
by Xin Gao, Guohua Cao and Guoqing Ma
Sensors 2026, 26(5), 1730; https://doi.org/10.3390/s26051730 - 9 Mar 2026
Abstract
Motor imagery EEG decoding often relies on static functional connectivity graphs that cannot capture the dynamic, stage-wise reorganization of brain networks during tasks. This paper aims to develop a graph neural network that explicitly simulates this neurodynamic process to improve decoding and provide computational insights. This paper proposes a Structure-Feature Evolution Graph Attention Network (SFE-GAT). Its inter-layer evolution mechanism dynamically co-adapts graph topology and node features, mimicking functional network reorganization. Initialized with phase-locking value connectivity and spectral features, the model uses a graph autoencoder with Monte Carlo sampling to iteratively refine edges and embeddings. On the BCI Competition IV-2a dataset, SFE-GAT achieved 77.70% (subject-dependent) and 66.59% (subject-independent) accuracy, outperforming baselines. Evolved graphs showed sparsification and strengthening of task-critical connections, indicating hierarchical processing. This paper advances EEG decoding through a dynamic graph architecture, providing a computational framework for studying the hierarchical organization of motor cortex activity and linking adaptive graph learning with neural dynamics. Full article
(This article belongs to the Section Sensing and Imaging)

31 pages, 5554 KB  
Article
Process–Design Co-Optimisation of Laser Powder Bed Fusion Titanium Gyroid Lattices via Deep Learning
by Alexander Dawes, Ali Abdelhafeez Hassan, Hany Hassanin and Khamis Essa
J. Manuf. Mater. Process. 2026, 10(3), 92; https://doi.org/10.3390/jmmp10030092 - 9 Mar 2026
Abstract
Laser powder bed fusion (LPBF) enables controlled gyroid lattices, but mapping both process and design to performance remains challenging when datasets are small and interactions are non-linear. In this study, data-driven models that link energy density and lattice geometry to Young’s modulus and yield strength were established for sheet and network gyroid architectures. To stabilise small-data learning, stacked-autoencoder pre-training was benchmarked against greedy layer-wise pre-training. Compression characterisation data at under-represented energy-density conditions were added to fill data gaps and validate predictions. The models support property-driven design in which given modulus and yield strength targets inform a method that returns feasible combinations of laser powder bed fusion settings and gyroid density and size. Pre-trained models reduced error and captured the relationship between stiffness and density and between strength and density, with yield strength prediction errors of 3.51% for sheet architectures and 8.76% for network architectures. Young’s modulus showed a higher variability that is consistent with sensitivities in LPBF such as surface roughness and thin walls. This work contributes an artificial intelligence method for manufacturing datasets using stacked autoencoder pre-training with fine-tuning, and an inverse-design workflow that maps energy density and gyroid geometry to Young’s modulus and yield strength in titanium lattices. Full article
(This article belongs to the Special Issue Digital Twinning for Manufacturing)

82 pages, 28674 KB  
Article
Representation Learning for Maritime Vessel Behaviour: A Three-Stage Pipeline for Robust Trajectory Embeddings
by Ghassan Al-Falouji, Shang Gao, Zhixin Huang, Ben Biesenbach, Peer Kröger, Bernhard Sick and Sven Tomforde
J. Mar. Sci. Eng. 2026, 14(5), 507; https://doi.org/10.3390/jmse14050507 - 8 Mar 2026
Abstract
The growing complexity of maritime navigation creates safety challenges that drive the shift toward autonomous systems. Maritime vessel behaviour modelling is critical for safe and efficient autonomous operations. Representation learning offers a systematic approach to learn feature embeddings encoding vessel behaviour for improved situational awareness and decision-making. We introduce a three-stage representation learning pipeline evaluating six architectures on real-world AIS trajectories. Grouped Masked Autoencoder (GMAE)-Risk Extrapolation (REx) combines group-wise masked autoencoding at the semantic feature level with risk extrapolation regularisation, forcing encoders to learn cross-group dependencies between temporal, kinematic, spatial, and interaction features. DAE and EAE provide robust and uncertainty-aware baselines. Evaluation uses a dual-pipeline framework on two years of Kiel Fjord AIS data (176,787 trajectories, 527,225 segments). Pipeline 1 applies three-stage representation learning using vessel-type classification as encoder selection probe. GMAE-REx achieves 86.03% validation accuracy, outperforming DAE (85.63%), EAE (85.56%), and baselines Transformer (84.93%), TCN (76.27%), LiST (85.12%). Pipeline 2 applies unsupervised clustering to discover intrinsic behavioural structure. Learnt representations consistently outperform expert features on DBCV, conductance, and modularity metrics, organising trajectories by operational context rather than vessel type. This behaviour-oriented organisation enables cross-vessel knowledge transfer for autonomous navigation, VTS monitoring, and safety analysis. Full article
(This article belongs to the Special Issue Intelligent Solutions for Marine Operations)

10 pages, 869 KB  
Article
A Comparative Study of Embedding Methods for Clustering Mathematical Functions
by Hasan Aljabbouli and Ahmad B. Alkhodre
Information 2026, 17(3), 265; https://doi.org/10.3390/info17030265 - 6 Mar 2026
Abstract
This paper presents a comprehensive comparative study of deep learning approaches for clustering mathematical functions based on their behavioral patterns. We investigate three distinct embedding strategies: autoencoder-based unsupervised learning, supervised classification with learned embeddings, and direct convolutional feature extraction. Each approach transforms continuous functions into meaningful vector representations that capture essential mathematical characteristics. Through extensive experimentation and multiple visualization techniques, we demonstrate that supervised learning with explicit function type guidance produces the most discriminative embeddings, achieving an average silhouette score of 0.6. Our findings provide valuable insights into the relative effectiveness of different representation learning paradigms for mathematical function analysis. Full article
(This article belongs to the Section Artificial Intelligence)
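The silhouette score that the study above uses to compare embedding strategies can be computed directly from pairwise distances. The sketch below uses synthetic 2-D embeddings (not the paper's function data) to show a well-separated embedding scoring high and an overlapping one scoring low; the cluster geometry is an illustrative assumption.

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette coefficient: for each point, a = mean distance to
    its own cluster, b = mean distance to the nearest other cluster,
    and the per-point score is (b - a) / max(a, b)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    clusters = np.unique(labels)
    scores = []
    for i, c in enumerate(labels):
        same = labels == c
        a = D[i, same].sum() / max(same.sum() - 1, 1)
        b = min(D[i, labels == o].mean() for o in clusters if o != c)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 30)              # three function "types"
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
noise = rng.normal(size=(90, 2))

discriminative = centers[labels] + 0.3 * noise  # well-separated embedding
entangled = 0.3 * centers[labels] + noise       # overlapping embedding

print(silhouette_score(discriminative, labels))
print(silhouette_score(entangled, labels))
```

Scores near 1 indicate tight, well-separated clusters; scores near 0 indicate overlapping ones, which is how an average of 0.6 can rank one embedding strategy above another.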

36 pages, 2033 KB  
Review
Artificial Intelligence-Driven Discovery and Optimization of Antimicrobial Peptides Targeting ESKAPE Pathogens and Multidrug-Resistant Fungi
by Calina Wu-Mo, Ariana Flores-González, Jezrael Meléndez-Delgado, Valerie Ortiz-Gómez, Héctor Meléndez-González and Rafael Maldonado-Hernández
Microorganisms 2026, 14(3), 591; https://doi.org/10.3390/microorganisms14030591 - 6 Mar 2026
Abstract
Antimicrobial resistance (AMR) poses an escalating global health crisis driven by multidrug-resistant ESKAPE pathogens and emerging fungal threats such as Candida auris (C. auris). In response to this urgent need for new therapeutic strategies, antimicrobial peptides (AMPs) represent a mechanistically distinct alternative to conventional antibiotics due to their membrane-targeting mechanisms and a reduced propensity for resistance development; however, clinical translation has been hindered by toxicity, instability and manufacturing constraints. Recent advances in artificial intelligence (AI) are reshaping AMP discovery and optimization. Machine learning (ML), deep learning (DL) and transformer-based protein language models now enable improved prediction of antimicrobial activity, selectivity, protease stability and host toxicity. Generative approaches, including variational autoencoders, diffusion models and reinforcement learning, facilitate de novo multi-objective peptide design and pathogen-directed optimization against resistant bacteria and multidrug-resistant fungal pathogens. Integrated design–test–learn pipelines are accelerating iterative peptide engineering by tightly coupling computational prediction with experimental validation. Clinically used peptide-derived antibiotics such as polymyxins and daptomycin demonstrate the therapeutic feasibility of peptide-based antimicrobials, while investigational peptides, including pexiganan, illustrate ongoing translational progress. Although no fully AI-designed AMP has yet achieved regulatory approval, the accelerating convergence of computational modeling and experimental validation suggests a rapidly evolving translational landscape. Advancing scalable, surveillance-informed AI frameworks that integrate resistance data, predictive safety modeling and delivery optimization will be essential to accelerate the clinical translation of next-generation, multi-objective AMPs against high-risk resistant pathogens. 

26 pages, 46386 KB  
Article
Predicting Car-Engine Manufacturing Quality with Multi-Sensor Data of Manufacturing Assembly Process
by Xinyu Yang, Qianxi Zhang, Junjie Bao, Xue Wang, Nengchao Wu, Qing Tao, Haijia Wu and Li Liu
Sensors 2026, 26(5), 1651; https://doi.org/10.3390/s26051651 - 5 Mar 2026
Abstract
Car engine quality control is fundamentally hindered by extremely high-dimensional, noisy, and imbalanced multi-sensor data. To overcome these challenges, this paper proposes an edge-deployable diagnostic and predictive framework. First, a Sparse Autoencoder (SAE) maps over 12,000 distributed manufacturing parameters into a robust latent space to filter instrumentation noise. Second, for defect classification, a Class-Specific Weighted Ensemble (CSWE) tackles extreme class imbalance by aggressively penalizing majority-class bias, improving defect interception recall by 7.72%. Third, for transient performance tracking, an Adaptive Regime-Switching Regression (ARSR) replaces manual phase selection with unsupervised regime routing to dynamically weight local experts, reducing relative prediction error by 12%. Rigorously validated across three diverse public datasets (NASA C-MAPSS, AI4I, SECOM) and a physical H4 engine assembly line, the framework achieves an ultra-low inference latency of 80±3 ms, practically reducing the engine rework rate by 7.2%. Full article
(This article belongs to the Section Industrial Sensors)
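The Class-Specific Weighted Ensemble above counters extreme imbalance by penalizing majority-class bias. As a hedged sketch of the underlying reweighting idea — a single weighted logistic regression on synthetic data, not the authors' ensemble, with an illustrative minority weight of 50 — class weighting raises minority-class recall:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 3000, 5
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.02).astype(float)    # ~2% minority "defect" class
X[y == 1] += 1.5                            # defects shifted but overlapping

def train_logreg(X, y, minority_weight, steps=800, lr=0.3):
    """Full-batch gradient descent on class-weighted logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    sw = np.where(y == 1, minority_weight, 1.0)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = sw * (p - y)                    # weighted residuals
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def minority_recall(w, b):
    pred = (X @ w + b) > 0
    return float(pred[y == 1].mean())

plain_recall = minority_recall(*train_logreg(X, y, minority_weight=1.0))
weighted_recall = minority_recall(*train_logreg(X, y, minority_weight=50.0))
print(plain_recall, weighted_recall)
```

The unweighted model favors the majority class and misses many defects; up-weighting minority errors shifts the decision boundary and intercepts more of them, at the cost of more false alarms.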

23 pages, 3612 KB  
Article
A Security Framework for Resilient Smart Grids Based on Self-Organizing Graph Neural Cellular Automata
by Rongxu Hou, Yiying Zhang, Siwei Li, Yeshen He and Pizhen Zhang
Algorithms 2026, 19(3), 195; https://doi.org/10.3390/a19030195 - 5 Mar 2026
Abstract
As smart grids evolve into complex cyber-physical systems, conventional static defenses struggle to address time-varying topologies and Advanced Persistent Threats (APTs). We propose the Security Framework for Resilient Smart Grids based on Self-Organizing Graph Neural Cellular Automata (SG-GNC). Specifically, a Neural Homeostatic Embedding (NHE) mechanism utilizes variational graph autoencoders to construct a continuous health manifold for unsupervised anomaly detection, while a Neural Cellular Automata (NCA) engine employs shared-weight local rules to empower nodes with decentralized self-healing capabilities. Finally, a Generative Adversarial Immunity (GAI) strategy facilitates active defense co-evolution, enhancing robustness against zero-day attacks. Experimental results on the IEEE 118 and 300-bus systems demonstrate an average detection accuracy of 98.23%, significantly outperforming benchmarks. In scenarios involving dynamic topology and zero-day attacks, the framework maintains over 96% accuracy with an inference latency of only 9.45 ms. These findings validate the capability of SG-GNC to provide resilient, endogenous defense in complex heterogeneous environments. Full article

17 pages, 1851 KB  
Article
Spatio-Temporal Graph Neural Networks for Anomaly Detection in Complex Industrial Processes
by Shutian Zhao, Hang Zhang, Bei Sun and Yijun Wang
Sensors 2026, 26(5), 1597; https://doi.org/10.3390/s26051597 - 4 Mar 2026
Abstract
With the advancement of intelligent manufacturing strategies, Cyber–Physical Production Systems (CPPSs) generate massive amounts of multidimensional, dynamic, and non-stationary data, posing significant challenges to real-time Process Monitoring. Existing anomaly detection methods often suffer from insufficient feature robustness when dealing with complex spatio-temporal dynamics, high computational complexity, and difficulties in effectively capturing incipient faults within deep topological structures. To address these issues, this paper proposes a Spatio-Temporal Variational Graph Statistical Attention Autoencoder (ST-VGSAE). First, the framework performs end-to-end multi-scale temporal decomposition via an Adaptive Lifting Wavelet Module, which enhances feature robustness while effectively suppressing noise. Furthermore, a spatio-temporal Token statistical self-attention mechanism with linear complexity is incorporated. By modulating local features via global statistics, it significantly reduces computational costs while enhancing anomaly discriminability. Experiments on the Tennessee Eastman (TE) process dataset demonstrate that the proposed model significantly outperforms state-of-the-art methods in key metrics such as the Fault Detection Rate and the False Alarm Rate, exhibiting superior noise robustness and real-time performance. Full article
(This article belongs to the Special Issue Advanced Sensing Technologies in Industrial Defect Detection)

33 pages, 5521 KB  
Article
Contrast-Free Myocardial Infarction Segmentation with Attention U-Net
by Khaled Ali Deeb, Yasmeen Alshelle, Hala Hammoud, Andrey Briko, Vladislava Kapravchuk, Alexey Tikhomirov, Amaliya Latypova and Ahmad Hammoud
Diagnostics 2026, 16(5), 768; https://doi.org/10.3390/diagnostics16050768 - 4 Mar 2026
Abstract
Background: Cardiovascular magnetic resonance (CMR) is the clinical gold standard for assessing cardiac anatomy and function. However, the manual segmentation of cardiac structures and myocardial infarction (MI) is time-consuming, prone to inter-observer variability, and often depends on contrast-enhanced imaging. Although deep learning (DL) has enabled substantial automation, challenges remain in generalizability, particularly for MI detection from non-contrast cine CMR. Objective: This study proposes a comprehensive DL-based framework for automatic segmentation of cardiac structures and myocardial infarction using contrast-free cine CMR. Methods: The framework integrates multiple convolutional neural network (CNN) architectures for cardiac structure segmentation with an attention-based deep learning model for MI localization. Post-processing refinement using stacked autoencoders and active contour modeling is applied to improve anatomical consistency. Segmentation performance is evaluated using overlap-based and boundary-based metrics, including the Dice Similarity Coefficient (DSC), Mean Contour Distance (MCD), and Hausdorff Distance (HD). Results: The best-performing model achieved Dice scores of 0.93 ± 0.05 for the left ventricular (LV) cavity, 0.89 ± 0.04 for the LV myocardium, and 0.91 ± 0.06 for the right ventricular (RV) cavity, with consistently low boundary errors across all structures. Myocardial infarction segmentation achieved a Dice score of 0.80 ± 0.02 with high recall, demonstrating reliable infarct localization without the use of contrast agents. Conclusions: By enabling accurate cardiac structure and myocardial infarction segmentation from contrast-free cine CMR, the proposed framework supports broader clinical applicability, particularly for patients with contraindications to gadolinium-based contrast agents and in emergency or resource-limited settings. This approach facilitates scalable, contrast-independent cardiac assessment. Full article
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2026)

23 pages, 1094 KB  
Article
Exploring the Limits of Probes for Latent Representation Edits in GPT Models
by Austin L. Davis, Robinson Vasquez Ferrer and Gita Sukthankar
AI 2026, 7(3), 92; https://doi.org/10.3390/ai7030092 - 4 Mar 2026
Abstract
This article evaluates the use of probing classifiers to modify the internal hidden state of a chess-playing transformer, which has been trained on sequences of chess moves and can generate new moves when prompted. Probing classifiers are a technique for understanding and modifying the operation of neural networks in which a smaller classifier is trained to use the model’s internal representation to learn a probing task. The aim of this research is to discover whether the learned model possesses an editable internal representation of the chess game, despite being trained without explicit information about the rules of chess. We contrast the performance of standard linear probes against Sparse Autoencoders (SAEs), a latent space interpretability technique designed to decompose polysemantic concepts into atomic features via an overcomplete basis. Our experiments demonstrate that linear probes trained directly on the residual stream significantly outperform probes based on SAE latents. When quantifying the success of interventions via the probability of legal moves, linear probe edits achieved an 88% success rate, whereas SAE-based edits yielded only 41%. These findings suggest that while SAEs are valuable for specific interpretability tasks, they do not enhance the controllability of hidden states compared to raw vectors. Finally, we show that the residual stream respects the Markovian property of chess, validating the feasibility of applying consistent edits across different time steps for the same board state. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
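Editing hidden states with a linear probe, as evaluated above, amounts to training a classifier on activations and then shifting activations along the learned direction. The toy sketch below uses a synthetic "residual stream" with a linearly encoded binary feature — not a real GPT — and the edit magnitude of 6.0 is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32
w_true = rng.normal(size=d)                # ground-truth encoding direction
H = rng.normal(size=(2000, d))             # synthetic "residual stream" states
y = (H @ w_true > 0).astype(float)         # binary feature of the state

# linear probe: logistic regression on the raw activations
w = np.zeros(d)
for _ in range(300):
    p = 1 / (1 + np.exp(-(H @ w)))
    w -= 0.1 * H.T @ (p - y) / len(y)

probe_acc = float(((H @ w > 0) == (y == 1)).mean())

# "edit": push feature-negative states along the probe direction
direction = w / np.linalg.norm(w)
h_neg = H[y == 0][:20]
h_edit = h_neg + 6.0 * direction
flip_rate = float((h_edit @ w_true > 0).mean())  # did the true feature flip?
print(probe_acc, flip_rate)
```

Because the probe direction aligns with the true encoding direction, an additive shift flips the encoded feature — the same mechanism the article scores via the probability of legal moves after intervention.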

25 pages, 1853 KB  
Article
Deep Learning for Process Monitoring and Defect Detection of Laser-Based Powder Bed Fusion of Polymers
by Mohammadali Vaezi, Victor Klamert and Mugdim Bublin
Polymers 2026, 18(5), 629; https://doi.org/10.3390/polym18050629 - 3 Mar 2026
Abstract
Maintaining consistent part quality remains a critical challenge in industrial additive manufacturing, particularly in laser-based powder bed fusion of polymers (PBF-LB/P), where crystallization-driven thermal instabilities, governed by isothermal crystallization within a narrow sintering window, precipitate defects such as curling, warping, and delamination. In contrast to metal-based systems dominated by melt-pool hydrodynamics, polymer PBF-LB/P requires monitoring strategies capable of resolving subtle spatio-temporal thermal deviations under realistic industrial operating conditions. Although machine learning, particularly convolutional neural networks (CNNs), has demonstrated efficacy in defect detection, a structured evaluation of heterogeneous modeling paradigms and their deployment feasibility in polymer PBF-LB/P remains limited. This study presents a systematic cross-paradigm assessment of unsupervised anomaly detection (autoencoders and generative adversarial networks), supervised CNN classifiers (VGG-16, ResNet50, and Xception), hybrid CNN-LSTM architectures, and physics-informed neural networks (PINNs) using 76,450 synchronized thermal and RGB images acquired from a commercial industrial system operating under closed control constraints. CNN-based models enable frame- and sequence-level defect classification, whereas the PINN component complements detection by providing physically consistent thermal-field regression. The results reveal quantifiable trade-offs between detection performance, temporal robustness, physical consistency, and algorithmic complexity. Pre-trained CNNs achieve up to 99.09% frame-level accuracy but impose a substantial computational burden for edge deployment. The PINN model attains an RMSE of approximately 27 K under quasi-isothermal process conditions, supporting trend-level thermal monitoring. A lightweight hybrid CNN achieves 99.7% validation accuracy with 1860 parameters and a CPU-benchmarked forward-pass inference time of 1.6 ms (excluding sensor acquisition latency). Collectively, this study establishes a rigorously benchmarked, scalable, and resource-efficient deep-learning framework tailored to crystallization-dominated polymer PBF-LB/P, providing a technically grounded basis for real-time industrial quality monitoring. Full article
(This article belongs to the Special Issue Artificial Intelligence in Polymers)
