Search Results (4,237)

Search Parameters:
Keywords = neural information processing

17 pages, 3661 KB  
Article
Wavefront Prediction for Adaptive Optics Without Wavefront Sensing Based on EfficientNetV2-S
by Zhiguang Zhang, Zelu Huang, Jiawei Wu, Zhaojun Yan, Xin Li, Chang Liu and Huizhen Yang
Photonics 2026, 13(2), 144; https://doi.org/10.3390/photonics13020144 - 2 Feb 2026
Abstract
Adaptive optics (AO) aims to counteract wavefront distortions caused by atmospheric turbulence and inherent system errors. Aberration recovery accuracy and computational speed play crucial roles in its correction capability. To address the slow detection speed and low measurement accuracy of current wavefront sensorless adaptive optics, this paper proposes a wavefront correction method based on the EfficientNetV2-S model. The method uses paired focal-plane and defocused-plane intensity images to extract intensity features directly and reconstruct phase information in a non-iterative manner. This approach enables the direct prediction of wavefront Zernike coefficients (orders 3 to 35) from the measured intensity images, significantly enhancing the real-time correction capability of the AO system. Simulation results show that the root mean square errors (RMSEs) of the predicted Zernike coefficients for D/r0 values of 5, 10, and 15 are 0.038λ, 0.071λ, and 0.111λ, respectively, outperforming conventional convolutional neural network (CNN), ResNet50/101, and ConvNeXt-T models. The experimental results demonstrate that the EfficientNetV2-S model maintains good wavefront reconstruction and prediction capabilities at D/r0 = 5 and 10, highlighting its high precision and robust wavefront prediction ability. Compared to traditional iterative algorithms, the proposed method offers high precision and fast computation, requires no iteration, and avoids local minima when processing wavefront aberrations.
(This article belongs to the Special Issue Adaptive Optics: Recent Technological Breakthroughs and Applications)
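As a rough illustration of the non-iterative prediction step, the sketch below regresses 33 Zernike coefficients (orders 3 to 35) from a two-channel stack of focal and defocused intensity images using torchvision's EfficientNetV2-S backbone. The modified stem, head size, and input resolution are assumptions made for illustration, not the authors' implementation.

```python
# Sketch: non-iterative Zernike-coefficient regression from paired intensity
# images (assumptions: torchvision backbone, 2-channel input = focal plane +
# defocused plane, 33 outputs for Zernike orders 3-35).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

model = efficientnet_v2_s(weights=None)
# Replace the 3-channel RGB stem with a 2-channel stem (focal + defocus).
model.features[0][0] = nn.Conv2d(2, 24, kernel_size=3, stride=2, padding=1, bias=False)
# Regress 33 Zernike coefficients instead of 1000 ImageNet classes.
model.classifier[1] = nn.Linear(1280, 33)

pair = torch.randn(1, 2, 224, 224)   # [batch, planes, H, W]
zernike_coeffs = model(pair)         # one forward pass, no iteration
print(zernike_coeffs.shape)          # torch.Size([1, 33])
```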

34 pages, 2320 KB  
Article
Research on a Computing First Network Based on Deep Reinforcement Learning
by Qianwen Xu, Jingchao Wang, Shuangyin Ren, Zhongbo Li and Wei Gao
Electronics 2026, 15(3), 638; https://doi.org/10.3390/electronics15030638 - 2 Feb 2026
Abstract
The joint optimization of computing resources and network routing constitutes a central challenge in Computing First Networks (CFNs). However, existing research has predominantly focused on computation offloading decisions, whereas the cooperative optimization of computing power and network routing remains underexplored. This study therefore investigates the joint routing optimization problem within the CFN framework. We first propose a computing resource scheduling architecture for CFNs, termed SICRSA, which integrates Software-Defined Networking (SDN) and Information-Centric Networking (ICN). Building upon this architecture, we introduce an ICN-based hierarchical naming scheme for computing services, design a computing service request packet format that extends the IP header, and detail the corresponding service request identification process and workflow. Furthermore, we propose Computing-Aware Routing via Graph and Long-term Dependency Learning (CRGLD), a routing optimization algorithm based on Graph Neural Networks (GNNs) and Long Short-Term Memory (LSTM) networks, within the SICRSA framework to address the computing-aware routing (CAR) problem. The algorithm incorporates a decision-making framework grounded in spatiotemporal feature learning, enabling the joint, coordinated selection of computing nodes and transmission paths. Simulation experiments on real-world network topologies demonstrate that CRGLD enhances both the quality of service and the intelligence of routing decisions in dynamic network environments. Moreover, CRGLD generalizes well to unfamiliar topologies and topological changes, mitigating the poor generalization typical of traditional Deep Reinforcement Learning (DRL)-based routing models in dynamic settings.

39 pages, 3699 KB  
Article
Enhancing Decision Intelligence Using Hybrid Machine Learning Framework with Linear Programming for Enterprise Project Selection and Portfolio Optimization
by Abdullah, Nida Hafeez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz, Rolando Quintero Téllez, Eponon Anvi Alex, Grigori Sidorov and Alexander Gelbukh
AI 2026, 7(2), 52; https://doi.org/10.3390/ai7020052 - 1 Feb 2026
Abstract
This study presents a hybrid analytical framework that enhances project selection by achieving reasonable predictive accuracy through the integration of expert judgment and modern artificial intelligence (AI) techniques. Using an enterprise-level dataset of 10,000 completed software projects with verified real-world statistical characteristics, we develop a three-step architecture for intelligent decision support. First, we introduce an extended Analytic Hierarchy Process (AHP) that incorporates organizational learning patterns to compute expert-validated criteria weights with a consistent level of reliability (CR = 0.04), and Linear Programming is used for portfolio optimization. Second, we propose a machine learning architecture that integrates expert knowledge derived from AHP into models such as Transformers, TabNet, and Neural Oblivious Decision Ensembles through mechanisms including attention modulation, split criterion weighting, and differentiable tree regularization. Third, the hybrid AHP-Stacking classifier generates a meta-ensemble that adaptively balances expert-derived information with data-driven patterns. The analysis shows that the model achieves 97.5% accuracy, a 96.9% F1-score, and a 0.989 AUC-ROC, representing a 25% improvement over baseline methods. The framework also indicates a projected 68.2% improvement in portfolio value (estimated incremental value of USD 83.5 M) based on post factum financial results from the enterprise's ventures. This study is evaluated retrospectively using data from a single enterprise, and while the results demonstrate strong robustness, generalizability to other organizational contexts requires further validation. This research contributes a structured approach to hybrid intelligent systems and demonstrates that combining expert knowledge with machine learning can provide reliable, transparent, and high-performing decision-support capabilities for project portfolio management.
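For readers unfamiliar with the AHP consistency check behind the reported CR = 0.04, the sketch below computes criteria weights and the consistency ratio from a pairwise comparison matrix. The 4x4 matrix is invented for illustration; the principal-eigenvector method, consistency index, and Saaty's random-index values are standard.

```python
# Sketch: AHP criteria weights and consistency ratio (CR).
import numpy as np

# Reciprocal pairwise comparison matrix (illustrative values only).
A = np.array([[1,   3,   5,   1  ],
              [1/3, 1,   3,   1/3],
              [1/5, 1/3, 1,   1/5],
              [1,   3,   5,   1  ]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # expert-derived criteria weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)                  # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
CR = CI / RI                                  # CR < 0.10 => acceptably consistent
print(weights, CR)
```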

23 pages, 1929 KB  
Article
Inverse Thermal Process Design for Interlayer Temperature Control in Wire-Directed Energy Deposition Using Physics-Informed Neural Networks
by Fuad Hasan, Abderrachid Hamrani, Tyler Dolmetsch, Somnath Somadder, Md Munim Rayhan, Arvind Agarwal and Dwayne McDaniel
J. Manuf. Mater. Process. 2026, 10(2), 52; https://doi.org/10.3390/jmmp10020052 - 1 Feb 2026
Abstract
Wire-directed energy deposition (W-DED) produces steep thermal gradients and rapid heating-cooling cycles due to the moving heat source, where modest variations in process parameters significantly alter heat input per unit length and therefore the full thermal history. This sensitivity makes process tuning by trial-and-error or repeated FE sweeps expensive, motivating inverse analysis. This work proposes an inverse thermal process design framework that couples single-track experiments, a calibrated finite element (FE) thermal model, and a parametric physics-informed neural network (PINN) surrogate. By using experimentally calibrated heat-loss physics to define the training constraints, the PINN learns a parameterized thermal response from physics alone (no temperature data in the PINN loss), enabling inverse design without repeated FE runs. Thermocouple measurements are used to calibrate the convection film coefficient and emissivity in the FE model, and those parameters are used to train a parametric PINN over continuous ranges of arc power (1.5–3.0 kW) and travel speed (0.005–0.015 m/s) without using temperature data in the loss function. The trained PINN model was validated against the calibrated FE model at three probe locations with different power and travel speed combinations. Across these benchmark conditions, the mean absolute errors range from 6.5 to 17.4 °C, with cooling-tail errors from 1.8 to 12.1 °C. The trained surrogate is then embedded in a sampling-based inverse optimization loop to identify power-speed combinations that achieve prescribed interlayer temperatures at a fixed dwell time. For target interlayer temperatures of 100, 130, and 160 °C with a 10 s dwell time, the optimized solutions remain within 3.3–5.6 °C of the target according to the PINN, while FE verification is within 4.0–6.6 °C. The results demonstrate that a physics-only parametric PINN surrogate enables inverse thermal process design without repeated FE runs while establishing a single-track baseline for extension to multi-track and multi-layer builds.
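The "physics only" training idea (no measured temperatures in the loss) can be made concrete with a minimal example. The sketch below trains a PINN for the 1D heat equation using only the PDE residual plus initial and boundary conditions; the diffusivity, network size, and sampling are placeholders, and the paper's W-DED geometry and calibrated heat-loss terms are not reproduced.

```python
# Sketch: physics-only PINN for u_t = a * u_xx on x, t in [0, 1].
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
alpha = 0.01  # assumed thermal diffusivity

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.rand(256, 1); t = torch.rand(256, 1)      # collocation points
    loss_pde = pde_residual(x, t).pow(2).mean()
    # Initial condition u(x, 0) = sin(pi * x); boundaries u(0, t) = u(1, t) = 0.
    u0 = net(torch.cat([x, torch.zeros_like(x)], 1))
    loss_ic = (u0 - torch.sin(torch.pi * x)).pow(2).mean()
    xb = torch.cat([torch.zeros(64, 1), torch.ones(64, 1)])
    tb = torch.rand(128, 1)
    loss_bc = net(torch.cat([xb, tb], 1)).pow(2).mean()
    loss = loss_pde + loss_ic + loss_bc                 # no temperature-data term
    opt.zero_grad(); loss.backward(); opt.step()
```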
22 pages, 7120 KB  
Article
Enhancing Cross-Species Prediction of Leaf Mass per Area from Hyperspectral Remote Sensing Using Fractional Order Derivatives and 1D-CNNs
by Shijie Shan, Qiaozhen Guo, Lu Xu, Weiguo Jiang, Shuo Shi and Yiyun Chen
Remote Sens. 2026, 18(3), 444; https://doi.org/10.3390/rs18030444 - 1 Feb 2026
Abstract
Leaf mass per area (LMA) plays an important role in vegetation productivity, carbon cycling, and remote sensing-based ecosystem monitoring. However, remotely predicting LMA from hyperspectral reflectance remains challenging due to the weak, strongly overlapping spectral response of LMA and spectral variability across species. To address these limitations, this study proposed an integrated framework that combines a fractional-order spectral derivative (FOD) with a one-dimensional convolutional neural network (1D-CNN) to enhance LMA prediction accuracy and cross-species generalization. Leaf hyperspectral reflectance was processed using FOD with orders ranging from 0 to 2, and the relationship between FOD-enhanced spectra and LMA was analyzed. Model performance was assessed using (i) overall prediction accuracy under an 8:2 random split between training and test sets, and (ii) cross-species generalization through leave-one-species-out validation. The results demonstrated that the 1D-CNN using a 1.5-order derivative achieved the best performance (R2 = 0.85; RMSE = 11.57 g/m2), outperforming common machine-learning models including partial least squares regression (PLSR), random forest (RF), and support vector regression (SVR). The proposed method also demonstrated strong generalization in cross-species prediction. These results indicate that integrating FOD with a 1D-CNN effectively enhances LMA-related spectral information and improves LMA prediction across species, providing a promising pathway for applying airborne and satellite hyperspectral images to vegetation biochemical parameter mapping, crop monitoring, and ecological assessment.
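A fractional-order derivative is typically computed with Grünwald–Letnikov weights, sketched below for a toy reflectance spectrum at the paper's best-performing order of 1.5. The spectrum and band spacing are invented; only the weight recursion is standard.

```python
# Sketch: Gruenwald-Letnikov fractional-order derivative (FOD) of a spectrum.
import numpy as np

def fractional_derivative(signal, alpha, h=1.0):
    """GL fractional derivative of order alpha with step h."""
    n = len(signal)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):             # w_k = w_{k-1} * (k - 1 - alpha) / k
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):                # sum_k w_k * f(x - k*h), scaled by h^-alpha
        out[i] = np.dot(w[:i + 1], signal[i::-1]) / h**alpha
    return out

wavelengths = np.arange(400, 2401)                        # nm, 1 nm sampling
spectrum = np.exp(-((wavelengths - 1700) / 300.0) ** 2)   # toy reflectance curve
enhanced = fractional_derivative(spectrum, alpha=1.5)     # paper's best order
```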

15 pages, 2879 KB  
Article
The Right PPC Plays an Important Role in the Interaction of Temporal Attention and Expectation: Evidence from a tACS-EEG Study
by Bingbing Fu, Kaishi Lin, Ying Chen, Junjun Zhang, Zhenlan Jin and Ling Li
Biomedicines 2026, 14(2), 336; https://doi.org/10.3390/biomedicines14020336 - 31 Jan 2026
Abstract
Background/Objectives: Temporal attention and temporal expectation are two key mechanisms that facilitate perception by prioritizing information at specific moments and by leveraging temporal predictability, respectively. While their behavioral interaction is established, the underlying neural mechanisms remain poorly understood. Building on functional magnetic resonance imaging (fMRI) evidence linking temporal attention to parietal cortex activity and on the role of alpha oscillations in temporal prediction, we investigated whether the right posterior parietal cortex (rPPC) is involved in integrating these two processes. Methods: Experiment 1 used a behavioral paradigm to dissociate temporal expectation from attention across 600 ms and 1400 ms intervals. Experiment 2 retained only the 600 ms interval, combining behavioral assessments with electroencephalography (EEG) recording following transcranial alternating current stimulation (tACS) applied to the rPPC to probe the neural mechanisms. Results: Experiment 1 showed an attention/expectation interaction exclusively at 600 ms: enhanced expectation improved response times under attended, but not unattended, conditions. Experiment 2 replicated these behavioral and event-related potential (ERP) findings. Temporal attention modulated N1 amplitude: in attended conditions, the N1 was significantly more negative under high versus low expectation, while no difference was observed in unattended contexts. Anodal tACS over the rPPC reduced this N1 amplitude difference between high and low expectation conditions to non-significance. Restricting analyses to attended conditions, paired-samples t-tests revealed that alpha-band power differed between high and low expectation under sham tACS, but this difference was absent under anodal tACS, which also attenuated the corresponding behavioral attention/expectation interaction effects. Conclusions: These findings provide suggestive evidence that the rPPC may be key to integrating temporal attention and expectation, an integration that occurs in early processing stages and is specific to brief intervals.

37 pages, 862 KB  
Review
Mathematical Modeling Techniques in Virtual Reality Technologies: An Integrated Review of Physical Simulation, Spatial Analysis, and Interface Implementation
by Junhyeok Lee, Yong-Hyuk Kim and Kang Hoon Lee
Symmetry 2026, 18(2), 255; https://doi.org/10.3390/sym18020255 - 30 Jan 2026
Abstract
Virtual reality (VR) has emerged as a complex technological domain that demands high levels of realism and interactivity. At the core of this immersive experience lies a broad spectrum of mathematical modeling techniques. This survey explores how mathematical foundations support and enhance key VR components, including physical simulations, 3D spatial analysis, rendering pipelines, and user interactions. We review differential equations and numerical integration methods (e.g., Euler, Verlet, Runge–Kutta (RK4)) used to simulate dynamic environments, as well as geometric transformations and coordinate systems that enable seamless motion and viewpoint control. The paper also examines the mathematical underpinnings of real-time rendering processes and interaction models involving collision detection and feedback prediction. In addition, recent developments such as physics-informed neural networks, differentiable rendering, and neural scene representations are presented as emerging trends bridging classical mathematics and data-driven approaches. By organizing these elements into a coherent mathematical framework, this work aims to provide researchers and developers with a comprehensive reference for applying mathematical techniques in VR systems. The paper concludes by outlining the open challenges in balancing accuracy and performance and proposes future directions for integrating advanced mathematics into next-generation VR experiences.
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)
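As a small worked example of the integrators the survey covers, the sketch below advances a harmonic oscillator through one period with explicit Euler and with RK4. The step size and initial state are arbitrary; the point is the per-step accuracy gap between the two schemes.

```python
# Sketch: explicit Euler vs. RK4 on the harmonic oscillator x'' = -x,
# written as the first-order system y = [x, v].
import numpy as np

def f(y):
    x, v = y
    return np.array([v, -x])          # dx/dt = v, dv/dt = -x

def euler_step(y, h):
    return y + h * f(y)

def rk4_step(y, h):
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y_e = y_rk = np.array([1.0, 0.0]); h = 0.1
for _ in range(int(2 * np.pi / h)):   # integrate one full period
    y_e, y_rk = euler_step(y_e, h), rk4_step(y_rk, h)
print(y_e, y_rk)                      # exact solution returns to [1, 0];
                                      # Euler drifts, RK4 stays very close
```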

21 pages, 3253 KB  
Article
Physics-Informed Neural Network-Based Intelligent Control for Photovoltaic Charge Allocation in Multi-Battery Energy Systems
by Akeem Babatunde Akinwola and Abdulaziz Alkuhayli
Batteries 2026, 12(2), 46; https://doi.org/10.3390/batteries12020046 - 30 Jan 2026
Abstract
The rapid integration of photovoltaic (PV) generation into modern power networks introduces significant operational challenges, including intermittent power production, uneven charge distribution, and reduced system reliability in multi-battery energy storage systems. Addressing these challenges requires intelligent, adaptive, and physically consistent control strategies capable of operating under uncertain environmental and load conditions. This study proposes a Physics-Informed Neural Network (PINN)-based charge allocation framework that explicitly embeds physical constraints, namely charge conservation and State-of-Charge (SoC) equalization, directly into the learning process, enabling real-time adaptive control under varying irradiance and load conditions. The proposed controller exploits real-time measurements of PV voltage, current, and irradiance to achieve optimal charge distribution while ensuring converter stability and balanced battery operation. The framework is implemented and validated in MATLAB/Simulink under Standard Test Conditions of 1000 W·m⁻² irradiance and 25 °C ambient temperature. Simulation results demonstrate stable PV voltage regulation within the 230–250 V range, an average PV power output of approximately 95 kW, and effective duty-cycle control within the range of 0.35–0.45. The system maintains balanced three-phase grid voltages and currents with stable sinusoidal waveforms, indicating high power quality during steady-state operation. Compared with conventional Proportional–Integral–Derivative (PID) and Model Predictive Control (MPC) methods, the PINN-based approach achieves faster SoC equalization, reduced transient fluctuations, and more than 6% improvement in overall system efficiency. These results confirm the strong potential of physics-informed intelligent control as a scalable and reliable solution for smart PV–battery energy systems, with direct relevance to renewable microgrids and electric vehicle charging infrastructures.
(This article belongs to the Special Issue Control, Modelling, and Management of Batteries)

25 pages, 7647 KB  
Article
Urban Morphology, Deep Learning, and Artificial Intelligence-Based Characterization of Urban Heritage with the Recognition of Urban Patterns
by Elif Sarihan and Éva Lovra
Land 2026, 15(2), 230; https://doi.org/10.3390/land15020230 - 29 Jan 2026
Abstract
The tangible patterns of urban heritage sites are composed of complex components whose interactions drive their formation and transformation. The past of cities also partially survives in the structure of the settlement, even if many buildings are demolished or significantly transformed. In this study, we introduce a model based on the integration of urban morphology, deep learning, and artificial intelligence methods for exploring the tangible patterns of urban heritage areas at different levels of scale. The proposed model can define and recognize the characteristics of the basic elements of urban forms at different resolution levels and reveal the patterns of the heritage. The model is demonstrated on urban heritage sites located in different parts of the historical city center of Istanbul. We first define the relationship patterns and complexity levels and characterize the urban form using geographic information systems (GIS), based on cartographic and contemporary maps. We then employ deep-learning-based convolutional neural networks (CNNs) for automatic segmentation, using OpenCV and NumPy in Python to extract streets and blocks from both historical and contemporary map sources. Based on the results, integrated with human intelligence and the CNN model, we finally generate several prompts to guide the AI's reasoning in the pattern recognition process. Our results reveal that this integration improves GPT-4o's inferences in the pattern recognition process, allowing it to produce results similar to those obtained from the form features, with different levels of specificity that are interdependent and complementary to human assessments.
(This article belongs to the Special Issue Urban Morphology: A Perspective from Space (3rd Edition))
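The OpenCV/NumPy stage the abstract mentions can be pictured with a minimal street-and-block extraction sketch like the one below. The file name, Otsu-thresholding choice, and kernel size are placeholders, not the authors' exact pipeline.

```python
# Sketch: segmenting a scanned map into street and block regions with OpenCV.
import cv2
import numpy as np

img = cv2.imread("historic_map.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
# Dark linework (streets / block edges) becomes foreground via inverse Otsu.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Morphological closing bridges small gaps in the street network.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
streets = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
# Connected components of the non-street area approximate urban blocks.
blocks_mask = cv2.bitwise_not(streets)
n_labels, labels = cv2.connectedComponents(blocks_mask)
print(f"{n_labels - 1} candidate blocks")
```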

24 pages, 5619 KB  
Article
Streamflow Prediction of Spatio-Temporal Graph Neural Network with Feature Enhancement Fusion
by Le Yan, Dacheng Shan, Xiaorui Zhu, Lingling Zheng, Hongtao Zhang, Ying Li, Jing Li, Tingting Hang and Jun Feng
Symmetry 2026, 18(2), 240; https://doi.org/10.3390/sym18020240 - 29 Jan 2026
Abstract
Despite the promise of graph neural networks (GNNs) in hydrological forecasting, existing approaches face critical limitations in capturing dynamic spatiotemporal correlations and integrating physical interpretability. To bridge this gap, we propose a spatio-temporal graph neural network (ST-GNN) that addresses these challenges through three key innovations: dynamic graph construction for adaptive spatial correlation learning, a physically informed feature enhancement layer for integrating soil moisture and evaporation, and a hybrid Graph-LSTM module for synergistic spatiotemporal dependency modeling. The temporal and spatial modules of the ST-GNN exhibit a structural symmetry, which enhances the model's representational capability. By integrating these components, the model effectively represents rainfall-runoff processes. Experimental results across four Chinese watersheds demonstrate the ST-GNN's superior performance, particularly in semi-arid regions where prediction accuracy shows significant improvement. Compared to the best-performing baseline model (ST-GCN), our ST-GNN achieved an average reduction in root mean square error (RMSE) of 6.5% and an average improvement in the coefficient of determination (R2) of 1.8% across 1–8 h forecast lead times. Notably, in the semi-arid Pingyao watershed, the improvements reached a 13.3% RMSE reduction and a 2.5% R2 gain. The model incorporates watershed physical characteristics through a feature fusion layer while employing an adaptive mechanism to capture spatiotemporal dependencies, enabling robust watershed-scale forecasting across diverse hydrological conditions.
(This article belongs to the Section Computer)
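The hybrid Graph-LSTM idea, spatial aggregation per time step followed by temporal modeling per station, can be sketched as below. The single shared linear graph layer, toy adjacency, and tensor sizes are illustrative; the paper's dynamic graph construction and feature-enhancement layer are not reproduced.

```python
# Sketch: graph convolution over stations at each time step, then an LSTM
# over each station's sequence of spatially mixed embeddings.
import torch
import torch.nn as nn

class GraphLSTM(nn.Module):
    def __init__(self, n_feats, hidden):
        super().__init__()
        self.gc = nn.Linear(n_feats, hidden)      # shared per-node transform
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # next-step streamflow

    def forward(self, x, adj):
        # x: [T, N, F] time steps x nodes x features; adj: [N, N] row-normalized
        spatial = torch.relu(adj @ self.gc(x))    # neighbor aggregation per step
        seq = spatial.permute(1, 0, 2)            # [N, T, hidden], one seq per node
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])              # [N, 1] prediction per node

T, N, F = 24, 5, 3                                # 24 h of data, 5 stations
adj = torch.ones(N, N) / N                        # toy fully connected graph
model = GraphLSTM(F, hidden=32)
pred = model(torch.randn(T, N, F), adj)           # -> torch.Size([5, 1])
```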

18 pages, 2183 KB  
Article
Uncovering miRNA–Disease Associations Through Graph Based Neural Network Representations
by Alessandro Orro
Biomedicines 2026, 14(2), 289; https://doi.org/10.3390/biomedicines14020289 - 28 Jan 2026
Abstract
Background: MicroRNAs (miRNAs) are an important class of non-coding RNAs that regulate gene expression by binding to target mRNAs and influencing cellular processes such as differentiation, proliferation, and apoptosis. Dysregulation of miRNA expression has been implicated in many human diseases, including cancer and cardiovascular and neurodegenerative disorders. Identifying disease-related miRNAs is therefore essential for understanding disease mechanisms and supporting biomarker discovery, but the time and cost of experimental validation are the main limitations. Methods: We present a graph-based learning framework that models the complex relationships between miRNAs, diseases, and related biological entities within a heterogeneous network. The model employs a message-passing neural architecture to learn structured embeddings from multiple node and edge types, integrating biological priors from curated resources. This network representation enables the inference of novel miRNA–disease associations, even in sparsely annotated regions of the network. The approach was trained and validated on a benchmark dataset using ten replicated experiments to ensure robustness. Results: The method achieved an average AUC–ROC of ~98%, outperforming previously reported computational approaches on the same dataset. Predictions were consistent across validation folds, and robustness analyses were conducted to evaluate stability and identify the most informative features. Conclusions: Integrating heterogeneous biological information through graph neural network representation learning offers a powerful and generalizable way to predict relevant associations, including miRNA–disease associations, and provides a robust computational framework to support biomedical discovery and translational research.
(This article belongs to the Special Issue Bioinformatics Analysis of RNA for Human Health and Disease)

29 pages, 2945 KB  
Article
Physics-Informed Neural Network for Denoising Images Using Nonlinear PDE
by Carlos Osorio Quero and Maria Liz Crespo
Electronics 2026, 15(3), 560; https://doi.org/10.3390/electronics15030560 - 28 Jan 2026
Abstract
Noise remains a persistent limitation in coherent imaging systems, degrading image quality and hindering accurate interpretation in critical applications such as remote sensing, medical imaging, and non-destructive testing. This paper presents a physics-informed deep learning framework for effective image denoising under complex noise conditions. The proposed approach integrates nonlinear partial differential equations (PDEs), including the heat equation, diffusion models, MPMC, and the Zhichang Guo (ZG) method, into advanced neural network architectures such as ResUNet, UNet, U2Net, and Res2UNet. By embedding physical constraints directly into the training process, the framework couples data-driven learning with physics-based priors to enhance noise suppression and preserve structural details. Experimental evaluations across multiple datasets demonstrate that the proposed method consistently outperforms conventional denoising techniques, achieving higher PSNR, SSIM, ENL, and CNR values. These results confirm the effectiveness of combining physics-informed neural networks with deep architectures and highlight their potential for advanced image restoration in real-world, high-noise imaging scenarios.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
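One classical member of the nonlinear-PDE family this line of work builds on is Perona–Malik anisotropic diffusion, sketched below as a plain NumPy iteration. The conductance parameter, step size, and iteration count are illustrative, not the paper's settings.

```python
# Sketch: Perona-Malik anisotropic diffusion, a nonlinear-PDE denoiser that
# smooths flat regions while the edge-stopping function preserves boundaries.
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance g(|grad u|)
    for _ in range(n_iter):
        # Differences to the four neighbors (np.roll wraps at the borders,
        # i.e., periodic boundaries, which is adequate for a sketch).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.clip(0.5 + 0.2 * np.random.randn(64, 64), 0, 1)  # toy noisy patch
denoised = perona_malik(noisy)
```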

25 pages, 876 KB  
Article
Multi-Scale Digital Twin Framework with Physics-Informed Neural Networks for Real-Time Optimization and Predictive Control of Amine-Based Carbon Capture: Development, Experimental Validation, and Techno-Economic Assessment
by Mansour Almuwallad
Processes 2026, 14(3), 462; https://doi.org/10.3390/pr14030462 - 28 Jan 2026
Abstract
Carbon capture and storage (CCS) is essential for achieving net-zero emissions, yet amine-based capture systems face significant challenges, including high energy penalties (20–30% of power plant output) and operational costs (USD 50–120/tonne CO2). This study develops and validates a novel multi-scale Digital Twin (DT) framework integrating Physics-Informed Neural Networks (PINNs) to address these challenges through real-time optimization. The framework combines molecular dynamics, process simulation, computational fluid dynamics, and deep learning to enable real-time predictive control. A key innovation is the sequential training algorithm with domain decomposition, specifically designed to handle the nonlinear transport equations governing CO2 absorption with enhanced convergence properties. The algorithm achieves prediction errors below 1% for key process variables (R2 > 0.98) when validated against CFD simulations across 500 test cases. Experimental validation against pilot-scale absorber data (12 m packing, 30 wt% MEA) confirms good agreement with measured profiles, including temperature (RMSE = 1.2 K), CO2 loading (RMSE = 0.015 mol/mol), and capture efficiency (RMSE = 0.6%). The trained surrogate enables computational speedups of up to four orders of magnitude, supporting real-time inference with response times below 100 ms, suitable for closed-loop control. Under the conditions studied, the framework demonstrates reboiler duty reductions of 18.5% and operational cost reductions of approximately 31%. Sensitivity analysis identifies the liquid-to-gas ratio and MEA concentration as the most influential parameters, with mechanistic explanations linking these to mass transfer enhancement and reaction kinetics. Techno-economic assessment indicates favorable investment metrics, though results depend on site-specific factors. The framework architecture is designed for extensibility to alternative solvent systems, with future work planned for industrial-scale validation and uncertainty quantification through Bayesian approaches.
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
40 pages, 2475 KB  
Review
Research Progress of Deep Learning in Sea Ice Prediction
by Junlin Ran, Weimin Zhang and Yi Yu
Remote Sens. 2026, 18(3), 419; https://doi.org/10.3390/rs18030419 - 28 Jan 2026
Abstract
Polar sea ice is undergoing rapid change, with recent record-low extents in both hemispheres, raising the demand for skillful predictions from days to seasons for navigation, ecosystem management, and climate risk assessment. Accurate sea ice prediction is essential for understanding coupled climate processes, supporting safe polar operations, and informing adaptation strategies. Physics-based numerical models remain the backbone of operational forecasting, but their skill is limited by uncertainties in coupled ocean–ice–atmosphere processes, parameterizations, and sparse observations, especially in the marginal ice zone and during melt seasons. Statistical and empirical models can provide useful baselines for low-dimensional indices or short lead times, yet they often struggle to represent high-dimensional, nonlinear interactions and regime shifts. This review synthesizes recent progress in deep learning (DL) for key sea ice prediction targets, including sea ice concentration/extent, thickness, and motion, and organizes methods into (i) sequential architectures (e.g., LSTM/GRU and temporal Transformers) for temporal dependencies, (ii) image-to-image and vision models (e.g., CNN/U-Net, vision Transformers, and diffusion- or GAN-based generators) for spatial structures and downscaling, and (iii) spatiotemporal fusion frameworks that jointly model space–time dynamics. We further summarize hybrid strategies that integrate DL with numerical models through post-processing, emulation, and data assimilation, as well as physics-informed learning that embeds conservation laws or dynamical constraints. Despite rapid advances, challenges remain in generalization under non-stationary climate conditions, dataset shift, physical consistency (e.g., mass/energy conservation), interpretability, and fair evaluation across regions and lead times. We conclude with practical recommendations for future research, including standardized benchmarks, uncertainty-aware probabilistic forecasting, physics-guided training and neural operators for long-range dynamics, and foundation models that leverage self-supervised pretraining on large-scale Earth observation archives.

21 pages, 1574 KB  
Article
Watershed Encoder–Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images
by Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi and Attipoe David Sena
Bioengineering 2026, 13(2), 154; https://doi.org/10.3390/bioengineering13020154 - 28 Jan 2026
Abstract
Recently, deep learning methods have seen major advancements and are preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment. Consequently, segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features of histology images, including poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity. These challenges also limit the number of images available for analysis. The U-Net model was introduced and gained significant traction for its ability to produce high-accuracy results with very few input images, and many modifications of the U-Net architecture exist. This study therefore proposes the watershed encoder–decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of the supervised breast histology images via augmentation is introduced to increase the dataset size. The augmented dataset is further enhanced and segmented into the region of interest. Data enhancement methods such as thresholding, opening, dilation, and distance transform are used to highlight foreground and background pixels while removing unwanted parts of the image. Further segmentation via connected component analysis combines image pixel components with similar intensity values and assigns them their respective labeled binary masks. The watershed filling method is then applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). The resulting image information is sent to the WEDN network for feature extraction and learning via training and testing. The residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), i.e., the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image–watershed-mask pairs, with 2400 images used for training and 600 for testing. The proposed method produced strong results: 98.53% validation accuracy, a 96.98% validation Dice coefficient, and a 97.84% validation intersection over union (IoU) score.
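The pre-segmentation chain the abstract lists (thresholding, opening, dilation, distance transform, connected components, watershed) follows the standard OpenCV recipe, sketched below with a placeholder file name and illustrative constants.

```python
# Sketch: threshold -> opening -> dilation -> distance transform ->
# connected components -> watershed, the classic OpenCV marker pipeline.
import cv2
import numpy as np

img = cv2.imread("histology_patch.png")                     # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opened, kernel, iterations=3)          # definite background

dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)        # nucleus interiors
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)                    # uncertain border band

_, markers = cv2.connectedComponents(sure_fg)               # label each nucleus seed
markers += 1                                                # background becomes 1
markers[unknown == 255] = 0                                 # 0 = to be decided
markers = cv2.watershed(img, markers)                       # -1 marks boundaries
```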
