Search Results (786)

Search Parameters:
Keywords = physics-informed neural networks

21 pages, 6300 KiB  
Article
Comparison of Machine Learning Algorithms for Simulating Brightness Temperature Using Data from the Tianjun Soil Moisture Observation Network
by Shaoning Lv, Zixi Liu and Jun Wen
Remote Sens. 2025, 17(16), 2835; https://doi.org/10.3390/rs17162835 - 15 Aug 2025
Abstract
The L-band radiative transfer forward modeling plays a crucial role in data assimilation for meteorological forecasting. By utilizing information from the underlying surface (typically land surface parameters and variables), such as soil moisture, soil temperature, snow cover, freeze–thaw status, and vegetation, the corresponding brightness temperatures can be simulated through the physical processes described by radiative transfer models. Data assimilation becomes meaningful when the errors introduced by the simulated brightness temperatures are smaller than the simulation accuracy of the land surface variables. However, radiative transfer models at the L-band cannot accurately simulate brightness temperature (TB) operationally. In this study, four machine learning methods, including random forest (RF), long short-term memory (LSTM), support vector machine (SVM), and deep neural networks (DNN), are employed to reconstruct the forward relationship from land surface parameters to brightness temperatures, serving as an alternative to traditional radiative transfer models. The performance of these methods is evaluated using ground-truthed soil moisture data, static soil texture data, and leaf area index (LAI). The results indicate that DNN and RF exhibit superior performance, with DNN achieving the lowest average unbiased root mean square error (ubRMSE) of 6.238 K for vertical polarization brightness temperature (TBv) and 9.033 K for horizontal polarization brightness temperature (TBh). Regarding correlation coefficients between the retrieved brightness temperatures and satellite measurements, RF leads for H-polarized TB with a value of 0.943, while both RF and SVM perform well for V-polarized TB with values of 0.930 and 0.932, respectively. In conclusion, our study shows that DNN is the optimal method for retrieving brightness temperatures, outperforming other machine learning approaches in both error metrics and correlation with satellite measurements. These findings highlight the potential of DNN to improve data assimilation in meteorological forecasting.
(This article belongs to the Special Issue Microwave Remote Sensing of Soil Moisture II)

27 pages, 8160 KiB  
Article
Real-Time Prediction of Pressure and Film Height Distribution in Plain Bearings Using Physics-Informed Neural Networks (PINNs)
by Ahmed Saleh, Georg Jacobs, Dhawal Katre, Benjamin Lehmann and Mattheüs Lucassen
Lubricants 2025, 13(8), 360; https://doi.org/10.3390/lubricants13080360 - 14 Aug 2025
Abstract
The increasing application of plain bearings in various industries, especially under challenging conditions like thin lubricating films and high temperatures, necessitates effective monitoring to prevent failures and ensure reliable performance. While sensor-based monitoring incurs significant costs and complex installation due to physical sensors and data acquisition systems, model-based tracking offers a more cost-effective alternative. Model-based monitoring relies on mathematical or physics-based models to estimate system behaviour, reducing the need for extensive sensor data. However, reliable results depend on real-time capable and precise simulation models. Conventional real-time modelling techniques, including analytical calculations, empirical formulas, and data-driven methods, exhibit significant limitations in real-world applications. Analytical methods often have a restricted range of applicability and do not match the accuracy of numerical methods. Meanwhile, data-driven approaches rely heavily on the quality and quantity of training data and are inherently constrained to their training domain. Recently, Physics-Informed Neural Networks (PINNs) have emerged as a promising solution for model-based monitoring to capture complex system behaviour. This approach combines physical modelling with data-driven learning, allowing for better generalisation beyond the training domain while reducing reliance on extensive data. Thus, this study presents an approach for load monitoring in radial plain bearings using PINNs. It extends the application of PINNs by relying solely on simple sensor inputs, such as radial load and rotational speed, to predict the hydrodynamic pressure and oil film thickness distribution under varying stationary conditions. The real-time model is trained, validated, and evaluated within and beyond the training domain using elastohydrodynamic simulation results. The developed real-time model enables load monitoring in plain bearings by identifying critical hydrodynamic pressure and oil film thickness values using readily available speed and load sensor data under varying stationary conditions.
(This article belongs to the Special Issue New Horizons in Machine Learning Applications for Tribology)
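As a general illustration of the PINN principle this abstract describes (not the authors' bearing model, whose Reynolds-equation physics is far richer), a PINN loss combines a data-fit term with a squared PDE-residual term evaluated at collocation points. The toy ODE u'' + π²u = 0, the one-parameter ansatz, and all names below are assumptions for brevity:

```python
import numpy as np

# Toy "network": u(x; w0) = w0 * sin(pi * x), a one-parameter ansatz.
# Assumed physics constraint for illustration: u''(x) + pi^2 * u(x) = 0,
# which the ansatz satisfies exactly for any w0.
def u(x, w0):
    return w0 * np.sin(np.pi * x)

def pinn_loss(w0, x_data, y_data, x_col, lam=1.0, h=1e-4):
    # Data term: fit the few labelled samples.
    data_mse = np.mean((u(x_data, w0) - y_data) ** 2)
    # Physics term: squared PDE residual at collocation points,
    # with u'' approximated by central finite differences.
    u_xx = (u(x_col + h, w0) - 2 * u(x_col, w0) + u(x_col - h, w0)) / h**2
    residual = u_xx + np.pi**2 * u(x_col, w0)
    phys_mse = np.mean(residual ** 2)
    return data_mse + lam * phys_mse

x_data = np.array([0.25, 0.5])
y_data = 2.0 * np.sin(np.pi * x_data)     # synthetic "sensor" labels
x_col = np.linspace(0.05, 0.95, 20)       # unlabelled collocation points
print(pinn_loss(2.0, x_data, y_data, x_col))  # near zero for the true w0
```

In a real PINN both terms are minimized jointly over the network weights; here only the loss evaluation is sketched.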

18 pages, 768 KiB  
Article
Uncertainty-Aware Design of High-Entropy Alloys via Ensemble Thermodynamic Modeling and Search Space Pruning
by Roman Dębski, Władysław Gąsior, Wojciech Gierlotka and Adam Dębski
Appl. Sci. 2025, 15(16), 8991; https://doi.org/10.3390/app15168991 - 14 Aug 2025
Abstract
The discovery and design of high-entropy alloys (HEAs) face significant challenges due to the vast combinatorial design space and uncertainties in thermodynamic data. This work presents a modular, uncertainty-aware computational framework with the primary objective of accelerating the discovery of solid-solution HEA candidates. The proposed pipeline integrates ensemble thermodynamic modeling, Monte Carlo-based estimation, and a structured three-phase pruning algorithm for efficient search space reduction. Key quantitative results are achieved in two main areas. First, for binary alloy thermodynamics, a Bayesian Neural Network (BNN) ensemble trained on domain-informed features predicts mixing enthalpies with high accuracy, yielding a mean absolute error (MAE) of 0.48 kJ/mol, substantially outperforming the classical Miedema model (MAE = 4.27 kJ/mol). These probabilistic predictions are propagated through Monte Carlo sampling to estimate multi-component thermodynamic descriptors, including ΔH_mix and the Ω parameter, while capturing predictive uncertainty. Second, in a case study on the Al-Cu-Fe-Ni-Ti system, the framework reduces a 2.4-million-candidate pool to just 91 high-confidence compositions. Final selection is guided by an uncertainty-aware viability metric, P(HEA), and supported by interpretable radar plot visualizations for multi-objective assessment. The results demonstrate the framework's ability to combine physical priors, probabilistic modeling, and design heuristics into a data-efficient and interpretable pipeline for materials discovery. This establishes a foundation for future HEA optimization, dataset refinement, and adaptive experimental design under uncertainty.
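The Monte Carlo propagation step described above can be sketched as follows. The binary enthalpy means/uncertainties are made-up placeholders, and the regular-solution mixing rule ΔH_mix = Σ 4·H_ij·c_i·c_j is the standard approximation, not necessarily the paper's exact descriptor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble predictions for binary mixing enthalpies (kJ/mol):
# (mean, std) per element pair -- placeholder values, not real data.
pairs = {("Al", "Ni"): (-22.0, 1.5),
         ("Al", "Cu"): (-1.0, 0.8),
         ("Cu", "Ni"): (4.0, 1.0)}
comp = {"Al": 1 / 3, "Cu": 1 / 3, "Ni": 1 / 3}   # equiatomic ternary

def sample_dHmix(n_samples=10000):
    # Regular-solution rule: dH = sum_{i<j} 4 * H_ij * c_i * c_j,
    # with each binary H_ij drawn from its predictive distribution,
    # so the output is a *distribution* over the multi-component descriptor.
    total = np.zeros(n_samples)
    for (a, b), (mu, sigma) in pairs.items():
        h = rng.normal(mu, sigma, n_samples)
        total += 4.0 * h * comp[a] * comp[b]
    return total

samples = sample_dHmix()
print(samples.mean(), samples.std())   # descriptor estimate + uncertainty
```

The resulting spread is what an uncertainty-aware viability metric such as P(HEA) can then threshold against.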

14 pages, 3320 KiB  
Article
Innovative Flow Pattern Identification in Oil–Water Two-Phase Flow via Kolmogorov–Arnold Networks: A Comparative Study with MLP
by Mingyu Ouyang, Haimin Guo, Liangliang Yu, Wenfeng Peng, Yongtuo Sun, Ao Li, Dudu Wang and Yuqing Guo
Processes 2025, 13(8), 2562; https://doi.org/10.3390/pr13082562 - 14 Aug 2025
Abstract
As information and sensor technologies advance swiftly, data-driven approaches have emerged as a dominant paradigm in scientific research. In the petroleum industry, precise forecasting of oil–water two-phase flow patterns is essential for enhancing production efficiency and ensuring safety. This study investigates the application of Kolmogorov–Arnold Networks (KAN) for predicting oil–water two-phase flow patterns and compares it with the conventional Multi-Layer Perceptron (MLP) neural network. To obtain real physical data, we conducted experiments simulating oil–water two-phase flow patterns under various well angles, flow rates, and water cuts at the Key Laboratory of Oil and Gas Resources Exploration Technology of the Ministry of Education, Yangtze University. These data were standardized and used to train both KAN and MLP models. The findings indicate that KAN outperforms the MLP network, achieving 50% faster convergence and 22.2% higher prediction accuracy. Moreover, the KAN model features a more streamlined structure and requires fewer neurons to attain comparable or superior performance to MLP. This research offers an effective and dependable method for predicting oil–water two-phase flow patterns in the dynamic monitoring of production wells and highlights the potential of KAN to boost the performance of energy systems, particularly in the context of intelligent transformation.
(This article belongs to the Section Energy Systems)
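The structural difference from an MLP can be sketched minimally: in a KAN, each input-output edge carries its own learnable one-dimensional function rather than a scalar weight. Here those edge functions are stored as values on a fixed grid and evaluated by piecewise-linear interpolation (real KANs typically use B-spline parameterizations; all names and sizes are illustrative):

```python
import numpy as np

class KANLayer:
    """KAN-style layer: a learnable 1-D function phi_ij on every edge."""
    def __init__(self, n_in, n_out, grid=np.linspace(-1, 1, 11), rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.grid = grid
        # phi_values[i, j] holds the edge function (input i -> output j)
        # sampled on the grid; these values are the trainable parameters.
        self.phi_values = rng.normal(0.0, 0.1, (n_in, n_out, grid.size))

    def forward(self, x):                      # x: shape (n_in,)
        out = np.zeros(self.phi_values.shape[1])
        for i, xi in enumerate(x):
            for j in range(out.size):
                # Piecewise-linear stand-in for a spline evaluation.
                out[j] += np.interp(xi, self.grid, self.phi_values[i, j])
        return out

layer = KANLayer(n_in=2, n_out=3)
print(layer.forward(np.array([0.3, -0.5])))    # 3 output activations
```

Setting every edge function to the identity reduces the layer to a plain summation, which makes the role of the learned nonlinearities easy to see.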

18 pages, 4233 KiB  
Article
Structure–Property Linkage in Alloys Using Graph Neural Network and Explainable Artificial Intelligence
by Benjamin Rhoads, Abigail Hogue, Lars Kotthoff and Samrat Choudhury
Materials 2025, 18(16), 3778; https://doi.org/10.3390/ma18163778 - 12 Aug 2025
Abstract
Deep learning tools have recently shown significant potential for accelerating the prediction of microstructure–property linkage in materials. While deep neural networks such as convolutional neural networks (CNNs) can extract physics information from 3D microstructure images, they often require a large network architecture and substantial training time. In this research, we trained a graph neural network (GNN) on phase-field-generated microstructures of Ni-Al alloys to predict the evolution of mechanical properties. We found that a single GNN can accurately predict the strengthening of Ni-Al alloys across microstructures of varying sizes and dimensions, which cannot otherwise be done with a CNN. Additionally, the GNN requires significantly less GPU utilization than a CNN and, because features are manually defined in the graph, offers a more interpretable explanation of its predictions via saliency analysis. We also use Bayesian inference, an explainable artificial intelligence tool, to determine the coefficients of the power-law equation that governs precipitate coarsening. Overall, our work demonstrates the ability of the GNN to accurately and efficiently extract relevant information from material microstructures without restrictions on microstructure size or dimension, while offering interpretable explanations.
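The size-agnostic behaviour of GNNs comes from message passing, which works on any graph regardless of node count. One round of mean-neighbour (GCN-style) propagation can be sketched on a toy three-node graph; the graph, features, and weights below are invented for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    # H' = relu(D^-1 @ A_hat @ H @ W), with self-loops via A_hat = A + I.
    A_hat = A + np.eye(A.shape[0])            # keep each node's own features
    deg = A_hat.sum(axis=1, keepdims=True)    # degrees for mean pooling
    return np.maximum(0.0, (A_hat / deg) @ H @ W)

# Tiny path graph 0-1-2 (e.g. three adjacent grains/precipitates).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])        # 2 hand-defined features per node
W = np.eye(2)                   # identity weights for readability
print(gcn_layer(A, H, W))       # each node now averages its neighbourhood
```

The same `gcn_layer` call accepts a 10-node or 10,000-node graph unchanged, which is exactly what a fixed-input-size CNN cannot do.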

30 pages, 7155 KiB  
Article
An Improved Causal Physics-Informed Neural Network Solution of the One-Dimensional Cahn–Hilliard Equation
by Jinyu Hu and Jun-Jie Huang
Appl. Sci. 2025, 15(16), 8863; https://doi.org/10.3390/app15168863 - 11 Aug 2025
Abstract
Physics-Informed Neural Networks (PINNs) provide a promising framework for solving partial differential equations (PDEs). By incorporating temporal causality, Causal PINN improves training stability in time-dependent problems. However, applying Causal PINN to higher-order nonlinear PDEs, such as the Cahn–Hilliard equation (CHE), presents notable challenges due to the inefficient utilization of temporal information. This inefficiency often results in numerical instabilities and physically inconsistent solutions. This study systematically analyzes the limitations of Causal PINN in solving the one-dimensional CHE. To resolve these issues, we propose APM-PINN (Adaptive Progressive Marching PINN), a novel framework that enhances temporal representation and improves model robustness. APM-PINN integrates a progressive temporal marching strategy, a causality-based adaptive sampling algorithm, and a residual-based adaptive loss weighting mechanism (effective with the chemical potential reformulation). Comparative experiments on two one-dimensional CHE test cases show that APM-PINN achieves relative errors consistently near 10⁻³ or even 10⁻⁴, and it better preserves mass conservation and energy dissipation. These promising results highlight APM-PINN's potential for accurate, stable modeling of complex high-order dynamic systems.
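The temporal-causality idea can be illustrated with the common causal-weighting scheme for PINNs, where the loss at each time slice is down-weighted until all earlier slices have converged, w_i = exp(-eps * Σ_{k<i} L_k). This generic sketch (the eps value and slice losses are made up) is not the APM-PINN algorithm itself:

```python
import numpy as np

def causal_weights(slice_losses, eps=10.0):
    # Weight for slice i decays with the cumulative loss of all
    # *earlier* slices, forcing the network to resolve early times first.
    cum = np.concatenate(([0.0], np.cumsum(slice_losses)[:-1]))
    return np.exp(-eps * cum)

losses = np.array([0.5, 0.5, 0.01, 0.01])  # early slices not yet converged
w = causal_weights(losses)
print(w)   # weights fall off along time until early residuals shrink
```

As the early-slice losses shrink during training, the weights relax back toward 1 and later slices start contributing to the total loss.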

31 pages, 13384 KiB  
Article
Physics-Informed and Explainable Graph Neural Networks for Generalizable Urban Building Energy Modeling
by Rudai Shan, Hao Ning, Qianhui Xu, Xuehua Su, Mengjin Guo and Xiaohan Jia
Appl. Sci. 2025, 15(16), 8854; https://doi.org/10.3390/app15168854 - 11 Aug 2025
Abstract
Urban building energy prediction is a critical challenge for sustainable city planning and large-scale retrofit prioritization. However, traditional data-driven models struggle to capture the spatial and morphological complexity of real urban environments. In this study, we systematically benchmark a range of graph-based neural networks (GNNs), including graph convolutional network (GCN), GraphSAGE, and several physics-informed graph attention network (GAT) variants, against conventional artificial neural network (ANN) baselines, using both shape coefficient and energy use intensity (EUI) stratification across three distinct residential districts. Extensive ablation and cross-district generalization experiments reveal that models explicitly incorporating interpretable physical edge features, such as inter-building distance and angular relation, achieve significantly better prediction accuracy and robustness than standard approaches. Among all models, GraphSAGE demonstrates the best overall performance and generalization capability, while the effectiveness of specific GAT edge features is district-dependent, reflecting variations in local morphology and spatial logic. Furthermore, explainability analysis shows that integrating domain-relevant spatial features enhances model interpretability and provides actionable insight for urban retrofit and policy intervention. The results highlight the value of physics-informed GNNs as a scalable, transferable, and transparent tool for urban energy modeling, supporting evidence-based decision making in the context of aging residential building upgrades and sustainable urban transformation.
(This article belongs to the Special Issue AI-Assisted Building Design and Environment Control)

18 pages, 18060 KiB  
Article
A Cross-Modal Multi-Layer Feature Fusion Meta-Learning Approach for Fault Diagnosis Under Class-Imbalanced Conditions
by Haoyu Luo, Mengyu Liu, Zihao Deng, Zhe Cheng, Yi Yang, Guoji Shen, Niaoqing Hu, Hongpeng Xiao and Zhitao Xing
Actuators 2025, 14(8), 398; https://doi.org/10.3390/act14080398 - 11 Aug 2025
Abstract
In practical applications, intelligent diagnostic methods for actuator-integrated gearboxes in industrial driving systems encounter challenges such as the scarcity of fault samples and variable operating conditions, which undermine diagnostic accuracy. This paper introduces a multi-layer feature fusion meta-learning (MLFFML) approach to address fault diagnosis in cross-condition scenarios with class imbalance. First, meta-training is performed to develop a mature fault diagnosis model on the source domain, obtaining cross-domain meta-knowledge; subsequently, meta-testing is conducted on the target domain, extracting meta-features from limited fault samples and abundant healthy samples to rapidly adjust model parameters. For data augmentation, this paper proposes a frequency-domain weighted mixing (FWM) method that preserves the physical plausibility of signals while enhancing sample diversity. For the feature extractor, shallow and deep features are integrated by replacing the first layer of the feature extraction module with a dual-stream wavelet convolution block (DWCB), which transforms actuator vibration or acoustic signals into the time–frequency space to flexibly capture fault characteristics and fuses information from both amplitude and phase. Following the convolutional network, a Transformer encoder layer with multi-head self-attention and feedforward networks captures dependencies among channel features, achieving a larger receptive field than other methods for actuation-system monitoring. Furthermore, this paper experimentally investigates cross-modal scenarios in which vibration signals exist in the source domain while only acoustic signals are available in the target domain, validating the approach on industrial actuator assemblies.
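The abstract does not give the FWM formula, so the following is only a guess at its spirit: blend the magnitude spectra of two same-class signals with a convex weight while keeping one signal's phase, then return to the time domain. Every detail (weighting, phase handling, signal lengths) is an assumption, not the paper's method:

```python
import numpy as np

def fwm(x1, x2, lam=0.7):
    # Assumed frequency-domain mixing: convex blend of magnitude spectra,
    # phase taken from x1, inverse FFT back to a time-domain sample.
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    mag = lam * np.abs(X1) + (1.0 - lam) * np.abs(X2)
    return np.fft.irfft(mag * np.exp(1j * np.angle(X1)), n=len(x1))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
x1 = np.sin(2 * np.pi * 5 * t)    # e.g. one fault sample (toy signal)
x2 = np.sin(2 * np.pi * 12 * t)   # another sample of the same class
x_aug = fwm(x1, x2)               # augmented sample sharing both spectra
```

Mixing in the spectral domain keeps the augmented sample's energy at physically meaningful frequencies, unlike naive time-domain interpolation of misaligned waveforms.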

12 pages, 4710 KiB  
Article
Generation of Higher-Order Hermite–Gaussian Modes Based on Physical Model and Deep Learning
by Tai Chen, Chengcai Jiang, Jia Tao, Long Ma and Longzhou Cao
Photonics 2025, 12(8), 801; https://doi.org/10.3390/photonics12080801 - 10 Aug 2025
Abstract
The higher-order Hermite–Gaussian (HG) modes exhibit complex spatial distributions and find a wide range of applications in fields such as quantum information processing, optical communications, and precision measurements. In recent years, deep learning has emerged as an effective approach for generating higher-order HG modes. However, traditional data-driven deep learning methods require a substantial amount of labeled data for training, entail a lengthy data acquisition process, and impose stringent requirements on system stability; in practical applications they are confronted with challenges such as the high cost of data labeling. This paper proposes a method that integrates a physical model with deep learning. By utilizing only a single intensity distribution of the target optical field and incorporating the physical model, the neural network can be trained, eliminating the dependency of traditional data-driven deep learning methods on large datasets. Experimental results demonstrate that, compared with the traditional data-driven approach, the proposed method yields a smaller root mean squared error between the generated higher-order HG modes and the target modes, higher mode quality, and a shorter neural network training time, indicating greater efficiency. By incorporating the physical model into deep learning, this approach overcomes the limitations of traditional deep learning methods, offering a novel solution for applying deep learning in light-field manipulation, quantum physics, and other related fields.
(This article belongs to the Section Data-Science Based Techniques in Photonics)

26 pages, 16020 KiB  
Article
Energy Management of Hybrid Electric Commercial Vehicles Based on Neural Network-Optimized Model Predictive Control
by Jinlong Hong, Fan Yang, Xi Luo, Xiaoxiang Na, Hongqing Chu and Mengjian Tian
Electronics 2025, 14(16), 3176; https://doi.org/10.3390/electronics14163176 - 9 Aug 2025
Abstract
Energy management for hybrid electric commercial vehicles, involving continuous power output and discrete gear shifting, constitutes a typical mixed-integer programming (MIP) problem, presenting significant challenges for real-time performance and computational efficiency. To address this, this paper proposes a physics-informed neural network-optimized model predictive control (PINN-MPC) strategy. On one hand, the strategy simultaneously optimizes continuous and discrete states within the MPC framework to achieve the integrated objectives of minimizing fuel consumption, tracking speed, and managing battery state-of-charge (SOC). On the other hand, to overcome the prohibitively long solving time of the MIP-MPC, a physics-informed neural network (PINN) optimizer is designed. This optimizer employs the soft-argmax function to handle discrete gear variables and embeds system dynamics constraints using an augmented Lagrangian approach. Validated via hardware-in-the-loop (HIL) testing under two distinct real-world driving cycles, the results demonstrate that, compared to the open-source solver BONMIN, PINN-MPC significantly reduces computation time, decreasing the average solving time from approximately 10 s to about 5 ms, without sacrificing combined vehicle dynamic and economic performance.
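The soft-argmax relaxation mentioned above can be sketched generically: a temperature-controlled softmax over gear logits yields a differentiable "expected gear" that approaches the hard argmax as the temperature shrinks. The logits, gear indexing, and temperatures below are illustrative, not values from the paper:

```python
import numpy as np

def soft_argmax(logits, tau=0.1):
    # Numerically stabilised softmax over gear preferences.
    z = np.exp((logits - logits.max()) / tau)
    p = z / z.sum()
    gears = np.arange(1, len(logits) + 1)   # gear indices 1..N
    # Differentiable expectation over gears: trainable by gradient descent,
    # unlike a hard argmax, whose gradient is zero almost everywhere.
    return p @ gears

logits = np.array([0.1, 2.0, 0.3, -1.0])    # gear 2 preferred
print(soft_argmax(logits, tau=0.05))        # near-hard selection, ~2.0
print(soft_argmax(logits, tau=5.0))         # blended, smoother gradients
```

During training a higher temperature keeps gradients informative; at deployment a low temperature (or a final rounding) recovers a valid discrete gear.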

15 pages, 8859 KiB  
Article
Online Continual Physics-Informed Learning for Quadrotor State Estimation Under Wind-Induced Disturbances
by Yanhui Liu, Shuopeng Wang, Junhua Shi and Lina Hao
Aerospace 2025, 12(8), 704; https://doi.org/10.3390/aerospace12080704 - 8 Aug 2025
Abstract
Accurate state estimation for quadrotors under wind-induced disturbances remains a critical challenge in dynamic outdoor environments. Existing model-based and data-driven approaches often struggle with real-time adaptation and catastrophic forgetting when faced with continuous wind disturbances. This paper proposes an online continual physics-informed learning framework that integrates physics-informed neural networks with continual backpropagation to address these limitations. The physics-informed neural network architecture embeds quadrotor dynamics into the training process, ensuring physical consistency, while continual backpropagation enables continual learning from real-time streaming data without compromising previously acquired knowledge. Experimental validation on a simulation platform demonstrates the accuracy and robustness of the framework in both ideal and wind-disturbed scenarios.
(This article belongs to the Special Issue UAV System Modelling Design and Simulation)

23 pages, 3810 KiB  
Article
KBNet: A Language and Vision Fusion Multi-Modal Framework for Rice Disease Segmentation
by Xiaoyangdi Yan, Honglin Zhou, Jiangzhang Zhu, Mingfang He, Tianrui Zhao, Xiaobo Tan and Jiangquan Zeng
Plants 2025, 14(16), 2465; https://doi.org/10.3390/plants14162465 - 8 Aug 2025
Abstract
High-quality disease segmentation plays a crucial role in the precise identification of rice diseases. Although existing deep learning methods can identify rice leaf diseases to a certain extent, they often face challenges in dealing with multi-scale and irregularly growing disease spots. To address these challenges, we propose KBNet, a novel multi-modal framework integrating language and visual features for rice disease segmentation that leverages the complementary strengths of CNN and Transformer architectures. First, we propose the Kalman Filter Enhanced Kolmogorov–Arnold Networks (KF-KAN) module, which combines the nonlinear feature modeling ability of KANs with the dynamic update mechanism of the Kalman filter to achieve accurate extraction and fusion of multi-scale lesion information. Second, we introduce the Boundary-Constrained Physical-Information Neural Network (BC-PINN) module, which embeds physical priors, such as the growth law of the lesion, into the loss function to strengthen the modeling of irregular lesions; a boundary penalty mechanism further improves the accuracy of edge segmentation and the overall segmentation quality. The experimental results show that the KBNet framework performs well on complex and diverse rice disease segmentation tasks and provides key technical support for disease identification, prevention, and control in intelligent agriculture. The method generalizes readily and has broad application potential in agricultural intelligent monitoring and management.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

21 pages, 3338 KiB  
Article
Novel Adaptive Intelligent Control System Design
by Worrawat Duanyai, Weon Keun Song, Min-Ho Ka, Dong-Wook Lee and Supun Dissanayaka
Electronics 2025, 14(15), 3157; https://doi.org/10.3390/electronics14153157 - 7 Aug 2025
Abstract
A novel adaptive intelligent control system (AICS) with learning-while-controlling capability is developed for a highly nonlinear single-input single-output plant by redesigning the conventional model reference adaptive control (MRAC) framework, originally based on first-order Lyapunov stability, and employing customized neural networks. The AICS is designed with a simple structure, consisting of two main subsystems: a meta-learning-triggered mechanism-based physics-informed neural network (MLTM-PINN) for plant identification and a self-tuning neural network controller (STNNC). This structure, featuring the triggered mechanism, facilitates a balance between high controllability and control efficiency. The MLTM-PINN incorporates the following: (I) a single self-supervised physics-informed neural network (PINN) without the need for labelled data, enabling online learning in control; (II) a meta-learning-triggered mechanism to ensure consistent control performance; (III) transfer learning combined with meta-learning for finely tailored initialization and quick adaptation to input changes. To resolve the conflict between streamlining the AICS's structure and enhancing its controllability, the STNNC functionally integrates the nonlinear controller and adaptation laws from the MRAC system. Three STNNC design scenarios are tested with transfer learning and/or hyperparameter optimization (HPO) using a Gaussian process tailored for Bayesian optimization (GP-BO): (scenario 1) applying transfer learning in the absence of the HPO; (scenario 2) optimizing a learning rate in combination with transfer learning; and (scenario 3) optimizing both a learning rate and the number of neurons in hidden layers without applying transfer learning. Unlike scenario 1, no quick adaptation effect in the MLTM-PINN is observed in the other scenarios, as these struggle with the issue of dynamic input evolution due to the HPO-based STNNC design. Scenario 2 demonstrates the best synergy in controllability (best control response) and efficiency (minimal activation frequency of meta-learning and fewer trials for the HPO) in control.
(This article belongs to the Special Issue Nonlinear Intelligent Control: Theory, Models, and Applications)
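Scenario 2's use of a Gaussian process for Bayesian optimization of a learning rate can be sketched in a few lines. The loss surface, RBF kernel length-scale, and lower-confidence-bound acquisition below are illustrative assumptions for the sketch, not the authors' actual STNNC objective or GP-BO configuration:

```python
import numpy as np

# Illustrative stand-in for the control loss as a function of the learning
# rate (the paper's real STNNC objective is not reproduced here).
def control_loss(lr):
    g = np.log10(lr)
    return (g + 2.5) ** 2 + 0.1 * np.sin(5.0 * g)

def rbf(a, b, ls=0.5):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, noise=1e-4):
    # Standard GP regression posterior with a unit-variance RBF prior
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_q)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # k(x, x) = 1
    return mu, np.sqrt(np.clip(var, 1e-12, None))

rng = np.random.default_rng(0)
grid = np.linspace(-5, -1, 200)         # search log10(lr) over [1e-5, 1e-1]
log_lr = rng.uniform(-5, -1, 3)         # initial random probes
losses = control_loss(10.0 ** log_lr)

for _ in range(15):                     # GP-BO loop with an LCB acquisition
    mu, sd = gp_posterior(log_lr, losses, grid)
    x_next = grid[np.argmin(mu - 2.0 * sd)]
    log_lr = np.append(log_lr, x_next)
    losses = np.append(losses, control_loss(10.0 ** x_next))

best_lr = 10.0 ** log_lr[np.argmin(losses)]
```

Swapping the acquisition for expected improvement, or adding the hidden-layer neuron count as a second kernel dimension (scenario 3), changes only the surrogate's inputs, not the loop structure.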
25 pages, 6742 KiB  
Article
Reservoir Computing with a Single Oscillating Gas Bubble: Emphasizing the Chaotic Regime
by Hend Abdel-Ghani, A. H. Abbas and Ivan S. Maksymov
AppliedMath 2025, 5(3), 101; https://doi.org/10.3390/appliedmath5030101 - 7 Aug 2025
Abstract
The rising computational and energy demands of artificial intelligence systems urge the exploration of alternative software and hardware solutions that exploit physical effects for computation. According to machine learning theory, a neural network-based computational system must exhibit nonlinearity to effectively model complex patterns and relationships. This requirement has driven extensive research into various nonlinear physical systems to enhance the performance of neural networks. In this paper, we propose and theoretically validate a reservoir-computing system based on a single bubble trapped within a bulk of liquid. By applying an external acoustic pressure wave to both encode input information and excite the complex nonlinear dynamics, we showcase the ability of this single-bubble reservoir-computing system to forecast the benchmark Hénon time series and undertake classification tasks with high accuracy. Specifically, we demonstrate that a chaotic regime of bubble oscillation, in which tiny differences in initial conditions lead to widely different outcomes even though the dynamics obey deterministic rules, yet the system remains usable for accurate computation, proves to be the most effective for such tasks. Full article
(This article belongs to the Topic A Real-World Application of Chaos Theory)
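The forecasting task can be reproduced in outline with a conventional echo-state reservoir standing in for the bubble: the Hénon map supplies the benchmark series, and a ridge-regression readout is trained for one-step-ahead prediction. The reservoir size, spectral radius, input scaling, and regularization below are generic echo-state choices, not the bubble dynamics model from the paper:

```python
import numpy as np

# Hénon map, the benchmark series used in the paper:
#   x_{n+1} = 1 - a x_n^2 + y_n,  y_{n+1} = b x_n  (a = 1.4, b = 0.3)
def henon(n, a=1.4, b=0.3):
    x, y, out = 0.1, 0.1, np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = x
    return out

rng = np.random.default_rng(42)
N = 100                                   # generic reservoir size (assumption)
series = henon(2200)[200:]                # discard the initial transient
W_in = rng.uniform(-0.5, 0.5, N)
bias = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run(u):
    s, states = np.zeros(N), np.empty((len(u), N))
    for t, ut in enumerate(u):
        s = np.tanh(W @ s + W_in * ut + bias)     # nonlinear reservoir update
        states[t] = s
    return states

X, y = run(series[:-1]), series[1:]       # one-step-ahead targets
n_tr = 1500
A = X[:n_tr]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y[:n_tr])  # ridge readout
mse = np.mean((X[n_tr:] @ W_out - y[n_tr:]) ** 2)
```

In this sketch the recurrent tanh network supplies the nonlinearity; the paper's point is that the chaotic acoustic oscillations of a single bubble can play that role physically, with only the linear readout left to train.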
29 pages, 10437 KiB  
Review
Neuromorphic Photonic On-Chip Computing
by Sujal Gupta and Jolly Xavier
Chips 2025, 4(3), 34; https://doi.org/10.3390/chips4030034 - 7 Aug 2025
Abstract
Drawing inspiration from the energy-efficient information-processing mechanisms of biological brains, photonic integrated circuits (PICs) have enabled the development of ultrafast artificial neural networks. These are in turn envisaged to help meet the growing demand for machine-learning-based artificial intelligence in domains ranging from nonlinear optimization and telecommunication to medical diagnosis. Meanwhile, silicon photonics has emerged as a mainstream technology for integrated chip-based applications, but scaling it further for broader applications remains challenging because electronic circuitry must be co-integrated for control and calibration. Leveraging physics in algorithms and nanoscale materials holds promise for low-power miniaturized chips capable of real-time inference and learning. Against this backdrop, we present the state of the art in neuromorphic photonic computing, focusing primarily on architectures, weighting mechanisms, photonic neurons, and training, while giving an overall view of recent advancements, challenges, and prospects. We also highlight the need for revolutionary hardware innovations that scale up neuromorphic systems while improving energy efficiency and performance. Full article
(This article belongs to the Special Issue Silicon Photonic Integrated Circuits: Advancements and Challenges)
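As a concrete example of a photonic weighting mechanism, a single Mach-Zehnder interferometer (MZI) with an internal phase shift theta and an external phase shift phi realizes a programmable 2x2 unitary, and meshes of such MZIs compose larger weight matrices. The beamsplitter convention below is one common textbook choice, not tied to any specific chip discussed in the review:

```python
import numpy as np

def mzi(theta, phi):
    # 50:50 beamsplitter matrix (one common convention)
    bs = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)
    # internal phase shifter sits between the two beamsplitters;
    # the external phase shifter acts on one output arm
    inner = np.diag([np.exp(1.0j * theta), 1.0])
    outer = np.diag([np.exp(1.0j * phi), 1.0])
    return outer @ bs @ inner @ bs

U = mzi(0.7, 1.3)
# a lossless MZI conserves optical power (U is unitary), and the
# bar-port power split is sin^2(theta / 2) in this convention
power_split = abs(U[0, 0]) ** 2
```

Arranging such 2x2 blocks in a Reck or Clements mesh yields an arbitrary NxN unitary, and two meshes with a diagonal attenuator stage between them implement a general weight matrix through its singular value decomposition.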