Physics-Informed LSTM with Adaptive Parameter Updating for Non-Stationary Time Series: A Case Study on Disconnector Health Monitoring
Highlights
- Proposed a Hybrid Physics-Informed LSTM framework with thermal constraints.
- Developed an Adaptive Parameter Updating algorithm for inverse problems.
- Achieved 35% error reduction in long-term recursive forecasting.
- Provides a robust method for solving time-varying ODEs in deep learning.
- Mitigates parameter drift in non-stationary dynamic systems.
- Enhances physical consistency through online parameter tracking.
Abstract
1. Introduction
2. Mathematical Problem Formulation and Dataset Description
2.1. Mathematical Definition of the Prediction Task
2.2. Dataset Description and Preprocessing Strategy
2.3. Overall Methodological Framework
- Data Preprocessing: Cleaning, aligning, and normalizing multi-source heterogeneous monitoring data.
- Model Construction: Building the Hybrid-PI-LSTM network, which includes a physical constraint layer and an adaptive parameter module.
- Prediction and Evaluation: Generating multi-step future temperature trajectories based on model predictions and conducting a comprehensive performance evaluation combined with physical residual analysis.
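The preprocessing step above (cleaning, aligning, normalizing, and windowing multi-source monitoring data) can be sketched as follows. The window length of 6 matches the hyperparameter table; the feature layout and the choice of the first column as the prediction target are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def make_windows(series: np.ndarray, window: int = 6, horizon: int = 1):
    """Build sliding-window input/target pairs from a multivariate series.

    series: array of shape (T, F) -- time steps x features.
    Returns X of shape (N, window, F) and y of shape (N, horizon).
    The target is taken from the first feature column (illustrative choice).
    """
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t : t + window])
        y.append(series[t + window : t + window + horizon, 0])
    return np.asarray(X), np.asarray(y)

def normalize(train: np.ndarray, full: np.ndarray) -> np.ndarray:
    """Min-max normalization fitted on the training split only,
    applied to the full series to avoid test-set leakage."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    return (full - lo) / (hi - lo + 1e-12)
```

A series of length T yields T − window − horizon + 1 samples, each pairing a window of past multi-factor measurements with the next contact-temperature value.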
3. Methodology
3.1. LSTM
3.2. Governing Equations: Unsteady Thermal Balance ODE
3.3. Construction of the Hybrid-PI-LSTM Model
- An LSTM-based temporal feature extraction and rolling prediction structure [29];
- A composite loss function embedded with thermal balance constraints;
- An APU mechanism for time-varying outdoor environments.
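The second component, a composite loss with an embedded thermal-balance constraint, can be sketched in a minimal form. This assumes a generic lumped first-order balance C·dT/dt = I²R − h·(T − T_amb); the paper's actual governing ODE (Section 3.2), parameter set, and weighting scheme may differ, and all coefficient values below are placeholders:

```python
import numpy as np

def physics_informed_loss(T_pred, T_true, I, T_amb,
                          C=1.0, R=1.0, h=1.0, dt=120.0, lam=1e-3):
    """Composite loss = data MSE + lam * thermal-balance residual MSE.

    dt is the 2 min sampling interval in seconds; lam plays the role of the
    physics-consistency loss weight. C, R, h stand in for the equivalent
    heat capacity, contact resistance, and convection coefficient.
    """
    data_loss = np.mean((T_pred - T_true) ** 2)
    # Forward-difference approximation of dT/dt along the predicted trajectory.
    dTdt = np.diff(T_pred) / dt
    residual = C * dTdt - (I[:-1] ** 2 * R - h * (T_pred[:-1] - T_amb[:-1]))
    phys_loss = np.mean(residual ** 2)
    return data_loss + lam * phys_loss
```

At a steady state where Joule heating balances convective loss, the residual term vanishes, so the physics term only penalizes trajectories that violate the balance equation.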
3.3.1. Network Architecture Design
3.3.2. Formulation of Physics-Informed Loss Function
3.3.3. Adaptive Parameter Updating Algorithm
3.3.4. Algorithmic Implementation Procedure
| Algorithm 1. The pseudo-code of the proposed Hybrid-PI-LSTM modeling approach | |
| 1: | Input: Multi-factor time-series measurements including current, ambient temperature, humidity, wind speed, and solar radiation, together with the contact temperature. |
| 2: | Output: The trained Hybrid-PI-LSTM model parameters, and predictions with evaluation results. |
| 3: | Hyperparameters: • Network architecture: 1 hidden layer, 32 neurons, ReLU activation, input window length. • Optimization: epochs, batch size 32, learning rate. • Physics parameters: initial values for the learnable thermal coefficients. • Loss weight: physics-consistency loss weight. • Adaptive thresholds: re-estimation error threshold, correction step, number of update iterations. |
| 4: | Normalize input features. Construct sliding-window sequences. Partition the dataset into training, validation, and testing subsets. Initialize the learnable physical parameters and the dynamic correction network. |
| 5: | for each training epoch do |
| 6: | for each batch do |
| 7: | Generate state-dependent physical parameters and compute the final temperature prediction. |
| 8: | Compute the data loss, the physics loss, and the total loss. |
| 9: | Backpropagate gradients and update model parameters with the Adam optimizer. |
| 10: | end for |
| 11: | end for |
| 12: | Online Testing Phase: |
| 13: | for current time step t in the test set do |
| 14: | Observation: Receive the newly arrived true temperature. |
| 15: | Trigger Evaluation: Calculate the error of the past prediction for step t. |
| 16: | if the prediction error exceeds the re-estimation threshold or t mod 60 = 0 then |
| 17: | APU Execution: Re-estimate physical parameters using historical data within a recent window. |
| 18: | end if |
| 19: | Forecasting: Freeze parameters and execute a recursive forward rollout to predict the unknown future trajectory. |
| 20: | end for |
| 21: | Archive experimental configurations, export numerical results, and generate visualization plots for performance assessment. |
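The online testing phase of Algorithm 1 (trigger evaluation followed by residual-based re-estimation) can be illustrated with a single-parameter toy version: gradient descent on the thermal-balance residual over a recent history window tracks a convection coefficient h. This is a sketch of the APU idea under the same assumed lumped ODE as above, not the paper's multi-parameter implementation; all names and step sizes are illustrative:

```python
import numpy as np

def apu_step(h, T_hist, I_hist, Tamb_hist,
             C=1.0, R=1.0, dt=120.0, lr=0.05, iters=10):
    """Re-estimate convection coefficient h by gradient descent on the
    thermal-balance residual r = C*dT/dt - (I^2*R - h*(T - T_amb))
    over a recent history window of observed temperatures."""
    dTdt = np.diff(T_hist) / dt
    for _ in range(iters):
        r = C * dTdt - (I_hist[:-1] ** 2 * R
                        - h * (T_hist[:-1] - Tamb_hist[:-1]))
        # dr/dh = (T - T_amb), so the MSE gradient is 2*mean(r * (T - T_amb)).
        grad = 2.0 * np.mean(r * (T_hist[:-1] - Tamb_hist[:-1]))
        h -= lr * grad
    return h
```

Because the residual is linear in h, this toy problem is convex and the update contracts toward the value of h that best explains the recent window; the paper's note on non-convexity applies once several coupled parameters are re-estimated jointly.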
4. Numerical Experiments and Performance Evaluation
4.1. Experimental Setup and Evaluation Metrics
4.1.1. Definition of Error Metrics
4.1.2. Baseline Models and Hyperparameter Settings
- Purely Physics-Based Model: This paper implemented a Finite-Difference Model (FDM) based on the governing energy balance equations. Unlike the proposed APU mechanism, this purely mechanistic solver keeps key equivalent thermal parameters (e.g., heat capacity, convection coefficients) constant during the long-term inference stage, serving as a baseline to highlight the limitations of uncalibrated physics models in non-stationary environments.
- Long-Horizon Sequence Models: To align with the latest advancements in time-series forecasting, this paper included Transformer-family and linear/MLP-centric models. Specifically, benchmarks were performed against Informer [31] (utilizing ProbSparse attention for long sequences), Autoformer [32] (featuring deep decomposition and auto-correlation mechanisms), PatchTST [33] (leveraging patching and channel-independence for long-horizon forecasting), DLinear [34] (an efficient decomposition-linear model), and TSMixer [35] (an advanced lightweight MLP-based architecture).
- Standard Deep Learning and Regression Models: This paper retained foundational architectures, including Plain LSTM (without physical constraints), Ridge Regression, Convolutional Neural Network (CNN) [36], and Gated Recurrent Unit (GRU), to serve as standard data-driven references and ablation baselines.
4.2. Comparative Analysis of Forecasting Accuracy
4.2.1. Statistical Analysis of Multi-Step Prediction Errors
4.2.2. Temporal Stability and Error Distribution Analysis
4.2.3. Performance Evaluation Under Realistic Forecast Inputs
4.3. Ablation Study on Constraint Mechanisms
- w/o Physics: Removes all physical constraints, degenerating into a purely data-driven LSTM model.
- w/ Static-Phys: Introduces thermal balance residual constraints but maintains constant thermal parameters during inference.
- Hybrid-PI-LSTM: The complete model proposed in this paper, including physical constraints and the online parameter-adaptive re-estimation mechanism.
4.4. Computational Complexity and Edge Deployment Feasibility
4.5. Parameter Identification and Sensitivity Analysis
5. Conclusions and Limitations
5.1. Conclusions
5.2. Limitations and Future Work
- Inference Cost, Model Fidelity, and Deployment Constraints: The APU inherently introduces a gradient-based optimization loop during inference. While the runtime analysis confirms that the 8.1 ms latency is minimal for the 2 min sampling interval under the current lumped-parameter ODE constraint, this computational overhead could pose a bottleneck in two specific scenarios. First, the additional computation may become prohibitive if the framework is scaled to ultra-high-frequency transient monitoring tasks (e.g., kHz-level sampling). Second, the gradient computation within the APU loop would exponentially increase the inference latency if the physical constraints are expanded to encompass high-fidelity spatial PDEs for 3D thermal field analysis. Balancing high-fidelity multi-physics constraints with edge-device computational limits remains a critical direction for future lightweight deployments.
- Optimization Stability under Extreme Physical Degradation: The APU is effective at tracking continuous, slowly varying concept drifts (e.g., progressive surface oxidation or seasonal microclimate shifts). However, in scenarios involving sudden, discontinuous physical damage (e.g., abrupt contact fracture or catastrophic sensor failure), the gradient-based residual re-estimation might encounter non-convex optimization challenges. In such extreme edge cases, the algorithm could experience temporary parameter instability before convergence. Future work will explore integrating bounded-optimization solvers or physical-rule-based fallback mechanisms to enhance algorithmic robustness under extreme fault transients.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Lan, Z.J.; Wang, Y.P.; Wu, J.L.; Fan, Y. Studies on overheating fault of disconnecting switch. IOP Conf. Ser. Mater. Sci. Eng. 2020, 793, 012024. [Google Scholar] [CrossRef]
- Jia, Y.; Li, Y.; Tao, J.; Yang, J.; Zhao, K. Numerical simulation and experimental analysis on abnormal overheating of GW6B-252 high voltage disconnector. High Volt. Appar. 2020, 56, 240–246. [Google Scholar] [CrossRef]
- Mardegan, C.S.; Shipp, D.D. Anatomy of a complex electrical failure and its forensics analysis. IEEE Trans. Ind. Appl. 2014, 50, 2910–2918. [Google Scholar] [CrossRef]
- Yang, K.; Li, W.; Song, G.; Cheng, P. Operation analysis of HV switchgears in 2013. Smart Grid 2014, 2, 32–41. [Google Scholar] [CrossRef]
- Mahmoud, M.A.; Md Nasir, N.R.; Gurunathan, M.; Raj, P.; Mostafa, S.A. The Current State of the Art in Research on Predictive Maintenance in Smart Grid Distribution Network: Fault’s Types, Causes, and Prediction Methods—A Systematic Review. Energies 2021, 14, 5078. [Google Scholar] [CrossRef]
- Ahmad, R.; Kamaruddin, S. An overview of time-based and condition-based maintenance in industrial application. Comput. Ind. Eng. 2012, 63, 135–149. [Google Scholar] [CrossRef]
- Yang, F.; Cheng, P.; Luo, H.; Yang, Y.; Liu, H.; Kang, K. 3-D thermal analysis and contact resistance evaluation of power cable joint. Appl. Therm. Eng. 2016, 93, 1183–1192. [Google Scholar] [CrossRef]
- Wu, X.W.; Shu, N.Q.; Li, L.; Li, H.T.; Peng, H. Finite Element Analysis of Thermal Problems in Gas-Insulated Power Apparatus With Multiple Species Transport Technique. IEEE Trans. Magn. 2014, 50, 321–324. [Google Scholar] [CrossRef]
- Mudhigollam, U.K.; Tiwari, N.; Rao, M.M. Transient thermal analysis of gas insulated switchgear modules using thermal network approach. Int. J. Emerg. Electr. Power Syst. 2024, 25, 163–174. [Google Scholar] [CrossRef]
- Crinon, E.; Evans, J.T. The effect of surface roughness, oxide film thickness and interfacial sliding on the electrical contact resistance of aluminium. Mater. Sci. Eng. A 1998, 242, 121–128. [Google Scholar] [CrossRef]
- Doolgindachbaporn, A.; Callender, G.; Lewin, P.L.; Simonson, E.; Wilson, G. A Top-Oil Thermal Model for Power Transformers That Considers Weather Factors. IEEE Trans. Power Deliv. 2022, 37, 2163–2171. [Google Scholar] [CrossRef]
- Xu, Z.; Zhang, Y.; Xue, F.; Xia, Y.; Jiang, J.; Gao, J. Short-Term Temperature Forecasting of Cable Joint Based on Temporal Convolutional Neural Network. IEEE Access 2024, 12, 132543–132551. [Google Scholar] [CrossRef]
- Xin, L.; Wu, Z.; Wang, Q.; Wu, B.; Zhao, L.; Cheng, J.; Zeng, Q.; Peng, Z. A Deep-Learning Surrogate Model for the Fast Calculation of Temperature Distribution of ±500 kV RIP Bushings. IEEE Trans. Power Deliv. 2024, 39, 3050–3060. [Google Scholar] [CrossRef]
- Zhan, N.; Wang, X.; Wei, B.; Tao, Y.; Huang, Z.; Xiao, J.W. Temperature Prediction of Disconnecting Switch Based on Memory Regression Metric Learning. In Proceedings of the 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Jinzhou, China, 6–8 June 2019; pp. 206–210. [Google Scholar] [CrossRef]
- Guo, Y.; Chang, Y.; Lu, B. A review of temperature prediction methods for oil-immersed transformers. Measurement 2025, 239, 115383. [Google Scholar] [CrossRef]
- Yang, L.; Chen, L.; Zhang, F.; Ma, S.; Zhang, Y.; Yang, S. A Transformer Oil Temperature Prediction Method Based on Data-Driven and Multi-Model Fusion. Processes 2025, 13, 302. [Google Scholar] [CrossRef]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
- Cho, G.; Zhu, D.; Campbell, J.J.; Wang, M. An LSTM-PINN Hybrid Method to Estimate Lithium-Ion Battery Pack Temperature. IEEE Access 2022, 10, 100594–100604. [Google Scholar] [CrossRef]
- Li, A.; Guan, F.; Hu, H.; Liu, Z.; Xiao, B.; Huang, W.; Yang, Y. Physics-informed DEEPLSTM for digital twin modeling of steam turbine full-operating condition. Appl. Therm. Eng. 2025, 278, 127439. [Google Scholar] [CrossRef]
- Gao, S.; Wang, J.; Yang, D.; Zhang, J.; Wu, L.; Yang, P.; Wang, P. Physics-informed neural networks for predicting the surface temperature of carbon fiber reinforced polymers under laser irradiation. Sci. Rep. 2025, 15, 40598. [Google Scholar] [CrossRef]
- de Jong, S.D.M.; Ghezeljehmeidan, A.G.; van Driel, W.D. Solder joint reliability predictions using physics-informed machine learning. Microelectron. Reliab. 2025, 172, 115797. [Google Scholar] [CrossRef]
- Quan, W.; Ma, X.; Shang, Z.; Zhao, K.; Su, M.; Dong, Z.; Zhao, Z. Hybrid physics-data-driven model for temperature field prediction of asphalt pavement based on physics-informed neural network. Constr. Build. Mater. 2025, 489, 142179. [Google Scholar] [CrossRef]
- Wu, M.; Liang, Z.; Bao, S.; Wang, H.; Liu, Y.; Zhang, Z.; Xuan, Q. Bias Correction of SMAP L2 Sea Surface Salinity Based on Physics-Informed Neural Network. Remote Sens. 2025, 17, 3226. [Google Scholar] [CrossRef]
- Kabanikhin, S.I. Definitions and examples of inverse and ill-posed problems. J. Inverse Ill-Posed Probl. 2008, 16, 317–357. [Google Scholar] [CrossRef]
- Cannon, J.R.; DuChateau, P. Structural identification of an unknown source term in a heat equation. Inverse Probl. 1998, 14, 535–551. [Google Scholar] [CrossRef]
- Feng, P.; Karimov, E.T. Inverse source problems for time-fractional mixed parabolic-hyperbolic-type equations. J. Inverse Ill-Posed Probl. 2015, 23, 339–353. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Stosur, M.; Szewczyk, M.; Sowa, K.; Dawidowski, P.; Balcerek, P. Thermal behaviour analyses of gas-insulated switchgear compartment using thermal network method. IET Gener. Transm. Distrib. 2016, 10, 2833–2841. [Google Scholar] [CrossRef]
- Cai, X.; Li, D.; Zou, Y.; Liu, Z.; Heidari, A.A.; Chen, H. A hybrid wind speed forecasting model with rolling mapping decomposition and temporal convolutional networks. Energy 2025, 324, 135673. [Google Scholar] [CrossRef]
- Wang, W.; Guo, H.; Liu, S.; Xin, Y.; Li, G.; Wang, Y. Dynamic-parameter physics-informed neural networks for short-term photovoltaic power prediction: Integrating physics-informed and data driven. Appl. Energy 2025, 401, 126764. [Google Scholar] [CrossRef]
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), Virtual, 2–9 February 2021; Volume 35, pp. 11106–11115. [Google Scholar] [CrossRef]
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430. [Google Scholar]
- Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. In Proceedings of the 2023 International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023; Available online: https://openreview.net/forum?id=Jbdc0vTOcol (accessed on 10 March 2026).
- Zeng, A.; Chen, M.; Zhang, L.; Qi, Q. Are transformers effective for time series forecasting? In Proceedings of the 2023 AAAI Conference on Artificial Intelligence, Washington DC, USA, 7–8 February 2023; Volume 37, pp. 11121–11128. [Google Scholar] [CrossRef]
- Ekambaram, V.; Jati, A.; Nguyen, N.; Sinthong, P.; Kalagnanam, J. TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; Volume 29, pp. 459–469. [Google Scholar] [CrossRef]
- Gui, Y.; Yin, C.; Liu, R.; Dai, H.; He, L.; Zhao, J.; Ma, Q.; Zhong, C. An Edge-Enabled Lightweight LSTM for the Temperature Prediction of Electrical Joints in Low-Voltage Distribution Cabinets. Sensors 2025, 25, 6816. [Google Scholar] [CrossRef]
| Parameter | Range |
|---|---|
| Hidden Units | 16~64 |
| Layers | 1~3 |
| Initial Learning Rate | 1 × 10⁻⁴ ~ 1 × 10⁻³ |
| Training Epochs | 100~2000 |
| Batch Size | 32~128 |
| Window Length | 6 |
| | 1 × 10⁻³ |
| Parameter | Value |
|---|---|
| | 300 |
| | 10 |
| | 0.3 |
| | 0.5 |
| | 1.5 |
| | 2.0 |
| Horizon H (steps) | MAE | MAPE (%) | MSE | RMSE |
|---|---|---|---|---|
| 180 | 0.281 | 0.939 | 0.100 | 0.316 |
| 360 | 0.774 | 2.102 | 1.405 | 1.185 |
| 540 | 1.260 | 3.310 | 2.923 | 1.710 |
| 720 | 1.079 | 2.895 | 2.285 | 1.512 |
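For reference, the four error metrics reported in the tables can be computed with their standard definitions (the paper may additionally average over horizons or repeated runs):

```python
import numpy as np

def metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Return MAE, MAPE (%), MSE, and RMSE for a pair of trajectories."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(y_true))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    return mae, mape, mse, rmse
```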
| Error Interval | Hybrid-PI-LSTM | LSTM | Ridge Regression | CNN | GRU | DLinear | PatchTST | TSMixer | Informer | Autoformer |
|---|---|---|---|---|---|---|---|---|---|---|
| [−1.5, −1.3) | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
| [−1.3, −1.1) | 0 | 0 | 0 | 80 | 0 | 0 | 0 | 0 | 0 | 0 |
| [−1.1, −0.9) | 0 | 1 | 0 | 102 | 0 | 0 | 0 | 0 | 0 | 0 |
| [−0.9, −0.7) | 0 | 59 | 5 | 122 | 0 | 0 | 0 | 0 | 0 | 2 |
| [−0.7, −0.5) | 60 | 197 | 14 | 115 | 0 | 0 | 27 | 0 | 27 | 74 |
| [−0.5, −0.3) | 145 | 325 | 36 | 83 | 0 | 31 | 280 | 24 | 379 | 256 |
| [−0.3, −0.1) | 369 | 130 | 87 | 46 | 15 | 90 | 272 | 134 | 292 | 187 |
| [−0.1, 0.1) | 146 | 8 | 164 | 39 | 81 | 79 | 95 | 190 | 22 | 82 |
| [0.1, 0.3) | 0 | 0 | 140 | 122 | 176 | 249 | 44 | 157 | 0 | 115 |
| [0.3, 0.5) | 0 | 0 | 46 | 9 | 134 | 200 | 2 | 68 | 0 | 4 |
| [0.5, 0.7) | 0 | 0 | 46 | 0 | 85 | 70 | 0 | 55 | 0 | 0 |
| [0.7, 0.9) | 0 | 0 | 96 | 0 | 93 | 1 | 0 | 42 | 0 | 0 |
| [0.9, 1.1) | 0 | 0 | 42 | 0 | 53 | 0 | 0 | 38 | 0 | 0 |
| [1.1, 1.3) | 0 | 0 | 31 | 0 | 38 | 0 | 0 | 10 | 0 | 0 |
| [1.3, 1.5) | 0 | 0 | 13 | 0 | 29 | 0 | 0 | 2 | 0 | 0 |
| Model | MAE (H=180) | MAE (H=360) | MAE (H=540) | MAE (H=720) | MAPE % (H=180) | MAPE % (H=360) | MAPE % (H=540) | MAPE % (H=720) |
|---|---|---|---|---|---|---|---|---|
| Hybrid-PI-LSTM | 0.074 | 0.205 | 0.266 | 0.230 | 0.240 | 0.571 | 0.711 | 0.628 |
| PatchTST | 0.196 | 0.304 | 0.282 | 0.299 | 0.642 | 0.853 | 0.761 | 0.837 |
| Autoformer | 0.224 | 0.261 | 0.281 | 0.316 | 0.745 | 0.772 | 0.844 | 0.893 |

| Model | RMSE (H=180) | RMSE (H=360) | RMSE (H=540) | RMSE (H=720) | MSE (H=180) | MSE (H=360) | MSE (H=540) | MSE (H=720) |
|---|---|---|---|---|---|---|---|---|
| Hybrid-PI-LSTM | 0.097 | 0.260 | 0.333 | 0.296 | 0.009 | 0.067 | 0.111 | 0.088 |
| PatchTST | 0.227 | 0.341 | 0.329 | 0.331 | 0.051 | 0.116 | 0.109 | 0.110 |
| Autoformer | 0.266 | 0.303 | 0.347 | 0.361 | 0.071 | 0.092 | 0.120 | 0.130 |
| Model | Average Latency (ms/Step) | Occupancy Rate (% of 2 min Interval) |
|---|---|---|
| Hybrid-PI-LSTM | 8.101 | 0.00675% |
| LSTM | 0.297 | 0.00025% |
| Ridge Regression | 0.055 | 0.00005% |
| CNN | 0.173 | 0.00014% |
| GRU | 0.223 | 0.00019% |
| DLinear | 0.349 | 0.00029% |
| PatchTST | 0.586 | 0.00049% |
| TSMixer | 0.332 | 0.00028% |
| Informer | 0.370 | 0.00031% |
| Autoformer | 0.362 | 0.00030% |
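The occupancy rates in the table are consistent with interpreting them as the per-step inference latency expressed as a fraction of the 2 min (120,000 ms) sampling interval mentioned in Section 5.2; a quick check:

```python
# Occupancy rate = per-step inference latency / sampling interval,
# reproducing the table's values (e.g., 8.101 ms -> 0.00675%).
SAMPLING_INTERVAL_MS = 2 * 60 * 1000  # 2 min sampling interval

def occupancy_pct(latency_ms: float) -> float:
    return 100.0 * latency_ms / SAMPLING_INTERVAL_MS

print(f"{occupancy_pct(8.101):.5f}%")  # Hybrid-PI-LSTM -> 0.00675%
```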
| Step | ||||||
|---|---|---|---|---|---|---|
| 60 | 409.89 | 11.930 | 0.4418 | 0.3176 | 1.2011 | 2.9077 |
| 120 | 418.14 | 12.169 | 0.4615 | 0.2977 | 1.2210 | 2.9663 |
| 180 | 426.56 | 12.412 | 0.4810 | 0.2777 | 1.2410 | 3.0261 |
| 240 | 435.15 | 12.660 | 0.5006 | 0.2578 | 1.2609 | 3.0871 |
| 300 | 443.91 | 12.915 | 0.5205 | 0.2379 | 1.2809 | 3.1493 |
| 360 | 452.85 | 13.175 | 0.5403 | 0.2179 | 1.3008 | 3.2128 |
| 420 | 461.97 | 13.440 | 0.5602 | 0.1980 | 1.3207 | 3.2776 |
| 480 | 471.28 | 13.603 | 0.5733 | 0.1780 | 1.3407 | 3.3376 |
| 540 | 480.77 | 13.336 | 0.5535 | 0.1581 | 1.3606 | 3.2718 |
| 600 | 490.45 | 13.075 | 0.5338 | 0.1381 | 1.3806 | 3.2073 |
| 660 | 500.33 | 12.819 | 0.5141 | 0.1182 | 1.4005 | 3.1441 |
| 720 | 510.41 | 12.578 | 0.4957 | 0.0983 | 1.4205 | 3.0833 |
| Step | ||||||
|---|---|---|---|---|---|---|
| 60 | 453.75 | 14.042 | 0.4116 | 0.3030 | 1.2726 | 2.8508 |
| 120 | 462.88 | 14.324 | 0.4313 | 0.2838 | 1.2925 | 2.9083 |
| 180 | 472.19 | 14.611 | 0.4511 | 0.2639 | 1.3125 | 2.9669 |
| 240 | 481.70 | 14.905 | 0.4708 | 0.2440 | 1.3324 | 3.0267 |
| 300 | 491.40 | 15.205 | 0.4907 | 0.2241 | 1.3523 | 3.0877 |
| 360 | 501.28 | 15.511 | 0.5105 | 0.2041 | 1.3723 | 3.1500 |
| 420 | 511.37 | 15.823 | 0.5304 | 0.1842 | 1.3922 | 3.2135 |
| 480 | 521.65 | 16.142 | 0.5503 | 0.1643 | 1.4121 | 3.2782 |
| 540 | 532.15 | 16.467 | 0.5702 | 0.1443 | 1.4321 | 3.3443 |
| 600 | 542.86 | 16.798 | 0.5900 | 0.1244 | 1.4520 | 3.4118 |
| 660 | 553.79 | 17.136 | 0.6098 | 0.1045 | 1.4719 | 3.4805 |
| 720 | 564.94 | 17.478 | 0.6294 | 0.0845 | 1.4919 | 3.5505 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Luo, X.; Yang, L.; Zhang, X.; Chen, Y.; Zhang, Z. Physics-Informed LSTM with Adaptive Parameter Updating for Non-Stationary Time Series: A Case Study on Disconnector Health Monitoring. Mathematics 2026, 14, 970. https://doi.org/10.3390/math14060970