# Improving Thermochemical Energy Storage Dynamics Forecast with Physics-Inspired Neural Network Architecture


## Abstract


The calcium oxide/calcium hydroxide (CaO/Ca(OH)$_{2}$) system is a promising energy storage technology with relatively high energy density and low cost. However, the existing models available to predict the system’s internal states are computationally expensive. An accurate, real-time-capable model is therefore still required to improve its operational control. In this work, we implement a Physics-Informed Neural Network (PINN) to predict the dynamics of the TCES internal state. Our proposed framework addresses three physical aspects in building the PINN: (1) we choose a Nonlinear Autoregressive Network with Exogenous Inputs (NARX) with deeper recurrence to address the nonlinear latency; (2) we train the network in closed loop to capture the long-term dynamics; and (3) we incorporate physical regularisation during training, calculated from discretised mole and energy balance equations. To train the network, we perform numerical simulations on an ensemble of system parameters to obtain synthetic data. Although the suggested approach yields an error of $3.96\times {10}^{-4}$, in the same range as the result without physical regularisation, it is superior to conventional Artificial Neural Network (ANN) strategies because it ensures physical plausibility of the predictions, even in a highly dynamic and nonlinear problem. Consequently, the suggested PINN can be further developed for more complicated analyses of the TCES system.

## 1. Introduction

#### 1.1. Thermochemical Energy Storage

For carbonate-based systems, the CO$_{2}$ released as a side product of the reaction has to be liquefied, which results in a high parasitic loss [6,14]. Additionally, there are many more criteria to consider, such as cyclability, reaction kinetics, energy density and, most importantly, safety issues. For comprehensive reviews of the various storage materials, we refer to [14,15,16].

Several experiments have been performed on the calcium oxide/calcium hydroxide (CaO/Ca(OH)$_{2}$) system. One experiment investigated the material parameters (such as heat capacity and density) and the reaction kinetics [17], another focused on the operating range, efficiency and cycling stability of the system [18], and a further experiment addressed the feasibility of integration with concentrated solar power plants [19]. All these experiments show that CaO/Ca(OH)$_{2}$ is a very promising TCES storage material. Furthermore, it is more attractive than other storage materials because it is nontoxic, relatively cheap and widely available [20,21].

The system stores heat (is charged) during the dehydration of Ca(OH)$_{2}$, driven by injecting dry air at a higher temperature. Charging results in an endothermic reaction along with the formation of H$_{2}$O vapour and a lower temperature at the outlet. The system releases heat (is discharged) during the hydration of CaO. This is achieved by injecting air with higher humidity (H$_{2}$O content) and a relatively lower temperature, resulting in an exothermic reaction (see Figure 1). Note that the hydration process occurs at a lower temperature than the dehydration process, but both processes occur at high operating temperatures [22]. The reversible reaction is written as:
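The reaction equation that followed this colon did not survive extraction. The standard reversible reaction for this material pair, written here as a reconstruction (the reaction enthalpy $\Delta H$ is on the order of ${10}^{5}$ J/mol, consistent with the sampling range in Table A1), is:

```latex
\mathrm{Ca(OH)_2\,(s)} + \Delta H \;\rightleftharpoons\; \mathrm{CaO\,(s)} + \mathrm{H_2O\,(g)}
```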

#### 1.2. Physics-Inspired Artificial Neural Networks

#### 1.3. Approach and Contributions

The remainder of this paper is organised as follows. Section 2 describes the governing equations of the CaO/Ca(OH)$_{2}$ system and the NARX structure, as well as how we implement the physical knowledge in the regularisation. In Section 3, we discuss the results of our tests, and Section 4 concludes the work.

## 2. Materials and Methods

#### 2.1. Governing Equations

We consider a CaO/Ca(OH)$_{2}$ TCES lab-scale reactor of 80 mm length along the flow direction, as described in [20]. Assuming the system properties and parameters to be homogeneous, the simulation was conducted in 1D. The system was modelled as a nonisothermal, single-phase, multicomponent gas flow in porous media with the chemical reaction acting as source/sink terms, and it can be described using mole and energy balance equations. The inlet temperature and outlet pressure were fixed with Dirichlet boundary conditions, and Neumann conditions were used to define the gas injection rates. The solid components forming the porous material are CaO and Ca(OH)$_{2}$, and the gases are H$_{2}$O and N$_{2}$. The latter serves as an inert component to regulate the H$_{2}$O mole fraction in the injected gas. A full, detailed explanation can be found in [20]; we offer only a brief overview in this section.

The molar amounts of the gas components N$_{2}$ and H$_{2}$O evolve as defined in the mole balance equation:
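The balance equations themselves were lost during extraction. The following is only a sketch consistent with the symbols in the nomenclature (Darcy flux $\frac{K}{\mu}\nabla p$, effective diffusion coefficient $D$, molar density ${\rho}_{n}$, solid volume fractions ${\nu}_{s}$); the exact formulation, including the reaction source terms $q^{\kappa}$ and $q^{h}$, is given in [20]:

```latex
% Mole balance for gas component \kappa \in \{ \mathrm{H_2O}, \mathrm{N_2} \} (sketch)
\frac{\partial \left( \varphi\, \rho_n\, x_g^{\kappa} \right)}{\partial t}
- \nabla \cdot \left( \rho_n\, x_g^{\kappa}\, \frac{K}{\mu}\, \nabla p
+ D\, \rho_n\, \nabla x_g^{\kappa} \right) = q^{\kappa}

% Energy balance (sketch): gas internal energy + solid sensible heat,
% advective enthalpy flux and heat conduction
\frac{\partial}{\partial t} \left( \varphi\, \rho_n\, u_g
+ \sum_{s} \nu_s\, \rho_{m,s}\, c_{p,s}\, T \right)
- \nabla \cdot \left( \rho_n\, h_g\, \frac{K}{\mu}\, \nabla p \right)
- \nabla \cdot \left( \lambda_{eff}\, \nabla T \right) = q^{h}
```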

Here, ${x}_{g}$ is the mole fraction of the gas components (H$_{2}$O and N$_{2}$), T is temperature, ${h}_{g}$ is the gas specific enthalpy and ${\lambda}_{eff}$ is the average thermal conductivity of the solid materials and gas components.

During the formation of Ca(OH)$_{2}$ from CaO and H$_{2}$O in the hydration process, energy is released into the system. Correspondingly, a decrease in the molar amounts of CaO and H$_{2}$O (and an increase in the molar amount of Ca(OH)$_{2}$) results in a positive source term. The opposite holds for the dehydration process.

#### 2.2. Input and Output Variables

The synthetic data were generated with DuMu$^{x}$ (Distributed and Unified Numerics Environment for Multi-{Phase, Component, Scale, Physics,...} [60]). As input to the simulator, we need the material parameters, such as the CaO density (${\rho}_{CaO}$), Ca(OH)$_{2}$ density (${\rho}_{Ca{\left(OH\right)}_{2}}$), CaO specific heat capacity (${c}_{p,CaO}$), Ca(OH)$_{2}$ specific heat capacity (${c}_{p,Ca{\left(OH\right)}_{2}}$), CaO thermal conductivity (${\lambda}_{CaO}$) and Ca(OH)$_{2}$ thermal conductivity (${\lambda}_{Ca{\left(OH\right)}_{2}}$); the porous medium parameters, such as the absolute permeability (K) and porosity ($\varphi$); the reaction kinetics parameters, such as the reaction rate constant (${k}_{R}$) and specific reaction enthalpy ($\Delta H$); and the initial and boundary conditions, such as the N$_{2}$ molar inflow rate (${\dot{n}}_{{N}_{2},in}$), H$_{2}$O molar inflow rate (${\dot{n}}_{{H}_{2}O,in}$), initial pressure (${p}_{init}$), outlet pressure (${p}_{out}$), initial temperature (${T}_{init}$), inlet temperature (${T}_{in}$) and initial H$_{2}$O mole fraction (${x}_{{H}_{2}O,init}$).

As outputs, we selected the system's internal states: the pressure p, the temperature T, the CaO volume fraction ${\nu}_{CaO}$ and the H$_{2}$O mole fraction ${x}_{{H}_{2}O}$. The behaviour of these variables, especially p, is highly nonlinear. Therefore, it is interesting to examine the ANN predictions for these nonlinear variables. Additionally, these variables are important for understanding the system. Consequently, our main output variables of interest are collected in the following vector $\mathit{y}$ as a function of time t:
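The definition of $\mathit{y}(t)$ following the colon was lost in extraction; based on the four states discussed in this section (and later constrained in Table 1), a plausible reconstruction (ordering assumed) is:

```latex
\mathit{y}(t) = \left[ \, p(t), \; T(t), \; \nu_{CaO}(t), \; x_{H_2O}(t) \, \right]^{\top}
```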

#### 2.3. Aligning the ANN Structure with Physical Knowledge of the System

#### 2.4. Physical Constraints in the Training Objective Function

As the first physical constraint, we use the H$_{2}$O mole balance, because it has the most complete set of storage, flux and source/sink terms (the solid components are assumed to be immobile, and N$_{2}$ is inert). The mole balance error can be written as:
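The mole balance error equation itself did not survive extraction; a form consistent with the four terms named in the surrounding text and in Appendix B (sign convention assumed) is:

```latex
e_{MB}(i,t) \;=\; \Delta n_{H_2O,in}(i,t) \;-\; \Delta n_{H_2O,out}(i,t)
\;-\; \Delta n_{H_2O,sto}(i,t) \;-\; \Delta n_{H_2O,q}(i,t)
```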

Here, n denotes the molar amount of H$_{2}$O, and the subscripts $out$, $in$, $sto$ and q denote the outflow, inflow, storage and source/sink terms, respectively. The mole balance error was used as a constraint ${e}_{phy,i,t,1}$ and is equal to 0 if the mole balance is fulfilled. Including this equation as a regularisation term penalises the network if the mole balance is not satisfied. Similarly, the corresponding energy balance equation also has to be fulfilled:
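The energy balance error equation was likewise lost. By analogy with the mole balance, a sketch is given below; the $\Delta E$ terms denote enthalpy in- and outflow, storage change and reaction source, and these symbol names are our assumption:

```latex
e_{EB}(i,t) \;=\; \Delta E_{in}(i,t) \;-\; \Delta E_{out}(i,t)
\;-\; \Delta E_{sto}(i,t) \;-\; \Delta E_{q}(i,t)
```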

#### 2.5. Obtaining Optimum Network Parameters

## 3. Results and Discussion

Each sample of the exogenous input ensemble (see Appendix A) was passed to DuMu$^{x}$ and simulated until $t=5000$ s to obtain an ensemble of target data $\mathit{y}\left(t\right)$. The governing equations are provided in Section 2.1. White noise was then added to these targets by generating normally distributed random numbers with zero mean and a standard deviation of 0.05 times the target values. Lastly, both the exogenous inputs and the targets were normalised to the range [−1, 1] to improve the stability of the training [71]. We then set up the NARX ANN as described in Section 2. The training was conducted using the built-in NARX functions in the MATLAB Neural Network Toolbox [63], with the loss function calculation modified based on the equations provided in Section 2.5. Training was conducted in batch mode for both the dehydration and hydration processes, with a total of 100 training datasets.
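The noise and normalisation steps can be sketched in a few lines. The original work used MATLAB, so this Python version is only illustrative; the function name and the min-max scaling formula are our assumptions:

```python
import numpy as np

def add_noise_and_normalise(targets, noise_frac=0.05, seed=0):
    """Add white noise (std = noise_frac * |target|, zero mean) and
    scale each output variable to [-1, 1], column by column.

    targets: array of shape (n_time_steps, n_outputs).
    Returns the scaled targets and the (min, max) pair for rescaling.
    """
    rng = np.random.default_rng(seed)
    # zero-mean Gaussian noise, std proportional to the target magnitude
    noisy = targets + rng.normal(0.0, noise_frac * np.abs(targets))
    lo = noisy.min(axis=0, keepdims=True)
    hi = noisy.max(axis=0, keepdims=True)
    scaled = 2.0 * (noisy - lo) / (hi - lo) - 1.0  # map each column to [-1, 1]
    return scaled, (lo, hi)

# Example: a toy ensemble of two target trajectories (pressure-like, temperature-like)
y = np.linspace(1.0, 2.0, 50)[:, None] * np.array([1e5, 600.0])
y_scaled, (lo, hi) = add_noise_and_normalise(y)
```

The stored (min, max) pair allows the predictions to be mapped back to physical units after training.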

#### 3.1. Influence of Feedback Delay

#### 3.2. SP Versus P Training Structure

#### 3.3. Physical Regularisation Improves Plausibility

- only MSE as the loss function (“MSE”), that is $L\left(\theta \right)={E}_{D}$,
- MSE combined with only L2 (“MSE + L2”), that is $L\left(\theta \right)=\alpha {E}_{\theta}+\beta {E}_{D}$,
- MSE combined with only physical regularisation (“MSE + PHY”), that is $L\left(\theta \right)=\beta {E}_{D}+{\sum}_{k}{\lambda}_{k}{E}_{phy,k}$, and
- MSE combined with both L2 and physical regularisation (“MSE + L2 + PHY”), that is $L\left(\theta \right)=\alpha {E}_{\theta}+\beta {E}_{D}+{\sum}_{k}{\lambda}_{k}{E}_{phy,k}$.
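The four variants above differ only in which terms of one combined loss are switched on. A minimal Python sketch follows (array shapes and names are assumptions; the paper computes these terms inside the MATLAB toolbox):

```python
import numpy as np

def combined_loss(y_hat, y, theta, phys_errors, alpha=0.0, beta=1.0, lambdas=None):
    """L(theta) = alpha * E_theta + beta * E_D + sum_k lambda_k * E_phy,k.

    y_hat, y     : predicted and observed outputs
    theta        : flattened network parameters (weights and biases)
    phys_errors  : list of arrays e_phy,k, one per physical constraint k
    Setting alpha=0 and lambdas=None recovers the plain "MSE" variant.
    """
    E_D = np.mean((y_hat - y) ** 2)       # data-related error
    E_theta = np.mean(theta ** 2)         # L2 / weight-decay term
    loss = alpha * E_theta + beta * E_D
    if lambdas is not None:
        for lam, e in zip(lambdas, phys_errors):
            loss += lam * np.mean(e ** 2)  # physical regularisation term
    return loss

l_mse = combined_loss(np.array([1.0, 2.0]), np.array([1.0, 1.0]), np.zeros(3), [])
```

With $\alpha$, $\beta$ and ${\lambda}_{k}$ exposed as arguments, all four regularisation strategies are instances of the same function.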

This holds in particular for the predicted mole fraction of H$_{2}$O. One important aspect to consider is that the ANN was trained using only 100 training datasets, compared to the almost 500 parameters inside the network. This makes the optimisation problem ill-posed, leading to clear overfitting in the networks trained with “MSE” and “MSE + L2”. Physical regularisation tackles this problem even for relatively sparse training data, which is valuable when experiments are costly and, therefore, little data are available to train the network.

## 4. Conclusions

In this work, we implemented a PINN to forecast the CaO/Ca(OH)$_{2}$ TCES system internal states. Further work is required for more sophisticated analyses of the system, for example with spatial distribution of the internal states, dynamic exogenous inputs and uncertainty quantification of the predictions.

## Availability of Data and Materials

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

Abbreviations:

| Abbreviation | Meaning |
|---|---|
| ANN | Artificial Neural Network |
| BR | Bayesian Regularisation |
| LM | Levenberg-Marquardt |
| MSE | Mean Squared Error |
| NARX | Nonlinear Autoregressive Network with Exogenous Inputs |
| ODE | Ordinary Differential Equation |
| P | Parallel (network structure) |
| PDE | Partial Differential Equation |
| PINN | Physics-Inspired Neural Network |
| RNN | Recurrent Neural Network |
| SP | Series-Parallel (network structure) |
| TCES | Thermochemical Energy Storage |

TCES-related parameters:

| Symbol | Meaning |
|---|---|
| $\Delta H$ | Reaction enthalpy |
| ${\lambda}_{eff}$ | Average thermal conductivity |
| $\mathsf{\mu}$ | Viscosity |
| ${\nu}_{s}$ | Solid volume fraction |
| $\varphi$ | Porosity |
| ${\rho}_{m}$ | Mass density |
| ${\rho}_{n}$ | Molar density |
| t | Time |
| ${c}_{p}$ | Specific heat capacity |
| D | Effective diffusion coefficient |
| h | Specific enthalpy |
| K | Permeability |
| ${k}_{R}$ | Reaction constant |
| p | Pressure |
| q | Source/sink term |
| T | Temperature |
| u | Specific internal energy |
| ${x}_{g}$ | Gas molar fraction |

ANN-related parameters:

| Symbol | Meaning |
|---|---|
| $\alpha$ | Normalising constant of the L2 regularisation term |
| $\beta$ | Normalising constant of the data-related error |
| $\mathit{H}$ | Hessian matrix |
| $\mathit{I}$ | Identity matrix |
| $\mathit{J}$ | Jacobian matrix |
| $\widehat{y}\left(t\right)$ | Predicted value at time t |
| $\lambda$ | Normalising constant of the physical error |
| $\mathsf{\mu}$ | Damping parameter for the LM algorithm |
| $\theta$ | Network parameter |
| b | Network bias |
| ${d}_{y}$ | Feedback delay |
| ${E}_{D}$ | Data-related error |
| ${E}_{\theta}$ | Mean squared value of the network parameters |
| ${E}_{phy}$ | Physical error |
| ${L}_{\theta}$ | Loss function |
| N | Number of network parameters |
| n | Number of training samples |
| ${n}_{t}$ | Number of time steps |
| u | Exogenous input |
| w | Network weight |
| $y\left(t\right)$ | Observed value at time t |

## Appendix A. List of Exogeneous Input and Its Distribution

**Table A1.** Input distributions for the exogenous inputs, with $\mathsf{\mu}$ and $\sigma$ being the mean and standard deviation used to generate the data, respectively; the superscripts D and H refer to the dehydration and hydration processes, respectively.

Exogenous inputs with normal distribution:

| u | Unit | ${\mathsf{\mu}}^{D}$ | ${\sigma}^{D}$ | ${\mathsf{\mu}}^{H}$ | ${\sigma}^{H}$ |
|---|---|---|---|---|---|
| ${\rho}_{CaO}$ | kg/m$^{3}$ | 1656 | 25 | 1656 | 25 |
| ${\rho}_{Ca{\left(OH\right)}_{2}}$ | kg/m$^{3}$ | 2200 | 25 | 2200 | 25 |
| ${p}_{init}$, ${p}_{out}$ | Pa | $1\times {10}^{1}$ | $2.3\times {10}^{3}$ | $2\times {10}^{5}$ | $2.3\times {10}^{3}$ |
| ${T}_{init}$ | K | 573.15 | 20 | 773.15 | 20 |
| ${T}_{in}$ | K | 773.15 | 20 | 573.15 | 20 |
| ${\dot{n}}_{{N}_{2},in}$ | mol/(s·m) | 4.632 | 0.25 | 2.04 | 0.15 |
| ${\dot{n}}_{{H}_{2}O,in}$ | mol/(s·m) | 0.072 | 0.01 | 1.782 | 0.15 |

Exogenous inputs with lognormal distribution:

| u | Unit | ${\mathsf{\mu}}^{D}$ | ${\sigma}^{D}$ | ${\mathsf{\mu}}^{H}$ | ${\sigma}^{H}$ |
|---|---|---|---|---|---|
| K | mD | $\log(5\times {10}^{3})$ | 0.525 | $\log(5\times {10}^{3})$ | 0.525 |
| ${k}_{R}$ | - | $\log(0.05)$ | 0.5 | $\log(0.2)$ | 0.5 |

Exogenous inputs with shifted and scaled beta distribution:

| u | Unit | a | b | scale | shift |
|---|---|---|---|---|---|
| ${c}_{p,CaO}$ | J/(kg·K) | 7.1 | 2.9 | 300 | 700 |
| ${c}_{p,Ca{\left(OH\right)}_{2}}$ | J/(kg·K) | 7.6 | 2.4 | 350 | 1250 |
| ${\lambda}_{CaO}$ | W/(m·K) | 6.5 | 3.5 | 0.6 | - |
| ${\lambda}_{Ca{\left(OH\right)}_{2}}$ | W/(m·K) | 6.5 | 3.5 | 0.6 | - |
| $\varphi$ | - | 8.5 | 1.5 | 0.825 | - |
| $\Delta H$ | J/mol | 4.8 | 5.2 | $3\times {10}^{4}$ | $9\times {10}^{4}$ |
| ${x}_{{H}_{2}O,init}$ | - | 76 | 85 | - | - |

## Appendix B. Mole and Energy Balance Error

The mole balance compares the change in the molar amount of H$_{2}$O (assuming that the density can be calculated with the ideal gas law) with the in- and outflowing moles ${n}_{{H}_{2}O,in}$ and ${n}_{{H}_{2}O,out}$, the storage term in the gaseous phase $\Delta {n}_{{H}_{2}O,sto}$ and the source/sink term $\Delta {n}_{{H}_{2}O,q}$. The in- and outflowing moles of H$_{2}$O are both known values from the simulation or input data. The storage term $\Delta {n}_{{H}_{2}O,sto}$ can be calculated from the change in the H$_{2}$O molar fraction, ${\widehat{x}}_{g,{H}_{2}O}\left(t\right)-{\widehat{x}}_{g,{H}_{2}O}(t-1)$, multiplied with the H$_{2}$O molar density and the pore volume. The complete definition is written as:

Whenever CaO is consumed or produced by the reaction, a corresponding molar amount of H$_{2}$O is also formed. The molar amount of CaO is determined by the change in the CaO volume fraction, ${\widehat{\nu}}_{CaO}\left(t\right)-{\widehat{\nu}}_{CaO}(t-1)$, multiplied with the molar density and the volume. The calculation for $\Delta {n}_{{H}_{2}O,q}$ is written as:

The energy stored in the solid phase is calculated from the CaO and Ca(OH)$_{2}$ mass multiplied by the temperature and specific heat capacity. The definition is written as:

## Appendix C. Example of the ANN Prediction

**Figure A1.** An example of the best prediction sample (red), obtained using two hidden layers with 15 and 8 nodes, respectively, and the reference solution obtained from the physical model (blue).

## References

- Haas, J.; Cebulla, F.; Cao, K.; Nowak, W.; Palma-Behnke, R.; Rahmann, C.; Mancarella, P. Challenges and trends of energy storage expansion planning for flexibility provision in power systems—A review. Renew. Sustain. Energy Rev.
**2017**, 80, 603–619. [Google Scholar] [CrossRef][Green Version] - Møller, K.T.; Williamson, K.; Buckleyand, C.E.; Paskevicius, M. Thermochemical energy storage properties of a barium based reactive carbonate composite. J. Mater. Chem.
**2020**, 8, 10935–10942. [Google Scholar] [CrossRef] - Yuan, Y.; Li, Y.; Zhao, J. Development on Thermochemical Energy Storage Based on CaO-Based Materials: A Review. Sustainability
**2018**, 10, 2660. [Google Scholar] [CrossRef][Green Version] - Pardo, P.; Deydier, A.; Anxionnaz-Minvielle, Z.; Rougé, S.; Cabassud, M.; Cognet, P. A review on high temperature thermochemical heat energy storage. Renew. Sustain. Energy Rev.
**2014**, 32, 591–610. [Google Scholar] [CrossRef][Green Version] - Scapino, L.; Zondag, H.; Van Bael, J.; Diriken, J.; Rindt, C. Energy density and storage capacity cost comparison of conceptual solid and liquid sorption seasonal heat storage systems for low-temperature space heating. Renew. Sustain. Energy Rev.
**2017**, 76, 1314–1331. [Google Scholar] [CrossRef] - Schaube, F.; Wörner, A.; Tamme, R. High Temperature Thermochemical Heat Storage for Concentrated Solar Power Using Gas-Solid Reactions. J. Sol. Energy Eng.
**2011**, 133, 7. [Google Scholar] [CrossRef] - Carrillo, A.; Serrano, D.; Pizarro, P.; Coronado, J. Thermochemical heat storage based on the Mn
_{2}O_{3}/Mn_{3}O_{4}redox couple: Influence of the initial particle size on the morphological evolution and cyclability. J. Mater. Chem.**2014**, 2, 19435–19443. [Google Scholar] [CrossRef] - Carrillo, A.; Moya, J.; Bayón, A.; Jana, P.; de la Peña O’Shea, V.; Romero, M.; Gonzalez-Aguilar, J.; Serrano, D.; Pizarro, P.; Coronado, J. Thermochemical energy storage at high temperature via redox cycles of Mn and Co oxides: Pure oxides versus mixed ones. Sol. Energy Mater. Sol. Cells
**2014**, 123, 47–57. [Google Scholar] [CrossRef] - Carrillo, A.; Sastre, D.; Serrano, D.; Pizarro, P.; Coronado, J. Revisiting the BaO
_{2}/BaO redox cycle for solar thermochemical energy storage. Phys. Chem. Chem. Phys.**2016**, 18, 8039–8048. [Google Scholar] [CrossRef] - Muthusamy, J.P.; Calvet, N.; Shamim, T. Numerical Investigation of a Metal-oxide Reduction Reactor for Thermochemical Energy Storage and Solar Fuel Production. Energy Procedia
**2014**, 61, 2054–2057. [Google Scholar] [CrossRef][Green Version] - Block, T.; Knoblauch, N.; Schmücker, M. The cobalt-oxide/iron-oxide binary system for use as high temperature thermochemical energy storage material. Thermochim. Acta
**2014**, 577, 25–32. [Google Scholar] [CrossRef] - Michel, B.; Mazet, N.; Neveu, P. Experimental investigation of an innovative thermochemical process operating with a hydrate salt and moist air for thermal storage of solar energy: Global performance. Appl. Energy
**2014**, 129, 177–186. [Google Scholar] [CrossRef][Green Version] - Uchiyama, N.; Takasu, H.; Kato, Y. Cyclic durability of calcium carbonate materials for oxide/water thermo-chemical energy storage. Appl. Therm. Eng.
**2019**, 160, 113893. [Google Scholar] [CrossRef] - Yan, T.; Wang, R.; Li, T.; Wang, L.; Fred, I. A review of promising candidate reactions for chemical heat storage. Renew. Sustain. Energy Rev.
**2015**, 43, 13–31. [Google Scholar] [CrossRef] - Zhang, H.; Baeyens, J.; Cáceres, G.; Degréve, J.; Lv, Y. Thermal energy storage: Recent developments and practical aspects. Prog. Energy Combust. Sci.
**2016**, 53, 1–40. [Google Scholar] [CrossRef] - André, L.; Abanades, S.; Flamant, G. Screening of thermochemical systems based on solid-gas reversible reactions for high temperature solar thermal energy storage. Renew. Sustain. Energy Rev.
**2016**, 64, 703–715. [Google Scholar] [CrossRef] - Schaube, F.; Koch, L.; Wörner, A.; Müller-Steinhagen, H. A thermodynamic and kinetic study of the de- and rehydration of Ca(OH)
_{2}at high H_{2}O partial pressures for thermo-chemical heat storage. Thermochim. Acta**2012**, 538, 9–20. [Google Scholar] [CrossRef] - Schaube, F.; Kohzer, A.; Schütz, J.; Wörner, A.; Müller-Steinhagen, H. De- and rehydration of Ca(OH)
_{2}in a reactor with direct heat transfer for thermo-chemical heat storage. Part A: Experimental results. Chem. Eng. Res. Des.**2013**, 91, 856–864. [Google Scholar] [CrossRef] - Schmidt, M.; Gutierrez, A.; Linder, M. Thermochemical energy storage with CaO/Ca(OH)
_{2}- Experimental investigation of the thermal capability at low vapor pressures in a lab scale reactor. Appl. Energy**2017**, 188, 672–681. [Google Scholar] [CrossRef] - Shao, H.; Nagel, T.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O. Non-equilibrium thermo-chemical heat storage in porous media: Part 2—A 1D computational model for a calcium hydroxide reaction system. Energy
**2013**, 60, 271–282. [Google Scholar] [CrossRef] - Nagel, T.; Shao, H.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O. The influence of gas-solid reaction kinetics in models of thermochemical heat storage under monotonic and cyclic loading. Appl. Energy
**2014**, 136, 289–302. [Google Scholar] [CrossRef] - Bayon, A.; Bader, R.; Jafarian, M.; Fedunik-Hofman, L.; Sun, Y.; Hinkley, J.; Miller, S.; Lipiński, W. Techno-economic assessment of solid–gas thermochemical energy storage systems for solar thermal power applications. Energy
**2018**, 149, 473–484. [Google Scholar] [CrossRef] - Rezvanizaniani, S.M.; Liu, Z.; Chen, Y.; Lee, J. Review and recent advances in battery health monitoring and prognostics technologies for electric vehicle (EV) safety and mobility. J. Power Sources
**2014**, 256, 110–124. [Google Scholar] [CrossRef] - Mehne, J.; Nowak, W. Improving temperature predictions for Li-ion batteries: Data assimilation with a stochastic extension of a physically-based, thermo-electrochemical model. J. Energy Storage
**2017**, 12, 288–296. [Google Scholar] [CrossRef] - Seitz, G.; Helmig, R.; Class, H. A numerical modeling study on the influence of porosity changes during thermochemical heat storage. Appl. Energy
**2020**, 259, 114152. [Google Scholar] [CrossRef] - Roßkopf, C.; Haas, M.; Faik, A.; Linder, M.; Wörner, A. Improving powder bed properties for thermochemical storage by adding nanoparticles. Energy Convers. Manag.
**2014**, 86, 93–98. [Google Scholar] [CrossRef] - Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon
**2018**, 4, e00938. [Google Scholar] [CrossRef] [PubMed][Green Version] - Raissi, M.; Perdikaris, P.; Karniadakis, G. Inferring solutions of differential equations using noisy multi-fidelity data. J. Comput. Phys.
**2017**, 335, 736–746. [Google Scholar] [CrossRef][Green Version] - Karamizadeh, S.; Abdullah, S.M.; Halimi, M.; Shayan, J.J.; Rajabi, M. Advantage and drawback of support vector machine functionality. In Proceedings of the 2014 International Conference on Computer, Communications, and Control Technology (I4CT), Langkawi, Malaysia, 2–4 September 2014; pp. 63–65. [Google Scholar]
- Aggarwal, C. Neural Networks and Deep Learning: A Textbook, 1st ed.; Springer: Cham, Switzerland, 2018. [Google Scholar]
- Oyebode, O.; Stretch, D. Neural network modeling of hydrological systems: A review of implementation techniques. Nat. Resour. Model.
**2018**, 32, e12189. [Google Scholar] [CrossRef][Green Version] - Chen, S.; Wang, Y.; Tsou, I. Using artificial neural network approach for modelling rainfall–runoff due to typhoon. J. Earth Syst. Sci.
**2013**, 122, 399–405. [Google Scholar] [CrossRef][Green Version] - Asadi, H.; Shahedi, K.; Jarihani, B.; Sidle, R.C. Rainfall-Runoff Modelling Using Hydrological Connectivity Index and Artificial Neural Network Approach. Water
**2019**, 11, 212. [Google Scholar] [CrossRef][Green Version] - Wunsch, A.; Liesch, T.; Broda, S. Forecasting groundwater levels using nonlinear autoregressive networks with exogenous input (NARX). J. Hydrol.
**2018**, 567, 743–758. [Google Scholar] [CrossRef] - Taherdangkoo, R.; Tatomir, A.; Taherdangkoo, M.; Qiu, P.; Sauter, M. Nonlinear Autoregressive Neural Networks to Predict Hydraulic Fracturing Fluid Leakage into Shallow Groundwater. Water
**2020**, 12, 841. [Google Scholar] [CrossRef][Green Version] - Kalogirou, S. Applications of artificial neural-networks for energy systems. Appl. Energy
**1995**, 67, 17–35. [Google Scholar] [CrossRef] - Bermejo, J.; Fernández, J.; Polo, F.; Márquez, A. A Review of the Use of Artificial Neural Network Models for Energy and Reliability Prediction. A Study of the Solar PV, Hydraulic and Wind Energy Sources. Appl. Sci.
**2019**, 9, 1844. [Google Scholar] [CrossRef][Green Version] - Yaïci, W.; Entchev, E.; Longo, M.; Brenna, M.; Foiadelli, F. Artificial neural network modelling for performance prediction of solar energy system. In Proceedings of the 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, Italy, 22–25 November 2015; pp. 1147–1151. [Google Scholar]
- Kumar, A.; Zaman, M.; Goel, N.; Srivastava, V. Renewable Energy System Design by Artificial Neural Network Simulation Approach. In Proceedings of the 2014 IEEE Electrical Power and Energy Conference, Calgary, AB, Canada, 12–14 November 2014; pp. 142–147. [Google Scholar]
- Breiman, L. Statistical Modeling: The Two Cultures. Stat. Sci.
**2001**, 16, 199–231. [Google Scholar] [CrossRef] - Zhang, Z.; Beck, M.W.; Winkler, D.A.; Huang, B.; Sibanda, W.; Goyal, H. Opening the black box of neural networks: Methods for interpreting neural network models in clinical applications. Ann. Transl. Med.
**2018**, 6, 1–11. [Google Scholar] [CrossRef] - Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv
**2017**, arXiv:1711.10561. [Google Scholar] - Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. arXiv
**2017**, arXiv:1711.10566. [Google Scholar] - Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv
**2017**, arXiv:1702.08608v2. [Google Scholar] - Miller, T. Explanation in Artificial Intelligence: Insights from the Social Sciences. arXiv
**2017**, arXiv:1706.07269. [Google Scholar] [CrossRef] - Karpatne, A.; Atluri, G.; Faghmous, J.; Steinbach, M.; Banerjee, A.; Ganguly, A.; Shekhar, S.; Samatova, N.; Kumar, V. Theory-guided Data Science: A New Paradigm for Scientific Discovery from Data. arXiv
**2017**, arXiv:1612.08544v2. [Google Scholar] [CrossRef] - Tartakovsky, A.; Marrero, C.; Perdikaris, P.; Tartakovsky, G.; Barajas-Solano, D. Learning Parameters and Constitutive Relationships with Physics Informed Deep Neural Networks. arXiv
**2018**, arXiv:1808.03398v2. [Google Scholar] - Karpatne, A.; Watkins, W.; Read, J.; Kumar, V. Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling. arXiv
**2018**, arXiv:1710.11431v2. [Google Scholar] - Wang, N.; Zhang, D.; Chang, H.; Li, H. Deep learning of subsurface flow via theory-guided neural network. J. Hydrol.
**2020**, 584, 124700. [Google Scholar] [CrossRef][Green Version] - Chen, S.; Billings, S.; Grant, P. Non-linear system identification using neural networks. Int. J. Control
**1990**, 51, 1191–1214. [Google Scholar] [CrossRef] - Zhang, X. Time series analysis and prediction by neural networks. Optim. Methods Softw.
**1994**, 4, 151–170. [Google Scholar] [CrossRef] - Buitrago, J.; Asfour, S. Short-Term Forecasting of Electric Loads Using Nonlinear Autoregressive Artificial Neural Networks with Exogenous Vector Inputs. Energies
**2017**, 10, 40. [Google Scholar] [CrossRef][Green Version] - Boussaada, Z.; Curea, O.; Remaci, A.; Camblong, H.; Bellaaj, N. A Nonlinear Autoregressive Exogenous (NARX) Neural Network Model for the Prediction of the Daily Direct Solar Radiation. Energies
**2018**, 11, 620. [Google Scholar] [CrossRef][Green Version] - Mellit, A.; Kalogirou, S. Artificial intelligence techniques for photovoltaic applications: A review. Prog. Energy Combust. Sci.
**2008**, 34, 574–632. [Google Scholar] [CrossRef] - Jia, X.; Karpatne, A.; Willard, J.; Steinbach, M.; Read, J.; Hanson, P.; Dugan, H.; Kumar, V. Physics Guided Recurrent Neural Networks For Modeling Dynamical Systems: Application to Monitoring Water Temperature And Quality In Lakes. arXiv
**2018**, arXiv:1810.02880. [Google Scholar] - Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math.
**1944**, 2, 164–168. [Google Scholar] [CrossRef][Green Version] - Marquardt, D. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math.
**1963**, 11, 431–441. [Google Scholar] [CrossRef] - Yu, H.; Wilamowski, B.M. Levenberg-Marquardt Training. In The Industrial Electronics Handbook—Intelligent Systems, 2nd ed.; Wilamowski, B., Irwin, J., Eds.; CRC Press: Boca Raton, FL, USA, 2011; Volume 5, Chapter 12. [Google Scholar]
- Banerjee, I. Modeling Fractures in a CaO/Ca(OH)
_{2}Thermo-chemical Heat Storage Reactor. Master’s Thesis, Universität Stuttgart, Stuttgart, Germany, 2018. [Google Scholar] - Koch, T.; Gläser, D.; Weishaupt, K.; Ackermann, S.; Beck, M.; Becker, B.; Burbulla, S.; Class, H.; Coltman, E.; Fetzer, T.; et al. Release 3.0.0 of DuMux: DUNE for Multi-{Phase, Component, Scale, Physics,...} Flow and Transport in Porous Media; Zenodo: Geneva, Switzerland, 2019. [Google Scholar] [CrossRef]
- Peinado, J.; Ibáñez, J.; Arias, E.; Hernández, V. Adams-Bashforth and Adams-Moulton methods for solving differential Riccati equations. Comput. Math. Appl.
**2010**, 60, 3032–3045. [Google Scholar] [CrossRef][Green Version] - Tutueva, A.; Karimov, T.; Butusov, D. Semi-Implicit and Semi-Explicit Adams-Bashforth-Moulton Methods. Mathematics
**2020**, 8, 780. [Google Scholar] [CrossRef] - Beale, M.; Hagan, M.; Demuth, H. Deep Learning Toolbox™ User’s Guide (R2019a); The MathWorks, Inc.: Natick, MA, USA, 2019. [Google Scholar]
- Krogh, A.; Hertz, J.A. A Simple Weight Decay Can Improve Generalization. In Advances in Neural Information Processing Systems 4; Moody, J.E., Hanson, S.J., Lippmann, R.P., Eds.; Morgan-Kaufmann: Denver, CO, USA, 1991; pp. 950–957. [Google Scholar]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 15 April 2019).
- MacKay, D. Bayesian Interpolation. Neural Comput.
**1991**, 4, 415–447. [Google Scholar] [CrossRef] - Sariev, E.; Germano, G. Bayesian regularized artificial neural networks for the estimation of the probability of default. Quant. Financ.
**2020**, 20, 311–328. [Google Scholar] [CrossRef] - Foresee, F.D.; Hagan, M. Gauss-Newton approximation to Bayesian learning. IEEE
**1997**, 3, 1930–1935. [Google Scholar] [CrossRef] - Nguyen, D.; Widrow, B. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. IEEE
**1990**, 3, 21–26. [Google Scholar] [CrossRef] - Mittal, A.; Singh, A.P.; Chandra, P. A Modification to the Nguyen–Widrow Weight Initialization Method. In Intelligent Systems, Technologies and Applications; Springer: Singapore, 2020; pp. 141–153. [Google Scholar]
- Zhang, G.; Patuwo, B.E.; Hu, M. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast.
**1998**, 14, 35–62. [Google Scholar] [CrossRef] - Stathakis, D. How many hidden layers and nodes? Int. J. Remote Sens.
**2009**, 30, 2133–2147. [Google Scholar] [CrossRef] - Higham, D.J.; Trefethen, L.N. Stiffness of ODEs. BIT Numer. Math.
**1993**, 33, 285–303. [Google Scholar] [CrossRef] - Härter, F.P.; de Campos Velho, H.F.; Rempel, E.L.; Chian, A.C.L. Neural networks in auroral data assimilation. J. Atmos. Sol.-Terr. Phys.
**2008**, 70, 1243–1250. [Google Scholar] [CrossRef] - Awolusi, T.F.; Oke, O.L.; Akinkurolere, O.O.; Sojobi, A.O.; Aluko, O.G. Performance comparison of neural network training algorithms in the modeling properties of steel fiber reinforced concrete. Heliyon
**2019**, 5, 1–27. [Google Scholar] [CrossRef][Green Version]

**Figure 1.** A simplified schematic of a Thermochemical Energy Storage (TCES) system with CaO/Ca(OH)$_{2}$ as the storage material during the (**a**) dehydration and (**b**) hydration process.

**Figure 2.** Difference between the (**a**) SP and (**b**) P architecture. Here, ${\mathit{y}}_{t}\dots {\mathit{y}}_{t-{d}_{y}}$ (in blue) are the given data, while ${\widehat{\mathit{y}}}_{t}\dots {\widehat{\mathit{y}}}_{t-{d}_{y}}$ (in red) are the ANN predictions.

**Figure 6.**Training time, gradient and performance for P (Parallel) and SP (Series-Parallel) structure.

**Figure 9.** Worst prediction sample (red) obtained with the “MSE” (Mean Squared Error only) regularisation method and reference solution obtained from the physical model (blue).

**Figure 10.**Worst prediction sample (red) obtained with “MSE + L2” regularisation method and reference solution obtained from the physical model (blue).

**Figure 11.**Worst prediction sample (red) obtained with “MSE + PHY” regularisation method and reference solution obtained from the physical model (blue).

**Figure 12.**Worst prediction sample (red) obtained with “MSE + L2 + PHY” regularisation method and reference solution obtained from the physical model (blue).

**Table 1.** Physical constraints in training: loss terms used in Equation (16). Constraints $k=1,\dots,5$ are identical for the dehydration and hydration processes.

| k | Dehydration: ${e}_{phy,i,t,k}=\dots$ | Hydration: ${e}_{phy,i,t,k}=\dots$ |
|---|---|---|
| 1 | ${e}_{MB}(i,t)$ | ${e}_{MB}(i,t)$ |
| 2 | ${e}_{EB}(i,t)$ | ${e}_{EB}(i,t)$ |
| 3 | $ReLU(-{\widehat{\nu}}_{CaO}(i,t))$ | $ReLU(-{\widehat{\nu}}_{CaO}(i,t))$ |
| 4 | $ReLU(-{\widehat{x}}_{{H}_{2}O}(i,t))$ | $ReLU(-{\widehat{x}}_{{H}_{2}O}(i,t))$ |
| 5 | $ReLU(\varphi +{\widehat{\nu}}_{CaO}(i,t)-1)$ | $ReLU(\varphi +{\widehat{\nu}}_{CaO}(i,t)-1)$ |
| 6 | $ReLU(\widehat{p}(i,t-1)-\widehat{p}(i,t))$ | $ReLU(\widehat{p}(i,t)-\widehat{p}(i,t-1))$ |
| 7 | $ReLU(\widehat{T}(i,t-1)-\widehat{T}(i,t))$ | $ReLU(\widehat{T}(i,t)-\widehat{T}(i,t-1))$ |
| 8 | $ReLU({\widehat{\nu}}_{CaO}(i,t-1)-{\widehat{\nu}}_{CaO}(i,t))$ | $ReLU({\widehat{\nu}}_{CaO}(i,t)-{\widehat{\nu}}_{CaO}(i,t-1))$ |
| 9 | $ReLU(\widehat{T}(i,t)-{T}_{in})$ | $ReLU({T}_{in}-\widehat{T}(i,t))$ |
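The ReLU constraints in Table 1 map directly to code. A minimal Python sketch of a few dehydration-side terms follows (variable names are our assumptions):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(x, 0): zero when the constraint holds, positive otherwise
    return np.maximum(x, 0.0)

def dehydration_penalties(nu_cao, x_h2o, p, T, phi):
    """Constraint terms e_phy,k from Table 1 (dehydration column), for
    predicted trajectories indexed by time step."""
    return {
        3: relu(-nu_cao),             # CaO volume fraction must be >= 0
        4: relu(-x_h2o),              # H2O mole fraction must be >= 0
        5: relu(phi + nu_cao - 1.0),  # phi + nu_CaO must not exceed 1
        6: relu(p[:-1] - p[1:]),      # pressure non-decreasing over time
        7: relu(T[:-1] - T[1:]),      # temperature non-decreasing over time
    }

# A physically plausible toy trajectory: all penalties evaluate to zero
pen = dehydration_penalties(
    nu_cao=np.array([0.2, 0.3]), x_h2o=np.array([0.1, 0.2]),
    p=np.array([1.0, 2.0]), T=np.array([500.0, 510.0]), phi=0.6,
)
```

Each penalty is zero when the prediction is physically plausible and grows with the magnitude of the violation, which is what makes it usable as a regularisation term.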

| Loss Function | ${\mathit{MSE}}_{\mathit{train}}$ | ${\mathit{MSE}}_{\mathit{test}}$ |
|---|---|---|
| MSE | $8.45\times {10}^{-3}$ | $2.81\times {10}^{-3}$ |
| MSE + L2 | $9.01\times {10}^{-3}$ | $3.96\times {10}^{-4}$ |
| MSE + PHY | $8.68\times {10}^{-3}$ | $3.83\times {10}^{-3}$ |
| MSE + L2 + PHY | $8.43\times {10}^{-3}$ | $3.96\times {10}^{-4}$ |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Praditia, T.; Walser, T.; Oladyshkin, S.; Nowak, W. Improving Thermochemical Energy Storage Dynamics Forecast with Physics-Inspired Neural Network Architecture. *Energies* **2020**, *13*, 3873.
https://doi.org/10.3390/en13153873
