Figure 1.
Schematic of a physics-informed neural network (PINN). A fully connected neural network with time and space inputs is constructed to approximate the solution of the governing equation; the approximation is then used to calculate the residual loss. The residual loss comprises the differential equation's residual, the initial condition's residual, and the boundary condition's residual. The derivatives of the network output are computed by automatic differentiation. The parameters of the fully connected network are trained using gradient-descent methods based on back-propagation.
Figure 2.
FNN architecture for the implementation of the PINN surrogate model. The FNN consists of an input layer, hidden layers (composed of weights, biases, and activation functions), and an output layer.
Figure 3.
Schematics of PUR-1.
Figure 4.
S curve of the SS2 control rod position (cm) with respect to reactivity (pcm), shown as the blue line. A curve fit to the obtained S curve is shown as the orange dotted line, and the error bars due to uncertainty are shown in black.
Figure 5.
Reactivity insertion at each second until steady state is reached.
Figure 6.
Schematic of the PINN for solving the PKEs with initial conditions (ICs). The input to the surrogate network is time, t, and the output is the solution vector. The residual network tests whether the solution vector satisfies the PKE governing equations and the ICs.
Figure 7.
History of the PINN loss function over 65,000 iterations for interpolation. (a) The training (blue) and testing (orange) losses are the summed MSE losses of all terms in the PKEs. (b) The test metric is the relative error for the first ODE term.
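The test metric in panel (b) is a relative error between the PINN prediction and the reference solution. A common choice is the relative L2 error; a minimal stdlib sketch is below (the function name and the exact norm are assumptions, since the paper does not spell out its definition):

```python
import math

def relative_l2_error(pred, ref):
    """Relative L2 error: ||pred - ref||_2 / ||ref||_2."""
    num = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)))
    den = math.sqrt(sum(r ** 2 for r in ref))
    return num / den

# Example: a prediction off by 1% everywhere has relative L2 error 0.01.
print(relative_l2_error([1.01, 2.02, 3.03], [1.0, 2.0, 3.0]))
```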
Figure 8.
(a) Solution of the neutron density concentration over the time interval considered, along with the PINN prediction. (b) Residual error plot for the neutron density, showing the error margin of the predictions. (c,e,g,i,k,m) Solutions of the six delayed neutron precursor density concentrations over the same interval, along with the PINN predictions. (d,f,h,j,l,n) Corresponding residual error plots for the six precursor groups.
Figure 9.
History of the PINN loss function over 65,000 iterations for extrapolation. (a) The training (blue) and testing (orange) losses are the summed MSE losses of all terms in the PKEs. (b) The test metric is the relative error for the first ODE term.
Figure 10.
(a) Solution of the neutron density concentration over the time interval considered, along with the PINN prediction. (b) Residual error plot for the neutron density, showing the error margin of the predictions. (c,e,g,i,k,m) Solutions of the six delayed neutron precursor density concentrations over the same interval, along with the PINN predictions. (d,f,h,j,l,n) Corresponding residual error plots for the six precursor groups. The interpolation case is trained on data spanning the full time interval shown, whereas the extrapolation case is trained on data from a shorter initial interval.
Table 1.
Parameters of PUR-1.
Term | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
Delayed neutron fraction | 0.000213 | 0.001413 | 0.001264 | 0.002548 | 0.000742 | 0.000271
Decay constant (1/s) | 0.01244 | 0.0305 | 0.1114 | 0.3013 | 1.1361 | 3.013
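The six-group parameters above fully specify the PKE system apart from the neutron generation time. As a sanity check on the table, the sketch below integrates the six-group PKEs with an RK4 stepper, starting from the critical equilibrium at zero reactivity, where the neutron density should remain constant. The generation time value is NOT given in the table and is an assumed placeholder for illustration only:

```python
# Six-group point-kinetics sketch using the Table 1 parameters.
BETA = [0.000213, 0.001413, 0.001264, 0.002548, 0.000742, 0.000271]
LAM = [0.01244, 0.0305, 0.1114, 0.3013, 1.1361, 3.013]
BETA_TOT = sum(BETA)
GEN_TIME = 8e-5  # s, ASSUMED neutron generation time (not in Table 1)

def pke_rhs(y, rho):
    """dy/dt for y = [n, C1..C6] under reactivity rho."""
    n, c = y[0], y[1:]
    dn = (rho - BETA_TOT) / GEN_TIME * n + sum(l * ci for l, ci in zip(LAM, c))
    dc = [b / GEN_TIME * n - l * ci for b, l, ci in zip(BETA, LAM, c)]
    return [dn] + dc

def rk4_step(y, rho, dt):
    k1 = pke_rhs(y, rho)
    k2 = pke_rhs([yi + 0.5 * dt * ki for yi, ki in zip(y, k1)], rho)
    k3 = pke_rhs([yi + 0.5 * dt * ki for yi, ki in zip(y, k2)], rho)
    k4 = pke_rhs([yi + dt * ki for yi, ki in zip(y, k3)], rho)
    return [yi + dt / 6 * (a + 2 * b + 2 * c_ + d)
            for yi, a, b, c_, d in zip(y, k1, k2, k3, k4)]

# Critical equilibrium at rho = 0: C_i = beta_i / (Lambda * lambda_i).
y = [1.0] + [b / (GEN_TIME * l) for b, l in zip(BETA, LAM)]
dt, rho = 1e-4, 0.0
for _ in range(int(1.0 / dt)):  # integrate 1 s
    y = rk4_step(y, rho, dt)
print(y[0])  # stays ~1.0 at zero reactivity
```

At equilibrium the production and loss terms cancel exactly ((rho - beta)/Lambda * n + sum(lambda_i * C_i) = -beta/Lambda + beta/Lambda = 0), so any drift in the printed value is pure integration roundoff.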
Table 2.
Reactivity values during the start-up of PUR-1 by withdrawing the SS2 control rod.
SS2 position (cm) | Reactivity (pcm) | Uncertainty (pcm)
---|---|---
0 | −1168.496 | 97
10 | −983.580 | 74
20 | −870.513 | 80
30 | −431.857 | 78
40 | −31.009 | 90
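Between the tabulated rod positions, the reactivity at an arbitrary SS2 withdrawal can be estimated from the table. The paper fits a smooth curve to the S curve (Figure 4); the stdlib sketch below uses simple piecewise-linear interpolation instead, purely to illustrate how the tabulated data can be queried (the function name is an assumption):

```python
import bisect

# Tabulated SS2 rod positions (cm) and reactivities (pcm) from Table 2.
POS = [0.0, 10.0, 20.0, 30.0, 40.0]
RHO = [-1168.496, -983.580, -870.513, -431.857, -31.009]

def reactivity_at(x):
    """Piecewise-linear interpolation of the measured S-curve data.
    (Illustrative only; the paper fits a smooth curve instead.)"""
    if not POS[0] <= x <= POS[-1]:
        raise ValueError("position outside tabulated range")
    i = max(bisect.bisect_right(POS, x) - 1, 0)
    if i == len(POS) - 1:
        return RHO[-1]
    t = (x - POS[i]) / (POS[i + 1] - POS[i])
    return RHO[i] + t * (RHO[i + 1] - RHO[i])

print(reactivity_at(35.0))  # midway between -431.857 and -31.009 pcm
```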
Table 3.
Workflow for solving PKEs using PINNs in DeepXDE framework.
Step # | Procedure
---|---
Step 1 | Specify the computational domain using the geometry module.
Step 2 | Specify the system of ODEs using TensorFlow syntax.
Step 3 | Specify the initial conditions using the IC module.
Step 4 | Combine the geometry, system of ODEs, and initial conditions together into data.PDE. Specify the training data and the training distribution, and set the number of points to be sampled.
Step 5 | Construct a feed-forward neural network using the maps module.
Step 6 | Define a Model by combining the system of ODEs from Step 4 and the neural network from Step 5.
Step 7 | Call Model.compile to set the optimization hyperparameters, such as the optimizer and learning rate. The weights in Equation (4) can be set here via loss_weights.
Step 8 | Call Model.train to train the network from a random initialization. The training behavior can be monitored and modified using callbacks.
Step 9 | Call Model.predict to predict the solution at different locations.
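The core of Step 2 is writing the PKE system as a set of residuals that the PINN drives toward zero. Below is a framework-agnostic sketch of those seven residuals using plain Python numbers in place of TensorFlow tensors; in DeepXDE the time derivatives would come from automatic differentiation rather than being passed in. The parameter values are from Table 1, and the generation time is an assumed placeholder:

```python
BETA = [0.000213, 0.001413, 0.001264, 0.002548, 0.000742, 0.000271]
LAM = [0.01244, 0.0305, 0.1114, 0.3013, 1.1361, 3.013]
BETA_TOT = sum(BETA)
GEN_TIME = 8e-5  # s, ASSUMED neutron generation time for illustration

def pke_residuals(n, c, dn_dt, dc_dt, rho):
    """Seven residuals of the six-group PKEs; all vanish for an exact solution.
    n, dn_dt: neutron density and its time derivative.
    c, dc_dt: six precursor densities and their time derivatives.
    """
    r_n = dn_dt - ((rho - BETA_TOT) / GEN_TIME * n
                   + sum(l * ci for l, ci in zip(LAM, c)))
    r_c = [dci - (b / GEN_TIME * n - l * ci)
           for dci, b, l, ci in zip(dc_dt, BETA, LAM, c)]
    return [r_n] + r_c

# At the critical equilibrium (rho = 0) all derivatives and residuals vanish.
c_eq = [b / (GEN_TIME * l) for b, l in zip(BETA, LAM)]
res = pke_residuals(1.0, c_eq, 0.0, [0.0] * 6, 0.0)
print(max(abs(r) for r in res))  # ~0 up to roundoff
```

In the DeepXDE workflow this function would appear as the `ode_system` passed to `data.PDE`, with the derivatives obtained by automatic differentiation over the network output, as sketched in Figure 6.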
Table 4.
Case 1. Percentage error of extrapolation beyond the training time interval, evaluated at five testing points.
Variable | Test point 1 | Test point 2 | Test point 3 | Test point 4 | Test point 5
---|---|---|---|---|---
Neutron density | 1.237 | 1.382 | 1.468 | 1.488 | 1.434
Precursor group 1 | 0.237 | 0.109 | 0.037 | 0.196 | 0.365
Precursor group 2 | 0.144 | 0.443 | 0.748 | 1.056 | 1.360
Precursor group 3 | 1.378 | 1.633 | 1.871 | 2.082 | 2.260
Precursor group 4 | 1.067 | 1.173 | 1.243 | 1.268 | 1.241
Precursor group 5 | 1.410 | 1.490 | 1.513 | 1.481 | 1.383
Precursor group 6 | 1.559 | 1.868 | 2.118 | 2.298 | 2.404
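Tables 4–6 report accuracy as an absolute percentage error of the PINN prediction relative to the reference solution at each testing point. A one-line sketch of that metric (the function name is an assumption):

```python
def percent_error(pred, ref):
    """Absolute percentage error of a prediction relative to a reference value."""
    return abs(pred - ref) / abs(ref) * 100.0

print(percent_error(101.237, 100.0))  # ~1.237 %
```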
Table 5.
Case 2. Percentage error of extrapolation beyond the training time interval, evaluated at five testing points.
Variable | Test point 1 | Test point 2 | Test point 3 | Test point 4 | Test point 5
---|---|---|---|---|---
Neutron density | 2.564 | 1.434 | 1.954 | 2.277 | 2.361
Precursor group 1 | 1.190 | 0.032 | 0.478 | 0.994 | 1.565
Precursor group 2 | 1.181 | 0.167 | 1.045 | 1.955 | 2.877
Precursor group 3 | 2.500 | 1.503 | 2.456 | 3.345 | 4.138
Precursor group 4 | 2.755 | 1.654 | 2.386 | 2.986 | 3.416
Precursor group 5 | 2.433 | 1.197 | 1.675 | 1.980 | 2.072
Precursor group 6 | 2.560 | 1.280 | 1.683 | 1.904 | 1.902
Table 6.
Case 3. Percentage error of extrapolation beyond the training time interval, evaluated at five testing points.
Variable | Test point 1 | Test point 2 | Test point 3 | Test point 4 | Test point 5
---|---|---|---|---|---
Neutron density | 1.841 | 2.630 | 3.971 | 5.424 | 6.747
Precursor group 1 | 0.787 | 0.551 | 0.436 | 1.779 | 3.276
Precursor group 2 | 0.256 | 0.915 | 2.397 | 4.249 | 6.248
Precursor group 3 | 1.665 | 2.473 | 4.083 | 5.996 | 7.963
Precursor group 4 | 1.761 | 2.413 | 3.824 | 5.476 | 7.107
Precursor group 5 | 1.645 | 2.205 | 3.457 | 4.891 | 6.257
Precursor group 6 | 2.076 | 3.014 | 4.469 | 6.022 | 7.452