Deep Learning Neural Network Algorithm for Computation of SPICE Transient Simulation of Nonlinear Time Dependent Circuits

Abstract: In this paper, a method based on a neural network is presented that is used to precompute the steps of numerical integration. The method approximates the behavior of the numerical integrator with respect to the local truncation error. In other words, it allows the individual steps to be precomputed in such a way that they do not need to be estimated by an algorithm but can be estimated directly by a neural network. Experimental tests were performed on a series of electrical circuits with different component parameters. The method was tested for the two integration methods implemented in the simulation program SPICE (TRAPEZ and GEAR).


Introduction
There is currently a huge expansion in the development of artificial intelligence (AI), neural networks (NN), and machine learning (ML). It is amazing how far the field has come in the last decade. Nowadays, we encounter algorithms based on NNs in everyday devices such as cars, cell phones, or smart home systems. They have become an integral part of our lives. They help us with navigation, speech recognition, colorizing black-and-white pictures, solving hard optimization problems, and more [1,2]. NN and AI algorithms are slowly making their way into disciplines long believed to be the unique domain of humans, such as literature and art [3]. A whole new segment of the industry has emerged, offering solutions that analyze, learn, and generalize the work procedures of company employees and then take over the activities that AI is capable of solving on its own [4]. In this paper, we present a method for improving the computational performance of Transient Analysis in Simulation Program with Integrated Circuit Emphasis (SPICE) simulators [5], based on Numerical Integration (NI) step estimation with the utilization of an NN.
Numerical integration itself has many aspects that can be improved or changed in some way. One can find papers dealing with the complete replacement of numerical integration by another method, in this case based on a neural network [6], papers modifying the calculation of the predictor and the corrector [7,8], papers publishing different methods for determining the integration step [9,10], and a paper modifying the calculation method using logarithmic damping [11]. All of these papers, as well as this one, share the common goal of speeding up or refining the calculation of numerical integration.
Although many papers have been published on this topic, few methods are suitable for the simulation of electrical circuits. For example, the method presented in paper [6] would not be suitable for several reasons: the first is the low accuracy of its calculation, and the second is its speed, which is key to simulation. In this paper, we focused on modifying the integration step mainly because the error of the computation can be easily determined and corrected. That is, an incorrect estimate could at most affect the speed of the computation, but not its stability or accuracy. These are key properties of the method provided in this article. A very interesting solution to the integration step calculation can be found in article [9]. The authors managed to build a very fast algorithm that achieves very good results even for circuits that are hard to simulate, to which switching networks certainly belong. The problem, however, is that it works well only for low orders of the NI method. The method loses accuracy on smooth parts of the solution, which in turn increases the number of integration steps. Paper [10] presents a method for calculating the integration step designed specifically for circuits in which the stimulus value is switched off for some time. The algorithm even achieves up to twice the convergence of the standard SPICE program. In our work, we have not been able to achieve such an improvement, but our method proved to be more versatile. The ultimate goal of our proposed method is to increase the speed of computation by lowering the number of diverged iteration steps while preserving the accuracy of the original solution.
The proposed solution was developed exclusively for use in SPICE-like electrical circuit simulation. It has been experimentally tested by implementing it in the circuit simulation program NgSpice [12], which is a freely available open-source alternative to the SPICE simulator. It offers a variety of simulation types, from basic Direct Current (DC), Operating Point (OP), and Sensitivity Analysis through Monte Carlo and Noise Analysis [13]. The main motivation for using NgSpice was that its source code is available, and thus we could modify it for our purposes.
As mentioned above, in this work we mainly focused on Transient Analysis (TRANS), which simulates the behavior of a circuit over time. In the computational core of the SPICE program the implemented mathematical algorithms are shared among a variety of simulations. It is thus possible to reuse our proposed method for other types of simulations with only a slight modification.

SPICE Simulation Process
Before each simulation, SPICE performs the so-called Modified Nodal Formulation (MNF) [14]. This is the process of converting the circuit definition and its simulation into a set of equations. These equations can form a set of linear, nonlinear, or even time-dependent equations. Typically, the product of this formulation is a system of nonlinear differential equations of the form

f(ẋ(t), x(t), t) = 0,     (1)

where x is the vector of unknown circuit variables and ẋ is the derivative of vector x with respect to time t. In a real implementation, the values of x and ẋ depend on the current state of the simulation. The first step of the simulation is usually direct current (DC) analysis, which computes the operating point of a nonlinear circuit. It must occur first to obtain the linear characteristics of the nonlinear models. However, this step is not always necessary, and in those cases the DC calculation is usually skipped. Very rarely are all devices in a circuit linear, so in the next step the simulator performs linearization using the iterative Newton-Raphson (NR) method and, where needed, implicit NI. In this paper, we focus specifically on the case where the time-dependent variables must be removed from the circuit using NI before the algorithm can linearize the system with an iterative method and prepare it for solution by a factorization method. The generalized linear multistep equation for the value x_n ≈ x(t_n), where t_n = t_0 + nh and h denotes the step size, can be written as

x_{n+1} = Σ_{i=0}^{p} a_i x_{n−i} + h Σ_{i=−1}^{p} b_i ẋ_{n−i},     (2)

where the coefficients a_i and b_i determine the type of integration method and whether the method is explicit (b_{−1} = 0) or implicit. This step transforms the nonlinear differential equations into a series of nonlinear algebraic equations at each time point. From the problem description it must already be clear that an error in the integration will affect the computation of all subsequent steps, and thus a possible restart of a failed step will negatively affect the overall computational performance of the simulation.
Two NI methods are present in the basic version of SPICE. The first is the Trapezoidal method (TRAPEZ) [15]. Trapezoidal integration is a second-order method that can be derived from the observation that a more accurate solution is obtained if the average of the slopes at t_n and t_{n+1} is used instead of either one alone:

x_{n+1} = x_n + (h/2)(ẋ_n + ẋ_{n+1}).     (3)

The second method available in SPICE is the so-called Backward Differentiation Formula (BDF), referred to in SPICE as the GEAR method [16]. All stable orders of the GEAR method can be derived from the equation

Σ_{i=0}^{s} a_i x_{n+1−i} = h β ẋ_{n+1},     (4)

where a_i and β are the coefficients of the multistep BDF method, defined up to order s < 7.
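The two update rules above can be sketched for the scalar test equation ẋ = k·x, for which each implicit step can be solved in closed form. This is a minimal illustration only; the test equation and function names are assumptions for exposition, not SPICE code:

```python
# Hedged sketch: one implicit TRAPEZ step and one second-order GEAR (BDF2)
# step for the scalar test equation x' = k*x, where the implicit update can
# be solved in closed form instead of by Newton-Raphson iteration.

def trapez_step(x_n, k, h):
    """x_{n+1} = x_n + (h/2)(x'_n + x'_{n+1}) with x' = k*x."""
    z = h * k
    return x_n * (1 + z / 2) / (1 - z / 2)

def bdf2_step(x_n, x_nm1, k, h):
    """Fixed-step BDF2 (GEAR, s = 2) for x' = k*x:
    (3/2) x_{n+1} - 2 x_n + (1/2) x_{n-1} = h*k*x_{n+1}."""
    return (2 * x_n - 0.5 * x_nm1) / (1.5 - h * k)
```

For a decaying exponential (k = −1) both one-step updates stay within the second-order error bound of the exact solution e^{−h}.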
As can be seen from Algorithm 1, there is a relatively high chance that a given step will not be accepted, in which case the estimate has to be rejected and the entire simulation step recalculated with a finer or coarser step. It is important to note that while the above equations work with scalar variables for simplicity, in a real situation the computations are accompanied by a number of rather demanding operations over sparse matrices, such as pivoting, reordering, and LU factorization (LUF) [13]. Incorrect estimates of the integration step therefore significantly slow down the overall computation time of the simulation.

Algorithm 1 Setting New Integration Step
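Since the body of Algorithm 1 is not reproduced here, its accept/reject logic can be sketched roughly as follows. The error model, the growth/shrink factor of 2, and all names are illustrative assumptions, not the SPICE constants:

```python
# Hedged sketch of an LTE-based accept/reject step loop: take a trial step,
# estimate its local truncation error (LTE), and either accept it (growing h
# optimistically) or reject it and retry with a halved step.

def adaptive_integrate(f, x0, t0, t_end, h0, tol, step_fn, lte_fn):
    t, x, h = t0, x0, h0
    rejected = 0
    while t < t_end:
        h = min(h, t_end - t)              # do not step past the end time
        x_new = step_fn(f, x, t, h)
        if lte_fn(f, x, x_new, t, h) > tol:
            h /= 2.0                       # step rejected: retry with finer h
            rejected += 1
            continue
        t, x = t + h, x_new                # step accepted
        h *= 2.0                           # optimistic growth, clamped above
    return x, rejected
```

With a forward Euler trial step and a crude |h²·x″|/2 error estimate for ẋ = −x, the loop reproduces e^{−1} to a few parts in a thousand while tracking how many steps were rejected.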

Neural Network Step Estimator Algorithm
The main goal of the proposed algorithm is to replace the integration step estimation function with a Neural Network Step Estimator (NNSE) trained to calculate the integration step of a specific circuit during a transient simulation. The goal is to decrease the number of iterations by introducing an adaptive step size into the simulation problem. It should be noted that for a final implementation in the SPICE program, an extra circuit-classification algorithm would be needed that selects the correct trained NN based on the circuit topology and simulation type. The classification function is not part of this work; it was determined manually. The NN was trained for specific circuits, and then only the circuit settings were varied, so it was not necessary to classify the circuits automatically. Training a general-purpose algorithm that would be applicable to any kind of simulation remains open for wider discussion.
The standard SPICE algorithm calculates the step size needed to achieve the target LTE by the following equation:

h_{new} = sqrt( TRTOL · ε / max(|DD_3/12|, ABSTOL) ),  ε = RELTOL · max(|x_n|, |ẋ_n|) + ABSTOL,     (5)

from which the following step is determined as

t_{n+1} = t_n + h_{new},     (6)

where RELTOL is the relative error tolerance allowed (default = 0.001), ABSTOL is the absolute error tolerance allowed (default = 10^{−12}), and TRTOL is the transient error tolerance (default = 7). DD_3/12 is the divided difference, a maximum value over heuristically scaled solutions of x and ẋ. The recursive formula for divided differences is

DD_k(t_{n+1}) = (DD_{k−1}(t_{n+1}) − DD_{k−1}(t_n)) / (t_{n+1} − t_{n+1−k}),     (7)

where DD_1 = (x_{n+1} − x_n)/(t_{n+1} − t_n) is the numerical approximation of the derivative of x between t_n and t_{n+1}. The algorithm evaluates how well the integration step chosen by NNSE reflects the rate of change of the simulation: a step that is too small increases the computation time, whereas a step that is too large loses accuracy.
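The recursive divided differences above can be sketched directly. The variable names are illustrative; the point is only that DD_3 is built from two DD_2 values, which are in turn built from first differences:

```python
# Hedged sketch of recursive divided differences: DD_1 approximates dx/dt
# between consecutive time points, and DD_k is built from two DD_{k-1}
# values divided by the spanned time interval.

def divided_difference(k, xs, ts, n):
    """DD_k evaluated at index n, with xs[i] ~ x(ts[i])."""
    if k == 1:
        return (xs[n] - xs[n - 1]) / (ts[n] - ts[n - 1])
    return (divided_difference(k - 1, xs, ts, n)
            - divided_difference(k - 1, xs, ts, n - 1)) / (ts[n] - ts[n - k])
```

As a sanity check, the third divided difference of samples of x(t) = t³ equals the leading coefficient, 1, on any grid.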
To replace Equation (5), we chose a standard Multilayer Perceptron (MLP) [17] definition of the NN, shown schematically in Figure 1. The neurons of the model are connected through their weights. The input-layer information is passed through the hidden layers, and the output vector is then computed from them. Each neuron performs a weighted summation of its inputs, which are the outputs of the neurons in the previous layer. This can be expressed by the equation

v_i = Σ_m w_{im} x_m + b_i,

where w_{im} is a weight and b_i is the threshold value. The cumulated input signal activates the neuron through the activation function φ, and each neuron has a single output

y_i = φ(v_i).

The ReLU activation function [18] proved to be the best for our needs. Its only limitation is that its derivative is discontinuous at 0. The function can be expressed as

φ(v) = max(0, v).

Therefore, the calculation of the NI step changes to

ĥ_{n+1} = NNSE(·)

and

t_{n+1} = t_n + ĥ_{n+1},     (13)

where ĥ denotes the value of the next NI step estimated by NNSE. The implementation does not affect the basic settings of the method coefficients; thus, the same properties apply to both methods for the defined stability region of the method. However, the size of the integration step determines whether a given solution lies within this region, as can be seen from the definition of the stability region of the TRAPEZ method,

|(1 + z/2) / (1 − z/2)| ≤ 1,

where z = hk and ẋ = k · x. It is still necessary to check whether the solution is sufficiently close to the correct one; thus, every numerical step computed by the NNSE method must still be checked for accuracy. SPICE implements an error-based time-adaptive stepping algorithm that determines the Local Truncation Error, LTE = f(ε) [19], via Equation (17). It is based on a function of the currents I and charges Q of the capacitors (or the fluxes and voltages of the inductors) and is the maximum of two errors: the current error ε_I and the charge error ε_Q.
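The MLP forward pass described above can be sketched in a few lines. The weights below are illustrative placeholders; in NNSE they would come from the trained network, and the single output would be the estimated step ĥ:

```python
# Minimal sketch of the MLP forward pass: each neuron computes a weighted sum
# of the previous layer's outputs plus a bias, then applies the ReLU
# activation. Weights and layer sizes here are illustrative, not trained.

def relu(v):
    return v if v > 0.0 else 0.0

def mlp_forward(x, layers):
    """layers: list of (weights, biases) per layer; weights[i] is the weight
    row of neuron i over the previous layer's outputs."""
    for weights, biases in layers:
        x = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x
```

A side effect of using ReLU on the output layer is that the estimate is never negative, which is a natural property for a step size.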

Training of Neural Network Step Estimator Algorithm
Levenberg-Marquardt back-propagation was used for NN training [20]. It is designed to minimize the mean-square error between the actual and predicted output of the MLP. Initially, the weights are determined randomly; the algorithm then computes the mean-square error repeatedly for each combination of input and output. Subsequently, the error is propagated back and used to readjust the weights and thresholds. This is shown in Figure 2.
For normalization of the input values of the trained network, the standard min-max normalization

x_norm = (x − x_min) / (x_max − x_min)

was used, where x_max and x_min are the maximum and minimum values of the input data, respectively.
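The normalization above is a one-liner; the sketch below applies it to a whole training vector, mapping the extremes to 0 and 1 (names are illustrative):

```python
# Hedged sketch of min-max normalization of a training vector into [0, 1]:
# x_norm = (x - x_min) / (x_max - x_min).

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```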

Implementation
The TensorFlow (TF) library [21] together with the Keras library [22] was used to implement the NN. Several NNs were created with a large set of training values. This was achieved by simulating different circuits generated with different constellations of electrical component parameters. The parameters were randomly generated within a specific range while maintaining the standard functionality of the circuit. Subsequently, training was performed for several well-known circuits: a Voltage Controlled CMOS Oscillator (CMOS VCO), a Ring Oscillator, a MOS Amplifier, a JFET Amplifier, and a CMOS Multiplier (as also shown in Figures 3 and 4 and Table 1). For all simulations, the initial Transient Analysis was calculated. This was then repeated for randomly changed component parameters, with the variation of the components ranging from 0% to 2%. The main parameters varied in the circuits were source voltages and resistive, capacitive, and inductive parameters.

The NN was trained through a supervised learning process in which precomputed values of the integration step were presented to the NN, which learned from them the best approximation to the problem. Subsequently, the trained network was used to estimate the step size in simulations with component parameter variations from 0% to 25%. The weights were always initialized randomly at the beginning of the learning process. The trained NN was implemented in NgSpice as shown in Algorithm 2. An optional parameter .OPTIONS NNSA=(NN id) was added to the simulation configuration. During the simulation, values from the NgSpice program were shared with the TF NN algorithm via Unix named pipes. This step negatively affected the speed of the simulation computation and should be replaced, in a final implementation, by a native implementation of the NN in the NgSpice code.
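The named-pipe hand-off between the simulator and the estimator can be sketched roughly as follows. The one-float-per-line message format, the single one-directional pipe, and all names are assumptions for illustration; the actual NgSpice/TF protocol is not reproduced in the text:

```python
# Hedged sketch of a Unix named-pipe (FIFO) hand-off: a "simulator" side
# writes simulation quantities into the pipe, and an "estimator" side reads
# them back and returns estimated step sizes.

import os
import tempfile
import threading

def simulator_side(fifo_path, values):
    # Simulator side: push quantities into the pipe, one per line.
    with open(fifo_path, "w") as fifo:
        for v in values:
            fifo.write(f"{v}\n")

def estimator_side(fifo_path, estimate):
    # Estimator side: read each value, return estimated steps h_hat.
    with open(fifo_path) as fifo:
        return [estimate(float(line)) for line in fifo]

def demo(values, estimate):
    with tempfile.TemporaryDirectory() as d:
        fifo_path = os.path.join(d, "nnse.fifo")
        os.mkfifo(fifo_path)
        # Opening a FIFO blocks until both ends are open, so the writer
        # runs in a separate thread while the main thread reads.
        writer = threading.Thread(target=simulator_side,
                                  args=(fifo_path, values))
        writer.start()
        result = estimator_side(fifo_path, estimate)
        writer.join()
        return result
```

The line-buffered text protocol is easy to debug but slow, which is consistent with the performance penalty noted above; a native in-process estimator would avoid the pipe entirely.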
It should be noted that to avoid non-convergence we implemented a correction algorithm which, after an unsuccessful attempt, calculates the next time point according to the standard method (based on LTE).

CMOS Multiplier
This circuit is a multiplier originating from article [23], which contains the circuit diagram; the model and control parameters are defined in Algorithm 3. It is a low-voltage low-power CMOS four-quadrant microwave multiplier, modelled by semi-empirical level-3 transistor models. As can be seen, the circuit is designed very precisely, and therefore the simulation was successful in almost all iterations.

CMOS VCO
This is the largest simulated circuit, based on the work published in paper [24]. The basic component models of the NMOS and PMOS transistors are completely defined in Algorithm 4.

JFET Amplifier
This is a very simple circuit that can be fully defined here; it is given in Algorithm 5.

Ring Oscillator
The oscillator consists of a chain of an odd number of CMOS inverters that generate an oscillation with a period T equal to 2 · N · t_p, where N is the number of inverters and t_p is the propagation delay (the factor 2 appears because each inverter switches twice during one period). The schematic and SPICE models are accessible through the web link [25].
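The period formula is simple enough to illustrate numerically; the component values below are illustrative, not taken from the simulated circuit:

```python
# Quick numeric illustration of the ring-oscillator period T = 2 * N * t_p
# for a chain of N inverters with propagation delay t_p each.

def ring_period(n_inverters, t_p):
    return 2 * n_inverters * t_p

def ring_frequency(n_inverters, t_p):
    return 1.0 / ring_period(n_inverters, t_p)
```

For example, five inverters with a 1 ns propagation delay each give a 10 ns period, i.e., a 100 MHz oscillation.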

MOS Amplifier
This circuit is a MOS transistor amplifier. It originates from the NgSpice testing repository and can be downloaded with the current version of NgSpice (version 21). It is modelled by MOS transistors with substrate doping NSUB, gate-source overlap capacitance CGSO, gate-drain overlap capacitance CGDO, surface state density NSS, surface mobility UO, oxide thickness TOX, lateral diffusion LD, and critical field exponent for mobility degradation UEXP. All the analysis and model parameters are defined in Algorithm 6.

Figure 3 shows a comparison of the individual methods, namely the standard TRAPEZ and GEAR methods; in addition, the predictor-corrector scheme was also used for the circuits. The graph compares the values for the algorithms used on circuits with component parameter variances of 1%, 5%, and 10% from the originally trained network. It can be seen that the NNSE-augmented simulations perform well when the circuit similarity is high, i.e., with a variation of circuit variables up to a maximum of 5%. In those cases, NNSE helped in most instances to reduce the number of iterations. The problem arises when the variance is above 10% and the number of inconsistencies starts to grow, up to the point where the standard stepping mechanism takes over. For very small circuit changes, the NNs performed noticeably better than the standard algorithms, which had to adaptively calculate an estimated integration step; in the case of the NN, this step was dropped and the network only quantified the error of the step change in the non-convergence case. The NNSE performed very well up to a variation of circuit quantities of 5%. Beyond this limit, problems began to manifest when the integration step had to be recalculated because of rejection by LTE or non-convergence. Above about 20%, the circuit definition was so different that the algorithm became unstable and the step-rejection algorithm had to start correcting the NI by repeatedly reducing the size of the integration step.
The values in the table characterize the number of iterations required to complete the transient simulation of a given circuit. For NN simulations, these are integer counts of the average number of iterations for the circuit. In the table, TR denotes the TRAPEZ method, TRP denotes TRAPEZ with predictor-corrector, and GR is the GEAR method. The suffix N indicates NNSE step estimation. The percentages denote the variance of the component parameters in the circuit.

Results
The bolded values highlight the algorithm that achieved the best result, i.e., the lowest number of iterations needed to achieve the same result. However, this does not mean that in the other cases the NNSE algorithm does not also perform as well as or better than the standard algorithm. For the CMOS Multiplier example, the algorithm achieved virtually identical or better results than the standard algorithms in all cases. Figure 3 shows the dependence of the minimized number of iteration steps required to compute a given circuit simulation using NNSE on the percentage deviation from the original circuit parameters. A clear trend can be seen: from a certain point onwards, our trained NNs cease to be relevant to the properties of the simulation of a given circuit. A possible solution would be to generate a larger neural network for a larger input dataset. However, this did not prove practical, as the larger set degraded the overall accuracy of the NN and thus its performance against the standard methods at low variations of the circuit parameters.
The stability of the NI step computation can be seen in Figure 5. It shows the progression of several steps using the original algorithm (dashed line) and the neural-network-based algorithm (solid line). This plot was generated for a variance of the circuit variables within 1% of the original circuit. There are noticeable differences from the original step magnitudes, but of such an order that they do not affect the calculation process. What is important to note, however, is that the NNSE run avoids the divergence that occurs with the original method; hence, the NNSE curve is one value shorter than the original calculation process.

Computational Setup
All calculations were carried out on a desktop computer with an Intel Core i9-9900K (3.60 GHz × 16), 31.3 GB of RAM, and an NVIDIA GeForce RTX 2080 Ti graphics card with 11 GB of memory.

Conclusions
We presented a NI step prediction mechanism, NNSE, based on an NN. This network was built to define the integration of a given circuit over a range of circuit parameters and components used. The goal of the new algorithm was to estimate the integration step size based on the trained NN so as to minimize as much as possible the number of non-convergences and restarts of the NI due to incorrect step size. The proposed method was tested in NgSpice, an open-source alternative to SPICE. The algorithm has been implemented as a companion to the standard algorithms for computing numerical integration in SPICE, i.e., the TRAPEZ and GEAR methods. The original step computation was used to train the NN, and a new simulation option parameter was used to specify which trained NN should be used. The method was shown to be able to precompute step sizes for certain circuits if correctly classified. In these cases, it reduced the number of iterations. Each circuit had a specially trained network. For real use, the process needs to be supplemented with an algorithm, or another NN, that would correctly classify the circuit according to the components involved and then select the correct NN. Training a generic network that could then be used for all types of simulation problems is certainly also open for discussion.
Author Contributions: Neural network step estimator algorithm, analyzing current status of the NgSpice simulator, algorithm for training of neural network step estimator, implementation of the algorithm into NgSpice, creation of graphics D.Č.; artificial neural network arrangement, algorithms for numerical integration, comparing and testing methods, J.D. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Czech Science Foundation under the grant No. GA20-26849S.