3.1. Generic Sources of Errors
The implementation of an accurate hardware-in-the-loop model must take into account the sources of errors, how large their contribution is, and to what extent they can be reduced. The designer of a hardware-in-the-loop system must trade off accuracy against limiting factors, such as resource usage and timing constraints, which translate into the cost of the system. Some sources of error are generic to all hardware-in-the-loop models. They include the numerical method chosen, the value of $dt$, the numerical representation of the state variables, and whether nth-order losses are modeled.
The error arising from the chosen numerical method is well known, as explained in the previous section. A system performing fourth-order Runge–Kutta calculations will need more FPGA resources and more time per computing cycle than one using the Euler method, as there are four steps to calculate a new value of the state variable.
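This cost difference can be illustrated with a small sketch. The following hypothetical Python example (not the authors' FPGA implementation; the RL circuit and component values are illustrative assumptions) contrasts one Euler step, which evaluates the derivative function once, with one fourth-order Runge–Kutta step, which evaluates it four times:

```python
import math

# Hypothetical sketch: per-step cost of Euler vs. fourth-order
# Runge-Kutta (RK4) on an RL charging circuit,
#   L * di/dt = V - R * i
# The component values are illustrative, not those of the paper.
V, R, L = 12.0, 6.0, 1e-3

def f(i):
    """Derivative of the state variable (inductor current)."""
    return (V - R * i) / L

def euler_step(i, dt):
    """One evaluation of f per step."""
    return i + dt * f(i)

def rk4_step(i, dt):
    """Four evaluations of f per step: the extra accuracy costs
    roughly four times the resources/time per computing cycle."""
    k1 = f(i)
    k2 = f(i + dt / 2 * k1)
    k3 = f(i + dt / 2 * k2)
    k4 = f(i + dt * k3)
    return i + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 1e-5, 100
i_e = i_r = 0.0
for _ in range(steps):
    i_e = euler_step(i_e, dt)
    i_r = rk4_step(i_r, dt)

i_exact = V / R * (1 - math.exp(-R / L * dt * steps))  # analytic solution
```

With these illustrative values, the RK4 result stays several orders of magnitude closer to the analytic solution than the Euler result, at four times the evaluation cost per step.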
The time step $dt$ is usually chosen to be as small as possible in order to approximate the differential equation more accurately and thus reduce the error [22]. This is also critical for real-time operation, since the smaller the $dt$, the smaller the calculation delay. Limiting factors are the capabilities of the selected device and the time required by the selected calculation method. Once $dt$ is fixed, if the error obtained is not acceptable, then other design aspects must be changed: encoding with more bits and using more precise methods are the most obvious choices.
The state variables must be represented with a finite number of bits. This limits the resolution when storing the values and causes rounding losses after each calculation step. The more accuracy desired, the more bits must be dedicated to storing the values. Furthermore, the chosen representation is also part of the design tradeoff. Floating-point representations, typically based on the IEEE 754 standard, ease the coding process, since the designer does not need to consider signal dynamic ranges; the floating-point mathematical libraries handle this transparently for the user. The cost in this case is a more complex logic circuit, which imposes limits on the device timing. Fixed-point representations deliver less complex logic circuits, which translates into faster calculations, but the designer must be careful to select the appropriate representation for each signal in the circuit.
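The rounding loss of a fixed-point representation can be sketched as follows. This is a hypothetical illustration (the Q-format and the increment are arbitrary choices, not values from the paper): a signed Qm.n format stores each value as an integer multiple of $2^{-n}$, so every store rounds to that resolution, and the per-step rounding accumulates over the simulation.

```python
# Hypothetical sketch of fixed-point rounding loss. A signed Qm.n
# format stores a value as an integer multiple of 2**-n, so every
# store rounds to that resolution.

def quantize(value, frac_bits):
    """Round to the nearest representable fixed-point value."""
    step = 2.0 ** -frac_bits
    return round(value / step) * step

FRAC_BITS = 15                      # e.g., a 16-bit Q1.15 signal
LSB = 2.0 ** -FRAC_BITS             # resolution ~3.05e-5; worst-case
                                    # rounding error per store is LSB/2

# Accumulate a small increment with and without per-step rounding,
# mimicking a repeated state-variable update:
exact = fixed = 0.0
for _ in range(1000):
    exact += 1e-4
    fixed = quantize(fixed + 1e-4, FRAC_BITS)

drift = abs(fixed - exact)          # accumulated rounding loss
```

Because the increment is not an exact multiple of the LSB, each update loses a fraction of an LSB, and after 1000 steps the quantized accumulator has drifted visibly from the exact sum. Dedicating more fractional bits shrinks the LSB and the drift accordingly.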
Another source of error is the misdetection of the duty cycle: the state of the switch or switches in the converter is sampled periodically. The sampling frequency can be equal to the frequency of the model update, but oversampling is usually applied [25,26]. If the sampling frequency is relatively close to the switching frequency, the detected duty cycle may differ from the real duty cycle. Oversampling techniques can partially reduce this issue, at the cost of more complex hardware, as the extra information must be processed by the model.
The modeling of first- and higher-order losses also brings additional accuracy to the hardware-in-the-loop system, at the cost of additional computations per cycle. For the sake of simplicity, they were not considered here.
3.2. Model-Specific Sources of Error
There are other sources of errors, specific to each circuit model, that must be studied case by case. In the case of the synchronous buck converter, or other converters with two synchronous complementary switches, such as half-bridge modules, relevant errors arise when the inductor current crosses zero during a dead-time. It was shown in [24] that the expected accuracy of the fourth-order Runge–Kutta method was not reached because the inductor current model did not adequately represent these zero-crossing events. This issue cannot be fixed with oversampling techniques, as the source of the error is not in the sampling of the inputs but in the internal calculations, where the equations to be applied change depending on the sign of the current. This limitation makes it necessary to characterize the error, find the circumstances under which it most affects the calculation results, and propose a solution.
During a dead-time, no energy is supplied by the voltage source. Only the inductor and capacitor provide energy to the load, energy that they have previously stored. Thus, the inductor current can only decrease until it reaches zero, and it remains zero until the voltage source provides energy again after one of the switches closes. The simulation algorithm must take this into account and reproduce this physical behavior. The algorithm calculates the state variables ${i}_{L}$ and ${v}_{C}$ at fixed time steps.
The source of error in the simulation comes from the fact that at some point in time the inductor current reaches zero and must remain at zero. This moment does not necessarily coincide with the time of a step calculation; physically, it may correspond to an instant that lies within the simulation step.
A straightforward approach to handle this situation is to set the inductor current to zero when it reaches zero during a dead-time.
Figure 2 shows this situation: the inductor current reaches zero at some point within the $dt$ time step, so it is forced to be zero at ${t}_{n+1}$.
The calculation algorithm for each time step of the numerical simulation can be expressed with the pseudocode of Algorithm 2.
Algorithm 2 State variable calculation with basic ${i}_{L}$ saturation behavior

1: $(i{L}_{n+1},v{C}_{n+1})\leftarrow$ calculateRK4($i{L}_{n},v{C}_{n},dt,R,L,C,S1,S2,{v}_{s}$)
2: if $sign\left(i{L}_{n+1}\right)\ne sign\left(i{L}_{n}\right)$ AND $S1=open$ AND $S2=open$ then ▹ ${i}_{L}$ crosses zero during a dead-time
3:  $i{L}_{n+1}\leftarrow 0$
4: end if
The value of ${i}_{L}$ at ${t}_{n+1}$ is zero, in accordance with the physical behavior of the modeled circuit, so it might be argued that this solution is close enough because it delivers the correct value of ${i}_{L}$ at the moment ${t}_{n+1}$. However, the variables ${i}_{L}$ and ${v}_{C}$ are interdependent: the calculation of ${v}_{C}$ at ${t}_{n+1}$ depends on the value of ${i}_{L}$, the value of ${i}_{L}$ at the following cycle ${t}_{n+2}$ depends on the value of ${v}_{C}$ at ${t}_{n+1}$, and so on. This means that if the error in the calculation of ${v}_{C}$ at ${t}_{n+1}$ is significant, it will propagate to ${i}_{L}$ in the next cycle ${t}_{n+2}$, and then through the whole model.
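For concreteness, Algorithm 2 can be sketched in Python. This is a hypothetical rendering, not the authors' FPGA implementation: it assumes ideal synchronous buck state equations with no losses modeled (during a dead-time the freewheeling path makes the inductor see $-v_C$), and the component values in the test are illustrative.

```python
# Hypothetical rendering of Algorithm 2 for an ideal synchronous buck
# converter (no losses modeled). Assumed state equations:
#   L * diL/dt = vs - vC   when S1 is closed
#   L * diL/dt = -vC       when S2 is closed or during a dead-time
#   C * dvC/dt = iL - vC / R

def derivatives(iL, vC, R, L, C, s1_closed, vs):
    """State derivatives; the inductor voltage depends on S1."""
    diL = ((vs if s1_closed else 0.0) - vC) / L
    dvC = (iL - vC / R) / C
    return diL, dvC

def rk4_step_with_saturation(iL, vC, dt, R, L, C, s1, s2, vs):
    """One RK4 step of Algorithm 2: if iL changes sign during a
    dead-time (S1 and S2 both open), it is forced to zero."""
    s1_closed = (s1 == "closed")
    k1i, k1v = derivatives(iL, vC, R, L, C, s1_closed, vs)
    k2i, k2v = derivatives(iL + dt/2*k1i, vC + dt/2*k1v, R, L, C, s1_closed, vs)
    k3i, k3v = derivatives(iL + dt/2*k2i, vC + dt/2*k2v, R, L, C, s1_closed, vs)
    k4i, k4v = derivatives(iL + dt*k3i, vC + dt*k3v, R, L, C, s1_closed, vs)
    iL_next = iL + dt/6*(k1i + 2*k2i + 2*k3i + k4i)
    vC_next = vC + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    # Basic saturation: iL crossed zero during a dead-time
    if s1 == "open" and s2 == "open" and (iL_next < 0) != (iL < 0):
        iL_next = 0.0
    return iL_next, vC_next
```

For example, starting a dead-time step from a small positive ${i}_{L}$, the raw RK4 result would go negative and is therefore clamped to zero, exactly as in the "if–then" block of Algorithm 2; with either switch closed, the same step is left unclamped.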
To verify the magnitude of the error introduced by this approach, the circuit under analysis was simulated with the values of Table 1. This buck converter is a real battery former used for the initial charge of batteries. The three values of the resistor R correspond to the converter providing power to the load in different situations: high current, medium current, and low current. The battery-forming application requires low current at the initial and final charging stages, as the battery impedance is high at those points, and high current during the intermediate forming stage. Although the impedance of the real batteries used in the experiments dropped to 1.2 $\Omega $, experiments showed that any value of R lower than 7.5 $\Omega $ will not cause zero crossings in the current. This means that average inductor currents above 1.33 A do not produce zero crossings during dead-times. Therefore, the value of 7.5 $\Omega $ is used as the high-current case.
As stated before, when the load requires a higher current (modeled with R = 7.5 $\Omega $), zero-crossing events during dead-times do not take place. This means that the "if" condition in Algorithm 2 is never fulfilled and the inductor current is never forced to zero. This can therefore be used as the reference case to assess the contribution of the zero-crossing events to the global error: since zero-crossing events do not happen, the main contributor to the error must be the discretization inaccuracies inherent to the fourth-order Runge–Kutta method used.
When the load requires less current (modeled with R = 30 $\Omega $), zero-crossing events take place during a significant number of the simulated cycles. In 50 switching cycles, there are 39 in which the current crosses zero during a dead-time, 78 percent of the total. This means that the "if–then" block of Algorithm 2 is entered in 78 percent of the simulation cycles. Figure 3 shows the simulated values of the capacitor voltage and inductor current in this case. An intermediate situation was also modeled using R = 15 $\Omega $; in this case, only one switching cycle has a zero-crossing event.
The reference for the comparison was the simulation with a $dt$ as low as 10 ns. As explained above, the smaller the $dt$, the smaller the inherent error and the lower the errors caused by misdetection of the duty cycle or zero-crossing events. The $dt$ of 10 ns is three orders of magnitude lower than the dead-times, which are 10 $\mathsf{\mu}$s long. This ensures that any inaccuracy in the modeling of dead-times or any other event is very small in the reference. Then, simulations with larger $dt$ are performed and compared with the reference. The values chosen for $dt$ were 100 ns, 1 $\mathsf{\mu}$s, and 10 $\mathsf{\mu}$s; these values were chosen because many commercial systems achieve a maximum simulation step between 500 ns and 1 $\mathsf{\mu}$s. The case with the largest $dt$ matches the length of the dead-time, which means that the state variables are calculated only once during a complete dead-time. If the main contributor to the error were the inaccuracy caused by the value of $dt$, then with any of the three resistance values, the difference between a reference with $dt=10\phantom{\rule{4pt}{0ex}}\mathrm{n}$s and the calculation with $dt=1\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s should be of the same order of magnitude.
The result for the simulation with no zero-crossing events is that the calculation with $dt=1\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s, compared with the reference obtained with $dt=10\phantom{\rule{4pt}{0ex}}\mathrm{n}$s, has an error of $4.57\times {10}^{-12}$ A for ${i}_{L}$ and $2.42\times {10}^{-11}$ V for ${v}_{C}$. However, in both cases with zero-crossing events, the simulation error grows six orders of magnitude larger at $dt=1\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s, even in the case with only one event in fifty cycles. The conclusion is that this difference in error values is caused by the miscalculation of ${i}_{L}$ and ${v}_{C}$ at the zero-crossing events, because the other contributors to the error do not differ between these three calculations: fourth-order Runge–Kutta was used in all of them, with 64-bit floating-point representation and no modeling of first-order losses.
Table 2 summarizes these results.
This can be further verified by comparing the error of the three models in the calculation of ${v}_{C}$ and ${i}_{L}$ as a function of $dt$ in a log-log graph for the three values of R (Figure 4). In the reference case with no current zero-crossing events, the relation between the global error and $dt$ will be a line with slope 4, as corresponds to a numerical method of order $\mathcal{O}\left(d{t}^{4}\right)$.
It can be seen that the model with $R=7.5\phantom{\rule{4pt}{0ex}}\Omega $ approximates the ideal behavior: the error in the calculation of both the capacitor voltage ${v}_{C}$ and the inductor current ${i}_{L}$ resembles a line with slope 4. Values of $dt$ smaller than 10 ns are not used because the resolution limits of the 64-bit floating-point representation are reached and its finite precision cannot properly represent the very small error values involved in an accurate calculation. Values larger than $10\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s were also not used because the dead-time of $10\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s would not be detected by the model. The error curves of the other values of R do not resemble a line with a slope of 4. This points to the fact that the miscalculation of ${i}_{L}$ during dead-times offsets the high accuracy provided by the fourth-order Runge–Kutta method. As the variables ${i}_{L}$ and ${v}_{C}$ are interdependent, the error caused by the miscalculation of ${i}_{L}$ propagates to ${v}_{C}$, and the curves are very similar.
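The slope-4 behavior is a general property of fourth-order methods on smooth problems and can be reproduced with a short sketch. The following hypothetical example (a scalar test equation, not the converter model) halves the step size and checks that the RK4 global error drops by roughly $2^4 = 16$, i.e., a slope of 4 on a log-log plot:

```python
import math

# Hypothetical check of O(dt^4) behavior: integrate the smooth test
# problem x' = -x, x(0) = 1, over t = 1 with RK4 at two step sizes
# and compare global errors against the exact solution exp(-1).

def rk4_solve(dt, t_end=1.0):
    x, steps = 1.0, round(t_end / dt)
    f = lambda x: -x
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2)
        k4 = f(x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

err_coarse = abs(rk4_solve(0.1) - math.exp(-1))
err_fine = abs(rk4_solve(0.05) - math.exp(-1))
ratio = err_coarse / err_fine   # near 16 for a fourth-order method
```

When the zero-crossing miscalculation dominates, as in the $R=30\phantom{\rule{4pt}{0ex}}\Omega $ and $R=15\phantom{\rule{4pt}{0ex}}\Omega $ curves of Figure 4, this error ratio under step halving is lost, which is precisely what flattens the curves away from slope 4.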
In real conditions, HIL systems do not require such a level of accuracy (around ${10}^{-5}$ or lower), as other sources of error are also present. However, this source of error prevents the use of Runge–Kutta or other accurate numerical methods, which become necessary in more complex models, for integration steps around $1\phantom{\rule{4pt}{0ex}}\mathsf{\mu}$s, which is a typical commercial value. Therefore, the goal of this proposal is to enable the benefits of accurate numerical methods, such as Runge–Kutta, even when relatively large integration steps are used during dead-time events.