3.1. Generic Sources of Errors
The implementation of an accurate hardware-in-the-loop model must take into account the sources of error, how large their contributions are, and to what extent they can be reduced. The designer of a hardware-in-the-loop system must perform a trade-off between better accuracy and limiting factors, such as resource usage and timing constraints, which translate into the cost of the system. Some sources of error are generic to all hardware-in-the-loop models. They include the numerical method chosen, the value of the time step Δt, the numerical representation of the state variables, and whether or not n-th order losses are modeled.
The error arising from the numerical method chosen is well known, as explained in the previous section. A system performing fourth-order Runge–Kutta calculations will need more FPGA resources and more time per computing cycle than one using the Euler method, as four intermediate evaluations are required to calculate a new value of the state variable.
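This cost/accuracy difference can be seen in a small sketch (using a generic test equation, not the converter model): RK4 needs four derivative evaluations per step where Euler needs one, but its error is orders of magnitude smaller at the same Δt.

```python
import math

def euler_step(f, t, x, dt):
    # One first-order Euler step: a single evaluation of f per cycle.
    return x + dt * f(t, x)

def rk4_step(f, t, x, dt):
    # One fourth-order Runge-Kutta step: four evaluations of f per cycle,
    # hence roughly four times the arithmetic (and hardware) per update.
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test equation dx/dt = -x, whose exact solution is x(t) = exp(-t).
f = lambda t, x: -x
dt, x_euler, x_rk4 = 0.1, 1.0, 1.0
for k in range(10):  # integrate from t = 0 to t = 1
    x_euler = euler_step(f, k * dt, x_euler, dt)
    x_rk4 = rk4_step(f, k * dt, x_rk4, dt)

exact = math.exp(-1.0)
print(abs(x_euler - exact))  # around 2e-2
print(abs(x_rk4 - exact))    # below 1e-6
```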
The time step Δt is usually chosen to be the smallest possible in order to make a more accurate approximation of the differential equation and thus reduce the error [22]. This is also critical for real-time operation, since the smaller the Δt, the smaller the calculation delay will be. Limiting factors are the capabilities of the selected device and the time required by the selected calculation method. Once Δt is fixed, if the error obtained is not acceptable, then other design aspects must be changed: encoding with more bits and using more precise methods would be the most obvious choices.
The state variables must be represented with a finite number of bits. This limits the resolution when storing the values and causes rounding losses after each calculation step. The more accuracy desired, the more bits must be dedicated to storing the values. Furthermore, the chosen representation is also part of the design trade-off. Floating-point representations, typically based on the IEEE 754 standard, ease the coding process, since the designer does not need to consider signal dynamic ranges: the floating-point mathematical libraries handle this transparently for the user. The cost in this case is a more complex logic circuit, which imposes limits on the device timing. Fixed-point representations deliver less complex logic circuits, which translates into faster calculations, but the designer must be careful to select the appropriate representation for each signal in the circuit.
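The resolution trade-off can be illustrated with a toy quantizer (a sketch only; real fixed-point designs use signed two's-complement registers and must also handle overflow, which is omitted here):

```python
def to_fixed(x, frac_bits):
    # Quantize x to a fixed-point grid with `frac_bits` fractional bits,
    # rounding to the nearest representable step, as a register would store it.
    step = 2.0 ** -frac_bits
    return round(x / step) * step

v = 0.123456789
for frac_bits in (8, 16, 24):
    q = to_fixed(v, frac_bits)
    # Rounding error is bounded by half a step: 2**-(frac_bits + 1).
    print(frac_bits, q, abs(q - v))
```

Each additional fractional bit halves the worst-case rounding loss, at the cost of wider registers and arithmetic units.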
Another source of error is the mis-detection of the duty cycle: the state of the switch or switches in the converter is sampled periodically. The sampling frequency can be equal to the frequency of the model update, but oversampling is usually applied [25,26]. If the sampling frequency is relatively close to the switching frequency, the detected duty cycle may differ from the real duty cycle. Oversampling techniques can partially mitigate this issue, at the cost of more complex hardware, as the extra information must be computed by the model.
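The effect can be reproduced with a toy counter model (the 37% duty and the sample counts are assumed illustrative values, not figures from this work): the detected duty is simply the fraction of samples that catch the gate signal high.

```python
def detected_duty(samples_per_period, true_duty, periods=50):
    # The gate is high for the first true_duty fraction of each switching
    # period; the model samples it samples_per_period times per period and
    # estimates the duty as the fraction of high samples.
    n = samples_per_period * periods
    high = sum(1 for i in range(n)
               if (i % samples_per_period) < true_duty * samples_per_period)
    return high / n

print(detected_duty(10, 0.37))    # coarse sampling detects 0.40: 3% error
print(detected_duty(1000, 0.37))  # 100x oversampling detects 0.37 exactly
```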
The modeling of first- and higher-order losses also adds accuracy to the hardware-in-the-loop system, at the cost of additional computations per cycle. For the sake of simplicity, these losses were not considered here.
3.2. Model-Specific Sources of Error
There are other sources of error, specific to each circuit model, which must be studied case by case. In the case of the synchronous buck converter, or other converters with two synchronous complementary switches, such as half-bridge modules, relevant errors arise when the inductor current crosses zero during a deadtime. It was shown in [24] that the expected accuracy of the fourth-order Runge–Kutta method was not reached because the inductor current modeling did not adequately represent these zero-crossing events. This issue cannot be fixed with oversampling techniques, as the source of the error is not in the input sampling but in the internal calculations, where the equations applied change depending on the sign of the current. Therefore, this detected limitation makes it necessary to characterize the error, find the circumstances under which it most affects the calculation results, and propose a solution.
During a deadtime, no energy is supplied by the voltage source; only the inductor and capacitor provide energy to the load, energy that they have previously stored. Thus, the inductor current can only decrease until it reaches zero, and it remains at zero until the voltage source provides energy again after one of the switches goes into the closed state. The simulation algorithm must take this into account and reproduce this physical behavior. The algorithm calculates the state variables iL and vC at fixed time steps.
The source of error in the simulation comes from the fact that at some point in time the inductor current reaches zero and must remain at zero. This moment does not necessarily happen at the time of a step calculation but can correspond to an instant which, from the physical point of view, lies within the simulation step.
A straightforward approach to handle this situation is to set the inductor current to zero when it reaches zero during a deadtime.
Figure 2 shows this situation: the inductor current reaches zero at some point within the time step, so it is forced to be zero at the end of that step.
The calculation algorithm for each time step of the numerical simulation can be expressed with the pseudo-code of Algorithm 2.
Algorithm 2 State variable calculation with basic saturation behavior
1: x[k+1] ← calculateRK4()
2: if deadtime AND iL[k] ≥ 0 AND iL[k+1] < 0 then ▹ iL crosses zero during a deadtime
3:     iL[k+1] ← 0
4: end if
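A minimal Python sketch of this saturation step follows, assuming illustrative component values (not those of Table 1), a standard RK4 update, and a deadtime flag supplied by the switch-state detection; during a deadtime neither switch conducts, so 0 V is applied at the inductor input node.

```python
import numpy as np

# Illustrative buck-converter parameters, not the values of Table 1.
L, C, R = 100e-6, 100e-6, 30.0
dt = 1e-6

def deriv(state, v_in):
    # State vector x = [iL, vC]; v_in is the voltage at the inductor
    # input node (0 V during a deadtime, when neither switch conducts).
    iL, vC = state
    return np.array([(v_in - vC) / L, iL / C - vC / (R * C)])

def rk4_step(state, v_in):
    k1 = deriv(state, v_in)
    k2 = deriv(state + dt / 2 * k1, v_in)
    k3 = deriv(state + dt / 2 * k2, v_in)
    k4 = deriv(state + dt * k3, v_in)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def step_with_saturation(state, v_in, deadtime):
    # Saturation behavior: compute the RK4 update, then force iL to zero
    # if it crossed from non-negative to negative during a deadtime.
    new = rk4_step(state, v_in)
    if deadtime and state[0] >= 0 and new[0] < 0:
        new[0] = 0.0
    return new

# A small positive iL entering a deadtime decays and is clamped at zero.
state = np.array([0.05, 5.0])
for _ in range(10):
    state = step_with_saturation(state, 0.0, deadtime=True)
print(state)  # iL held at 0 A, vC still positive
```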
The value of iL at the end of the step is zero, in accordance with the physical equivalence of the model, so it might be argued that this solution is close enough because it delivers the correct value of iL at that moment. However, the variables iL and vC are interdependent: the calculation of vC at each step depends on the value of iL, the value of iL at the following cycle depends on the value of vC, and so on successively. This means that if the error in the calculation of iL at one step is significant, it will propagate to the next cycle, and then over the whole model.
To verify the magnitude of the error introduced by this approach, the circuit under analysis was simulated with the values of Table 1. This buck converter is a real battery former that is used to perform the initial charge of batteries. The three values for the resistor R correspond to the converter providing power to the load in different situations: high current, medium current, and low current. The battery-forming application requires low current at the initial and final charging stages, as the battery impedance is high at those points, and high current during the intermediate forming stage. Although the impedance of the real batteries used in the experiments dropped down to 1.2 Ω, experiments showed that any value of R lower than 7.5 Ω will not cause zero crossings in the current. This means that average inductor currents over 1.33 A do not lead to zero crossings during deadtimes. Therefore, the value of 7.5 Ω is used as the high-current case.
As stated before, when the load requires a higher current (modeled with R = 7.5 Ω), zero-crossing events during deadtimes do not take place. This means that the “if” condition in Algorithm 2 is never fulfilled and the inductor current is never forced to zero.
This can be used as the reference case to assess the contribution of the zero-crossing events to the global error. As zero-crossing events do not happen, the main contributor to the error must be the discretization inaccuracies inherent to the fourth-order Runge–Kutta method used.
When the load requires less current (modeled with R = 30 Ω), the zero-crossing events take place during a significant number of the simulated cycles. In 50 switching cycles, there are 39 in which the current crosses zero during a deadtime, 78 percent of the total. This means that the “if–then” block of Algorithm 2 is entered in 78 percent of the simulation cycles.
Figure 3 shows the simulated values of the capacitor voltage and inductor current in this case. An intermediate situation was also modeled using R = 15 Ω. In this case, only one switching cycle has a zero-crossing event.
The reference for the comparison was a simulation with a Δt as low as 10 ns. As explained above, the smaller the Δt, the smaller the inherent error and the lower the errors caused by mis-detection of the duty cycle or of zero-crossing events. The Δt of 10 ns is three orders of magnitude lower than the deadtimes, which are 10 μs long. This ensures that any inaccuracy in the modeling of deadtimes or any other event is very small in the reference. Then, simulations with larger Δt are performed and compared with the reference. The values chosen for Δt were 100 ns, 1 μs, and 10 μs. These values were chosen because many commercial systems achieve a maximum simulation step between 500 ns and 1 μs. The case with the largest Δt matches the length of the deadtime; in this particular case, it means that the state variables are calculated only once during one complete deadtime. If the main contributor to the error is the inaccuracy caused by the value of Δt, then with any of the three resistance values, the difference between the reference with Δt = 10 ns and the calculation with Δt = 10 μs should be of the same order of magnitude.
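This comparison methodology can be sketched as follows, under assumed component values and initial conditions (not those of Table 1): run a fine-step RK4 simulation as the reference, then repeat with coarser steps and measure the deviation.

```python
import numpy as np

# Illustrative second-order dynamics (inductor current iL, capacitor
# voltage vC, assumed L, C, R values) solved with RK4 at several time
# steps and compared against a fine-step reference run.
L, C, R = 100e-6, 100e-6, 30.0

def deriv(x):
    iL, vC = x
    return np.array([-vC / L, iL / C - vC / (R * C)])

def simulate(dt, t_end=1e-4):
    x = np.array([1.0, 5.0])  # initial iL = 1 A, vC = 5 V
    for _ in range(int(round(t_end / dt))):
        k1 = deriv(x)
        k2 = deriv(x + dt / 2 * k1)
        k3 = deriv(x + dt / 2 * k2)
        k4 = deriv(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

ref = simulate(dt=1e-8)        # fine-step reference, as in the text
for dt in (1e-7, 1e-6, 1e-5):  # coarser steps under test
    print(dt, np.abs(simulate(dt) - ref))
```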
The result for the simulation with no zero-crossing events is that the calculation with Δt = 10 μs, compared with the reference obtained with Δt = 10 ns, shows a small error both for iL and for vC. However, in both cases with zero-crossing events, the simulation error grows six orders of magnitude larger at Δt = 10 μs, even in the case with only one event in fifty cycles. The conclusion is that this difference in error values is caused by the miscalculation of iL and vC at the zero-crossing events, because the other contributors to the error do not differ between these three calculations: fourth-order Runge–Kutta was used, with 64-bit floating-point representation and no modeling of first-order losses, in all of them.
Table 2 summarizes these results.
This can be further verified by comparing the error of the three models in the calculation of vC and iL as a function of Δt in a log–log graph for the three values of R (Figure 4). In the reference case with no current zero-crossing events, the relation between the global error and Δt will be a line with slope 4, as corresponds to a numerical method of order 4.
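The slope-4 relation can be checked numerically on a simple test equation (an assumed stand-in, not the converter model): halving Δt should reduce RK4's global error by roughly 2⁴ = 16, so the fitted log-log slope comes out close to 4.

```python
import numpy as np

def rk4_solve(dt):
    # Integrate dx/dt = -x from x(0) = 1 to t = 1 with fourth-order RK.
    x = 1.0
    for _ in range(int(round(1.0 / dt))):
        k1 = -x
        k2 = -(x + dt / 2 * k1)
        k3 = -(x + dt / 2 * k2)
        k4 = -(x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

dts = np.array([0.2, 0.1, 0.05])
errs = np.array([abs(rk4_solve(dt) - np.exp(-1.0)) for dt in dts])
# Slope of log(error) vs log(dt): close to 4 for a fourth-order method.
slopes = np.diff(np.log(errs)) / np.diff(np.log(dts))
print(slopes)
```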
It can be seen that the model with R = 7.5 Ω approximates the ideal behavior: the error in the calculation of both the capacitor voltage and the inductor current resembles a line with slope 4. Values of Δt smaller than 10 ns are not used because the resolution limits of the 64-bit floating-point representation are reached and the smallest representable increment (machine epsilon) cannot properly represent the very small values needed for an accurate calculation. Values of Δt larger than 10 μs were also not used because the deadtime of 10 μs would not be detected by the model. The error curves for the other values of R do not resemble a line with a slope of 4. This points to the fact that the miscalculation of iL during deadtimes offsets the high accuracy provided by the fourth-order Runge–Kutta method. As the variables iL and vC are interdependent, the error caused by the miscalculation of iL propagates to vC, and the two error curves are very similar.
In real conditions, HIL systems do not require such a level of accuracy, as other sources of error are also present. However, this source of error prevents the use of Runge–Kutta and other accurate numerical methods, which become necessary in more complex models, at integration steps around 1 μs, which is a typical commercial value. Therefore, the goal of this proposal is to enable the benefits of accurate numerical methods, such as Runge–Kutta, even when relatively large integration steps are used during deadtime events.