4.1. Simulation Model Setup
To validate the effectiveness of the proposed neural network-optimized control strategy for the hybrid full-bridge push-pull topology, a bidirectional power supply simulation model consisting of a left-side full-bridge circuit and a right-side push-pull circuit was constructed in MATLAB R2024a/Simulink, as shown in Figure 4. The specific parameter settings of the simulation model are listed in Table 1. Based on this model, a control program was developed according to the optimized control strategy to simulate and analyze the system's performance under different operating modes. The simulation parameters, including the input and output voltages, load resistance, and switching frequency, were set according to the actual design requirements.
This model accurately simulates the actual working conditions of the bidirectional DC-DC converter, including driving power switches, electromagnetic conversion in the high-frequency transformer and output filtering. The model components and parameter settings are as follows:
The main circuit model consists of a full-bridge section comprising four power switches (Q1, Q2, Q3 and Q4), taking into account the parasitic capacitance and body diode effects of each switch. The push-pull section generates a DC output through diode rectification and capacitor filtering to simulate the rectification and filtering processes of the actual circuit. The high-frequency transformer is configured with an appropriate turns ratio (N), as well as primary and secondary resonant inductances (L_r), filtering inductance (L_c) and capacitors (C), to simulate the electromagnetic conversion and filtering effects within the actual transformer.
The corresponding control program was developed based on the optimized neural network control strategy. This program ensures precise control of the phase shift angle, duty cycle, and timing, optimizing the system’s dynamic performance and steady-state accuracy. The neural network module was trained offline to obtain optimized weights and biases, which were then adjusted during the simulation to accommodate the system’s requirements under different operating modes.
The simulation parameters were set according to the actual design requirements, using the input DC voltage Vin and the desired output DC voltage Vo. Various load resistance values (5 Ω, 10 Ω, 15 Ω and 20 Ω) were selected to simulate the system’s performance under different load conditions. The switching frequency was chosen to optimize the balance between system efficiency, dynamic response and switching losses. During the initialization phase of the simulation process, the parameters were configured and the weights and biases of the neural network module were initialized. During the operational phase, the simulation was launched and the model generated driving signals based on the control program to manage the switching actions of the full-bridge and push-pull sections while monitoring key system parameters, such as current and voltage.
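The actual model was configured in MATLAB/Simulink. Purely as an illustration of how such a parameter sweep over the four load conditions can be organized, the following Python sketch enumerates the load values used in the paper; the voltage and frequency values shown are placeholders, not the settings from Table 1.

```python
from dataclasses import dataclass

@dataclass
class SimParams:
    """Container for one simulation run. The numeric defaults are
    illustrative placeholders; the paper's actual values are in Table 1."""
    v_in: float = 48.0       # input DC voltage Vin [V] -- placeholder
    v_out_ref: float = 12.0  # desired output DC voltage Vo [V] -- placeholder
    f_sw: float = 100e3      # switching frequency [Hz] -- placeholder
    r_load: float = 10.0     # load resistance [ohm]

# Sweep the four load conditions used in the paper (5, 10, 15, 20 ohm).
load_values = [5.0, 10.0, 15.0, 20.0]
runs = [SimParams(r_load=r) for r in load_values]

for p in runs:
    # Nominal power a resistive load would draw at the reference voltage.
    p_out_nominal = p.v_out_ref ** 2 / p.r_load
    print(f"R = {p.r_load:4.1f} ohm -> nominal P_out = {p_out_nominal:.2f} W")
```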
Key data such as inductive current, output voltage and phase shift angle were recorded during the simulation process, followed by subsequent analysis and processing. By constructing the above simulation model and setting the parameters, we were able to simulate the performance of the bidirectional DC-DC converter comprehensively under different operating modes.
4.2. Results Analysis
Based on observations of the neural network MSE curve, the Mean Squared Error (MSE) exhibited a significant exponential decline during the initial phase of training (iterations 1 to 2). This reflects the neural network’s ability to optimize rapidly within the parameter space. The sharp variation in this phase is due to the large deviations caused by the randomization of the initial weights. These initial weights are usually generated using a particular distribution, such as He or Xavier initialization, and often differ significantly from the actual data distribution. This results in a larger adjustment space for the gradient direction. The network uses the backpropagation algorithm to calculate the error gradient layer by layer via the chain rule, updating the weights along the negative gradient direction using gradient descent. This accumulation of parameter corrections results in a sharp decrease in error during the initial phase.
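The layer-by-layer gradient computation via the chain rule and the negative-gradient weight update described above can be sketched as follows. This is a minimal NumPy example with an illustrative one-hidden-layer network and He-style initialization; it is not the network architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network; He-style initialization as mentioned in the text.
n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0, np.sqrt(2 / n_in), (n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(0, np.sqrt(2 / n_hid), (n_hid, n_out))
b2 = np.zeros(n_out)

def mse_step(X, y, lr=0.1):
    """One gradient-descent step on the MSE loss; returns the loss before the update."""
    global W1, b1, W2, b2
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)        # ReLU hidden layer
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)
    # Backward pass: error gradient propagated layer by layer via the chain rule
    n = len(X)
    g_out = 2 * (y_hat - y) / n           # dL/dy_hat
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (h > 0)        # ReLU gradient
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    # Update all parameters along the negative gradient direction
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    return loss

X = rng.uniform(-1, 1, (64, n_in))
y = (X[:, :1] * X[:, 1:]).reshape(-1, 1)  # toy regression target
losses = [mse_step(X, y) for _ in range(50)]
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The sharp early drop in loss visible in such a run mirrors the behavior described above: randomly initialized weights start far from the data distribution, so the first gradient steps make the largest corrections.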
Firstly, in terms of neural network MSE (see Figure 5), it is clear that the system exhibits significant advantages after neural network optimization. As the number of iterations increases, the mean squared error (MSE) value decreases rapidly and then stabilizes, indicating that the neural network effectively reduces prediction errors during training and quickly converges to the optimal solution. This suggests that the neural network is highly efficient at optimizing and has strong fitting capabilities when adjusting circuit parameters. During the initial training stage (iterations 1 to 2), the MSE value dropped rapidly from approximately 0.02, demonstrating that the neural network efficiently optimizes the model parameters and enhances its ability to fit during this phase. The neural network quickly learns the effective parameters, leading to a substantial improvement in system performance over a short period. As training progresses (iterations 2 to 4), the rate of error reduction gradually decreases, but the MSE remains low, confirming that the neural network does not overfit with continued training and that the optimization process remains stable.
In conclusion, the low MSE value indicates that the circuit can effectively reduce errors during optimization and highlights the rapid convergence of the neural network, enabling the system to achieve optimal performance with fewer iterations. This demonstrates the powerful advantages of neural networks in circuit optimization. Throughout the neural network training process, we closely monitored the variation in MSE. Initially, due to the random initialization of network weights and biases, the MSE was relatively high. However, as the number of iterations increased, the MSE decreased rapidly and stabilized. Specifically, in the first 100 iterations, the MSE fell from an initial value of around 0.02 to below 0.0005 and remained at this extremely low level in subsequent iterations. This indicates that the neural network has strong learning capabilities and optimization potential, enabling it to converge quickly to the global optimal solution. The low MSE value means that the neural network can accurately predict the system’s dynamic characteristics, providing a solid basis for precisely implementing the control strategy.
Figure 6 compares the dynamic response characteristics of the inductive current and the output voltage in a power electronics system, showing the difference between the initial and optimized states. This clearly illustrates the significant improvement in system performance that is achieved through parameter optimization or control strategy enhancement. A comparison of the inductive current shows that in the initial state, it exhibited intense high-frequency oscillations with peak values close to ±10 A. The waveform also showed significant phase lag and distortion. After optimization, however, the current waveform’s amplitude was reduced to approximately ±8 A, representing a 20% reduction in fluctuation amplitude. The waveform was also significantly smoother, indicating enhanced stability and disturbance rejection capability in the system’s dynamic response. This improvement may be due to the fine-tuning of control parameters (e.g., PID parameters) or the use of new control algorithms (e.g., model predictive control or sliding mode control), enabling the system to adjust the duty cycle more quickly in response to load transients and suppress overshoot and oscillations in the inductive current. Additionally, optimizing circuit parameters (such as inductance and switching frequency) may have reduced the system’s equivalent impedance, thereby decreasing current ripple.
In terms of the output voltage response, the initial voltage waveform exhibited significant overshoot and steady-state error, with peak fluctuations approaching 8 V and a longer settling time. Following optimization, however, the overshoot was significantly reduced, the steady-state fluctuations were controlled to within 2 V and the waveform was much closer to an ideal sine or DC reference signal. These improvements demonstrate breakthroughs in the system’s voltage regulation accuracy and dynamic tracking performance. The optimization strategy may have increased the system’s resilience to input disturbances and load transients by introducing feedforward compensation, load current observers or enhanced modulation techniques, such as space vector modulation. For example, increasing the bandwidth of the voltage and current loops in the dual-loop control enables the system to respond more quickly to changes in output voltage, reducing the time required to return to steady state. Furthermore, the optimized system is likely to exhibit lower total harmonic distortion (THD) in the output voltage, which is essential for the stable operation of sensitive loads, such as communication equipment and precision instruments.
In terms of energy conversion efficiency, reducing inductive current fluctuations lowers conduction losses in switching devices and copper losses in magnetic components. Meanwhile, improved output voltage stability reduces energy fluctuation losses at the load end. These effects combined significantly enhance the system’s overall efficiency, particularly in high-frequency switching scenarios. The optimization also has engineering value in terms of controlling temperature rise and extending the lifespan of the devices. Furthermore, the optimized system performed better in terms of electromagnetic compatibility (EMC), as the reduced rate of change of current and voltage (di/dt and dv/dt) helps to decrease the intensity of electromagnetic interference (EMI), thus simplifying the design of filtering circuits.
However, it is worth noting that slight oscillations remain in the optimized response curve, potentially due to hardware parameters (e.g., sensor accuracy, switching device response speed) or unmodeled dynamics (e.g., the nonlinear characteristics of parasitic resistances and capacitances). Future research could integrate hardware-in-the-loop (HIL) simulations and adaptive control algorithms to further explore the system’s performance potential. In summary, the optimization strategy demonstrates substantial enhancements in dynamic response, steady-state accuracy, and energy efficiency, offering a valuable reference for the high-performance design of power electronics systems.
Figure 7 shows the relationship between the optimal phase shift angle and efficiency when load resistance varies under different load conditions. This analysis provides valuable insights into the dynamic performance of power electronics converters and similar energy conversion systems. Two key characteristics can be observed from the figure: first, within the load resistance range of 5 Ω to 20 Ω, the system’s optimal phase shift angle remained relatively stable, with the phase shift angle at all four test points consistently around 0.1, with minimal fluctuation. Second, system efficiency showed a negative correlation with load resistance. As load resistance increased from 5 Ω to 20 Ω, efficiency decreased from approximately 0.025 to 0.012, representing a reduction of over 50%.
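As a rough analytical reference point for phase-shift-controlled isolated bidirectional converters, the transferred power is often approximated by the dual-active-bridge relation P = n·V1·V2·φ(1 − φ/π)/(2π·f·L), with φ in radians. The sketch below evaluates this expression; all component values are illustrative assumptions, and the paper's hybrid topology and parameters may differ from this idealized model.

```python
import math

def dab_power(phi, v1=48.0, v2=12.0, n=4.0, f_sw=100e3, l_lk=10e-6):
    """Classic phase-shift power-transfer relation for DAB-type isolated
    converters: P = n*V1*V2/(2*pi*f*L) * phi * (1 - phi/pi), phi in radians.
    All component values here are illustrative, not the paper's."""
    return n * v1 * v2 / (2 * math.pi * f_sw * l_lk) * phi * (1 - phi / math.pi)

# Power at a small phase shift vs. the theoretical maximum at phi = pi/2.
p_small = dab_power(0.1 * math.pi)
p_max = dab_power(math.pi / 2)
print(f"P(0.1*pi) = {p_small:.1f} W, P(pi/2) = {p_max:.1f} W")
```

In this idealized relation, small phase shift angles transfer power with low circulating current, which is consistent with the controller holding the optimal phase shift near a small, stable value across the load range.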
From a control strategy perspective, the stability of the phase shift angle can be attributed to the system’s closed-loop control algorithm. As the load resistance changes, the controller maintains stability by adjusting the switching timing in real-time. This is a characteristic commonly observed in resonant topologies, such as LLC converters, during parameter design. A stable phase shift angle helps to maintain soft-switching conditions and reduce switching losses. However, the chart shows that this control strategy does not stop the efficiency from decreasing. This suggests that as the load resistance increases, the system may be experiencing an issue with an increasing proportion of conduction losses. As the load resistance increases, the output current decreases accordingly; however, losses related to current, such as copper losses in magnetic components (e.g., transformers) and wiring, may not decrease proportionally. This leads to an increased proportion of losses under light-load conditions. Furthermore, if the system has fixed standby power losses, their impact becomes more pronounced under light-load conditions.
The pattern of the efficiency curve revealed the performance characteristics of the system across a wide load range. Under heavy load conditions (5 Ω), the system is likely operating at full capacity, where conduction losses dominate; however, the relatively high output power results in a higher efficiency value. As the load resistance increases to 20 Ω, the system enters a light-load state. While switching losses may decrease due to frequency or phase adjustments, fixed losses (e.g., control circuit power consumption and core losses) become more significant, causing a continuous decline in efficiency. This phenomenon is particularly important in the design of practical power supplies, where engineers must often strike a balance between heavy-load efficiency and light-load standby power losses.
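The light-load efficiency decline described above can be reproduced qualitatively with a toy loss decomposition: a fixed overhead (control circuitry, core loss) plus a current-dependent conduction loss. All values below are illustrative assumptions chosen only to show the trend, not measurements from the simulation.

```python
def efficiency(r_load, v_out=12.0, p_fixed=1.0, r_cond=0.2):
    """Toy loss model: output power V^2/R, conduction loss I^2 * R_cond,
    plus a fixed overhead P_fixed. All values are illustrative."""
    i_out = v_out / r_load
    p_out = v_out * i_out
    p_loss = p_fixed + i_out ** 2 * r_cond
    return p_out / (p_out + p_loss)

# As R rises, output power falls faster than the fixed loss, so the fixed
# loss claims a growing share of the input power and efficiency drops.
for r in [5, 10, 15, 20]:
    print(f"R = {r:2d} ohm -> efficiency = {efficiency(r):.3f}")
```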
From a perspective of optimizing system design, these data suggest that dynamically adjusting the operating parameters could improve efficiency across the full load range.
Through multiple iterations of neural network optimization under varying load conditions (Figure 8), the system achieved more precise adaptive control of the output voltage. During the training process, the neural network gradually learned the mapping relationship between ‘load–optimal phase shift angle–output voltage’, such that under different loads ranging from 5 Ω to 20 Ω, the output voltage increased with load but maintained good periodicity and phase consistency overall, thus avoiding distortion and instability caused by load variations. Secondly, as the iterations progressed, the output voltage waveform, which was initially characterized by significant fluctuations, gradually converged to a smoother, more ideal sine wave-like periodic waveform. The voltage ripple rate decreased significantly, and the optimal voltage curve that emerged in later iterations exhibited higher peak values and the best stability. This demonstrates the neural network’s ability to effectively suppress voltage ripple, improve voltage utilization and enhance output quality under varying load conditions after several iterations, thus significantly improving the converter’s voltage output performance in dynamic load scenarios.
Following multiple iterations of neural network optimization, both the inductive current and the output voltage exhibited significant stabilization and regularity. After 17 iterations (see Figure 9), the inductive current exhibited a characteristic periodic ‘square-wave’ pattern: it rapidly rose to a peak value of approximately 8 A within each cycle, maintained a stable interval, then decreased symmetrically to a trough value of approximately −8 A, repeating this process. Throughout the entire time range of 0–500 μs, the peak and trough values and cycle durations remained highly consistent, indicating that the inductive current had converged to a regular, stable operating state. After 18 iterations (Figure 10), the output voltage exhibited near-sinusoidal periodic fluctuations, slowly rising from 0 V to a peak of approximately 3.8 V, then dropping to a trough of about 0.8 V before rising again. The peak-to-trough values and cycle durations across multiple cycles were also highly consistent, demonstrating that the output voltage waveform was smooth with controllable fluctuations and good periodicity.
In summary, the advantages of the neural network iterations are primarily reflected in the transformation of the inductive current and output voltage waveforms from ‘potentially large fluctuations’ to ‘stable amplitude and consistent rhythm’ periodic waveforms. This significantly improves the stability of both the current and the voltage. Second, the system quickly converged to an ideal operating point under the given conditions, ensuring smooth, controllable energy transfer and enhancing the converter’s overall output quality and operational reliability.
Through neural network optimization, the convergence trend of the phase shift angle (Figure 11) and the efficiency improvement curve (Figure 12) demonstrate the significant impact of this optimization on the two parameters. In Figure 11, the circles indicate the step-by-step reduction of the phase-shift angle after each iteration; the more circles, the greater the reduction. As the number of iterations increased, the phase shift angle gradually converged from an initial range of 0.26 to 0.3 to a range of 0.1 to 0.12. This indicates that the neural network optimized and adjusted the phase shift angle, thereby reducing the system’s energy loss and improving energy transfer efficiency. Regarding efficiency improvement, there was initially little change; however, from the third iteration onwards, efficiency increased rapidly from 0.1% to 1.9%, showing that neural network optimization significantly enhanced the system’s overall performance in the later stages. Convergence of the phase shift angle led to more precise energy transfer, reducing power losses and unnecessary current fluctuations within the system and directly contributing to efficiency improvement. As the phase shift angle was continuously optimized, current fluctuations and peak values gradually diminished, stabilizing energy transfer and effectively improving system efficiency. During the iterative optimization process, the convergence of the phase shift angle enhanced the circuit’s stability and directly facilitated efficiency improvements. Therefore, neural network optimization can be seen to achieve higher conversion efficiency by precisely controlling the phase shift angle, with the two factors forming a mutually reinforcing optimization relationship.
Next, a detailed analysis of the current optimization process was conducted. This focused on changes in peak current values and compared current at different stages of iteration. This revealed the profound impact of neural network optimization on current control. An in-depth examination of these data provided a more comprehensive understanding of the neural network’s role in optimizing current, especially with regard to reducing fluctuations, lowering peak values and improving stability.
Figure 13 shows the gradual decrease in the current peak, which was a direct result of the neural network optimization. Initially, the current peak stabilized at around 16 A. As the iterations progressed, the current peak began to decrease gradually after the third iteration, ultimately dropping to 15.2 A after the fourth iteration. This change indicates not only a reduction in the amplitude of current fluctuations, but also demonstrates the neural network’s fine control over the current during the optimization process. By adjusting the control parameters, the neural network effectively reduces excessive current fluctuations, thereby preventing high peak currents from imposing an excessive load on the system or causing energy wastage. This trend indicates an improvement in the system’s ability to control current fluctuations, stabilizing its response to load changes while reducing thermal management pressure and extending the system’s lifespan.
Figure 14 provides a more detailed analysis of current changes, illustrating how the current waveform and peak values evolve at different stages of iteration. In the early stages (iterations 2–10), the current waveform remained stable and consistent, peaking at around 8 A, which indicates good current output characteristics and stable system operation. During this phase, the periodicity and symmetry of the current waveform remained unaffected, indicating stable current control. As the number of iterations increased, particularly in the mid-stage (iterations 12–20), variations in the current waveform began to emerge, with the current peak tending towards 8 A while the waveform’s smoothness and symmetry improved. This suggests that the neural network optimization began to exhibit more refined adjustment capabilities during this phase, enabling the system to control the current output more precisely and avoid excessive fluctuations and unnecessary load responses. In the later stages (iterations 22–30), the stability of the current waveform improved further, with reduced differences between peak and trough values, and the waveform became smoother and more consistent. This is a particularly crucial change, as it indicates that the neural network effectively optimized the current regulation algorithm, yielding a more stable, more symmetrical current output with smaller fluctuations. It also suggests that after multiple rounds of optimization, the system can precisely control the current waveform, reducing unnecessary energy losses and improving the stability of the current output. The enhanced smoothness and consistency of the current output make the system more stable under dynamic load conditions, enabling more efficient energy transmission and avoiding the thermal losses and efficiency degradation caused by current fluctuations.
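The peak, trough, and spread comparisons discussed for Figures 13 and 14 amount to simple waveform statistics. The sketch below shows how such metrics could be extracted from recorded current samples; the sample values are synthetic, chosen only to mimic the reported early-vs-late trend.

```python
def waveform_stats(samples):
    """Peak, trough and peak-to-peak spread of one recorded waveform."""
    peak, trough = max(samples), min(samples)
    return {"peak": peak, "trough": trough, "spread": peak - trough}

# Synthetic current records: an early iteration overshooting the nominal
# 8 A envelope, and a late iteration that stays within it (illustrative).
early_i = [0, 6, 9.5, 8, 7.5, 0, -6, -9.5, -8, -7.5]
late_i  = [0, 7, 8.0, 8, 7.8, 0, -7, -8.0, -8, -7.8]

for name, rec in [("early", early_i), ("late", late_i)]:
    s = waveform_stats(rec)
    print(f"{name}: peak = {s['peak']} A, spread = {s['spread']} A")
```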
Combining the data from Figure 13 and Figure 14 for analysis reveals that neural network optimization reduces current peak value fluctuation and, more importantly, enhances current stability and controllability through precise adjustments. The connection between the two sets of data is particularly significant. The downward trend in current peak values reflects improved current regulation precision due to neural network optimization. The progression of the current waveform—from stable in the early stages to gradual changes in the mid-stage and finally a smoother waveform in the later stages—further demonstrates that the neural network gradually achieved more efficient and precise current control during optimization. Specifically, as the neural network optimization deepened, the system was able to reduce current peak values and avoid overloading or excessively high currents, thereby optimizing energy transmission efficiency.
Next, a detailed analysis of the voltage aspect was conducted, focusing on changes in the voltage ripple rate and the evolution of the voltage waveform. These demonstrate the significant impact of neural network optimization on voltage control. Analyzing these two datasets provides a deeper understanding of the advantages of neural networks in optimizing voltage output, and of how these changes enhance the system’s overall performance.
Figure 15 shows that the voltage ripple rate improved significantly after multiple iterations. During the initial stage (iterations 1–2), the voltage ripple rate remained stable at around 90%, indicating substantial voltage fluctuations and poor stability. However, between iterations 2 and 3, the voltage ripple rate decreased rapidly, eventually stabilizing at 30%, which showed a significant improvement in voltage stability. This suggests that as neural network optimization progresses, the system can control voltage fluctuations more precisely, reducing voltage ripple and improving the smoothness and quality of the voltage output. The sharp drop in ripple rate during the transition from iterations 2 to 3 reflects the neural network’s successful optimization of the voltage regulation algorithm, resulting in more uniform voltage fluctuations and further enhancing voltage stability.
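The paper does not state its exact ripple-rate formula; assuming the common peak-to-peak-over-mean definition, the reported 90% → 30% trend can be illustrated as follows. The waveforms below are synthetic stand-ins for the early and late iteration outputs.

```python
import math

def ripple_rate(samples):
    """Peak-to-peak ripple relative to the mean -- one common definition;
    the paper's exact formula is not given, so this is an assumption."""
    v_mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / v_mean

# Synthetic waveforms mimicking the reported trend: a strongly fluctuating
# early output vs. a smoother late output (values are illustrative).
t = [k / 100 for k in range(100)]
early = [2.3 + 1.00 * math.sin(2 * math.pi * 5 * x) for x in t]
late  = [2.3 + 0.35 * math.sin(2 * math.pi * 5 * x) for x in t]
print(f"early ripple = {ripple_rate(early):.2f}, late ripple = {ripple_rate(late):.2f}")
```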
Figure 16 illustrates the evolution of the voltage waveform at different stages of the iteration process. During the initial iterations (2–10), the waveform remained consistent, with peak and trough values of approximately 3.8 V and 0.8 V, respectively; the cycle duration and waveform shape were also consistent. This indicates that during this phase, the voltage output characteristics were stable and the circuit was operating smoothly, with no significant changes to the waveform and relatively uniform voltage fluctuations. However, as the iterations progressed into the mid-stage (iterations 12–20), the voltage waveform began to change: the peak gradually approached 3.8 V and the waveform became smoother and more symmetrical, leading to better voltage output stability. In the later stages (iterations 22–30), the voltage waveform showed significant optimization, with the peak reaching 7.5 V. Compared to earlier stages, the amplitude and stability of the waveform were significantly improved, indicating that following optimization, the stability and amplitude of the voltage output were greatly increased and the system’s voltage control capability had been significantly enhanced.
Combining the information from Figure 15 and Figure 16 shows that neural network optimization plays a key role in reducing voltage ripple and improving stability. Figure 15 shows that the reduction in voltage ripple rate reflects a decrease in voltage fluctuations, while Figure 16 shows that the improvement in waveform smoothness and stability significantly enhances the quality of the voltage output. This demonstrates that reducing voltage ripple directly contributes to smoothing the voltage waveform and enhancing stability, thereby improving the system’s power conversion efficiency and reliability.
In the previous analysis, we examined the optimization processes for current and voltage. We focused particularly on the impact of neural network iterations on current fluctuations, peak reduction, and the reduction in the voltage ripple rate. These optimizations enhanced the stability of the current and voltage, and laid the foundation for improving the system’s overall efficiency. Stable current and smooth voltage output ensure efficient energy conversion during the transfer process, minimizing unnecessary energy losses. This provides a robust basis for the subsequent analysis of power characteristics and system efficiency.
Figures 17 and 18 present the changes in power characteristics, normalized errors and efficiency during the iteration process, detailing the key role of neural network optimization in power conversion and enhancing system performance. Analyzing these data provides a clearer understanding of how neural networks optimize power characteristics through continuous iterations, reduce errors and significantly improve system efficiency.
Figure 17 first illustrates the power characteristics during the iteration process, particularly the changes in input and output power. According to the data, the input power remained at around 400 W during iterations 1 to 2, then dropped to approximately 350 W during iterations 2 to 3, and finally decreased further to 250 W after iterations 3 to 4.
This indicates that as the neural network iterates, the system’s input power requirement gradually decreases. The system continually optimizes the energy conversion efficiency in each iteration, reducing unnecessary energy consumption and achieving more efficient energy transfer and processing.
The output power stabilized at around 0.5 W in the early iterations (1–2). However, after the third iteration, it began to increase substantially, reaching approximately 2.5 W by the fourth iteration. This demonstrates that as the neural network optimizes, the system’s energy conversion capability is significantly enhanced and the output power increases substantially, indicating that, following optimization, the system can utilize input power more effectively, thereby improving the circuit’s power transmission capacity. The contrast between the decrease in input power and the increase in output power highlights the significant improvement in energy conversion efficiency brought about by neural network optimization: while input power is gradually reduced, the system is able to increase output power effectively. This demonstrates that the neural network has successfully improved the power conversion efficiency through optimized control strategies, reducing energy losses and achieving more efficient power conversion.
Next, the code stores the mean squared error (MSE) at each iteration of the neural network training process. By dividing the training error at each node by the maximum value of the MSE, a normalized error value is obtained, which scales the error to a range from 0 to 1, facilitating comparison with other performance metrics. The normalized efficiency is calculated by dividing the efficiency value at each iteration by the initial efficiency, providing a relative efficiency value that reflects the improvement in performance during the optimization process.
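The normalization procedure described here can be sketched directly. The MSE and efficiency histories below are illustrative placeholders, not the recorded training data.

```python
mse_history = [0.020, 0.004, 0.0008, 0.0004]  # MSE per iteration (illustrative)
eff_history = [0.001, 0.001, 0.002, 0.019]    # efficiency per iteration (illustrative)

# Normalized error: divide each iteration's MSE by the maximum MSE,
# scaling the error into the [0, 1] range.
norm_error = [e / max(mse_history) for e in mse_history]

# Normalized efficiency: divide by the initial efficiency, giving the
# relative improvement over the starting point.
norm_eff = [e / eff_history[0] for e in eff_history]

print(norm_error)  # first entry is 1.0, since the initial MSE is the maximum here
print(norm_eff)    # first entry is 1.0 by construction
```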
Figure 18 illustrates the changes in normalized error and efficiency throughout the neural network optimization process. According to the data, the normalized error remains at an extremely low level, close to zero, throughout the entire iteration process. This indicates that the error in the system remains consistently stable and low throughout the iterations, demonstrating the neural network’s precise adjustment capability during the optimization process. The system is able to maintain an error close to zero through continuous optimization, reflecting the neural network’s exceptional data-fitting and model-optimization capabilities.
Meanwhile, changes in the normalized efficiency also demonstrate significant improvement during the optimization process. Normalized efficiency remained stable during iterations 1 to 3 and then began to increase gradually after the third iteration. Notably, there was a substantial improvement in efficiency in the fourth iteration. According to the data, the system’s normalized efficiency increases as the number of iterations rises. This indicates that during the optimization process, the neural network reduces errors and effectively enhances the system’s energy utilization efficiency. This increase in efficiency is consistent with the gradual decrease in input power and significant increase in output power, which further validates the critical role of neural network optimization in enhancing energy conversion efficiency.
Combining the content of Figure 17 and Figure 18 shows that neural network optimization achieved significant progress in power conversion efficiency and reduced system errors, improving overall performance. Specifically, the decrease in input power and the increase in output power complement each other, demonstrating that the system achieved more efficient energy conversion through optimized control strategies. Concurrently, the dual enhancement of low error and high efficiency further proves the neural network’s ability to refine adjustments during the optimization process, enabling the system to reach an optimal state in terms of energy conversion and efficiency improvement.